Masked Training of Neural Networks with Partial Gradients

conference contribution
posted on 2023-12-14, 12:32 authored by Amirkeivan Mohtashami, Martin Jaggi, Sebastian Stich
State-of-the-art training algorithms for deep learning models are based on stochastic gradient descent (SGD). Recently, many variations have been explored: perturbing parameters for better accuracy (as in Extragradient), limiting SGD updates to a subset of parameters for increased efficiency (as in meProp), or a combination of both (as in Dropout). However, the convergence of these methods is often not studied in theory. We propose a unified theoretical framework to study such SGD variants -- encompassing the aforementioned algorithms and, additionally, a broad variety of methods used for communication-efficient training or model compression. Our insights can be used as a guide to improve the efficiency of such methods and to facilitate generalization to new applications. As an example, we tackle the task of jointly training networks, a version of which (limited to sub-networks) is used to create Slimmable Networks. By training a low-rank Transformer jointly with a standard one, we obtain better performance than when the low-rank model is trained separately.
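The core mechanism the abstract describes — applying the SGD update only to a subset (mask) of the parameters at each step, as in meProp-style partial-gradient training — can be illustrated with a minimal NumPy sketch. This is not the paper's algorithm or experimental setup; the toy quadratic objective, the mask probability, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: f(w) = 0.5 * ||w - w_star||^2, so grad f(w) = w - w_star.
# (Illustrative stand-in for a real loss; not from the paper.)
dim = 10
w_star = rng.normal(size=dim)
w = np.zeros(dim)
lr = 0.1

for step in range(200):
    grad = w - w_star
    # Masked update: each coordinate independently receives the gradient
    # with probability 0.5; the rest are left untouched this step.
    mask = rng.random(dim) < 0.5
    w -= lr * mask * grad

# Despite updating only ~half the coordinates per step, every coordinate
# is updated often enough that w still converges toward w_star.
converged = np.allclose(w, w_star, atol=1e-2)
```

On this strongly convex toy problem the masked iteration still converges, only more slowly (each coordinate is updated on roughly half the steps) — the kind of behavior the paper's unified framework characterizes for general masking schemes.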

History

Preferred Citation

Amirkeivan Mohtashami, Martin Jaggi and Sebastian Stich. Masked Training of Neural Networks with Partial Gradients. In: International Conference on Artificial Intelligence and Statistics (AISTATS). 2022.

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

International Conference on Artificial Intelligence and Statistics (AISTATS)

Legacy Posted Date

2022-04-05

Open Access Type

  • Green

BibTeX

@inproceedings{cispa_all_3600,
  title     = "Masked Training of Neural Networks with Partial Gradients",
  author    = "Mohtashami, Amirkeivan and Jaggi, Martin and Stich, Sebastian U.",
  booktitle = "{International Conference on Artificial Intelligence and Statistics (AISTATS)}",
  year      = "2022",
}
