
Why Random Pruning Is All We Need to Start Sparse

conference contribution
posted on 2024-03-05, 12:16 authored by Advait Gadhikar, Sohom Mukherjee, Rebekka Burkholz
Random masks define surprisingly effective sparse neural network models, as has been shown empirically. The resulting sparse networks can often compete with dense architectures and state-of-the-art lottery ticket pruning algorithms, even though they do not rely on computationally expensive prune-train iterations and can be drawn at initialization without significant computational overhead. We offer a theoretical explanation of how random masks can approximate arbitrary target networks if they are wider by a logarithmic factor in the inverse sparsity, 1/log(1/sparsity). This overparameterization factor is necessary at least for 3-layer random networks, which elucidates the observed degrading performance of random networks at higher sparsity. At moderate to high sparsity levels, however, our results imply that sparser networks are contained within random source networks, so that any dense-to-sparse training scheme can be turned into a computationally more efficient sparse-to-sparse one by constraining the search to a fixed random mask. We demonstrate the feasibility of this approach in experiments with different pruning methods and propose particularly effective choices of initial layer-wise sparsity ratios for the random source network. As a special case, we show theoretically and experimentally that random source networks also contain strong lottery tickets. Our code is available at https://github.com/RelationalML/sparse_to_sparse.
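To make the sparse-to-sparse idea above concrete, the following is a minimal sketch, assuming PyTorch; it is not the authors' implementation (see the linked repository for that). A random mask is drawn once per layer at initialization and kept fixed, so gradient updates only ever touch the surviving weights. The layer sizes and per-layer sparsity ratios below are hypothetical placeholders, not the ratios proposed in the paper.

    import torch
    import torch.nn as nn

    def random_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
        # Zero out a uniformly random `sparsity` fraction of the entries.
        num_zeros = int(sparsity * weight.numel())
        perm = torch.randperm(weight.numel())
        mask = torch.ones(weight.numel())
        mask[perm[:num_zeros]] = 0.0
        return mask.view_as(weight)

    class MaskedLinear(nn.Linear):
        # Linear layer whose weight is multiplied by a fixed random 0/1 mask.
        def __init__(self, in_features: int, out_features: int, sparsity: float):
            super().__init__(in_features, out_features)
            # The mask is drawn once and never updated. Because it multiplies
            # the weight, masked entries receive zero gradient, so training is
            # confined to the random sparse support.
            self.register_buffer("mask", random_mask(self.weight, sparsity))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return nn.functional.linear(x, self.weight * self.mask, self.bias)

    # Hypothetical layer-wise sparsity ratios, for illustration only.
    model = nn.Sequential(
        MaskedLinear(784, 512, sparsity=0.8),
        nn.ReLU(),
        MaskedLinear(512, 10, sparsity=0.5),
    )

Any dense-to-sparse pruning or training scheme can then be run inside this fixed support, which is the sense in which the abstract describes turning dense-to-sparse training into a more efficient sparse-to-sparse one.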

History

Preferred Citation

Advait Gadhikar, Sohom Mukherjee, Rebekka Burkholz. Why Random Pruning Is All We Need to Start Sparse. In: International Conference on Machine Learning. 2023.

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

International Conference on Machine Learning (ICML)

Legacy Posted Date

2023-09-06

Open Access Type

  • Repository

BibTeX

@inproceedings{cispa_all_4026,
  author    = {Advait Gadhikar and Sohom Mukherjee and Rebekka Burkholz},
  title     = {Why Random Pruning Is All We Need to Start Sparse},
  booktitle = {International Conference on Machine Learning},
  year      = {2023}
}
