
A Universal Variance Reduction-Based Catalyst for Nonconvex Low-Rank Matrix Recovery

conference contribution
posted on 2024-10-15, 13:10 authored by Lingxiao Wang, Xiao Zhang, Quanquan Gu
We propose a generic framework, based on a new stochastic variance-reduced gradient descent algorithm, for accelerating nonconvex low-rank matrix recovery. Starting from an appropriate initial estimator, the proposed algorithm performs projected gradient descent using a novel semi-stochastic gradient specifically designed for low-rank matrix recovery. Under mild restricted strong convexity and smoothness conditions, we derive a projected notion of the restricted Lipschitz continuous gradient property and prove that our algorithm enjoys a linear convergence rate to the unknown low-rank matrix with improved computational complexity. Moreover, the algorithm applies to both noiseless and noisy observations, attaining the optimal sample complexity and the minimax optimal statistical rate, respectively. We further illustrate the superiority of our generic framework through several specific examples, both theoretically and experimentally.
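
As a rough illustration of the kind of update the abstract describes, below is a minimal Python sketch of SVRG-style projected gradient descent for noiseless matrix sensing (recovering a rank-r matrix X* from linear measurements y_i = <A_i, X*>). This is not the authors' exact algorithm: the spectral initialization, step size, and epoch lengths are illustrative assumptions, and the paper's semi-stochastic gradient is specifically designed for low-rank recovery rather than the textbook SVRG correction used here.

import numpy as np

def project_rank_r(X, r):
    """Project X onto the set of matrices of rank at most r via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def svrg_matrix_recovery(A, y, r, eta=0.5, epochs=20, inner_iters=None, rng=None):
    """Minimize (1/2n) * sum_i (<A_i, X> - y_i)^2 over rank-r matrices X.

    A: measurement operators, shape (n, d1, d2); y: observations, shape (n,).
    All hyperparameter defaults are illustrative, not values from the paper.
    """
    rng = rng or np.random.default_rng(0)
    n, d1, d2 = A.shape
    inner_iters = inner_iters or n
    # Spectral initialization: one common choice of "appropriate initial estimator".
    X = project_rank_r(np.einsum('i,ijk->jk', y, A) / n, r)
    for _ in range(epochs):
        X_ref = X.copy()
        # Full gradient at the reference point, computed once per epoch.
        resid_ref = np.einsum('ijk,jk->i', A, X_ref) - y
        full_grad = np.einsum('i,ijk->jk', resid_ref, A) / n
        for _ in range(inner_iters):
            i = rng.integers(n)
            # Semi-stochastic gradient: the stochastic gradient at X, recentered
            # by the same component's gradient at X_ref plus the full gradient.
            g = np.sum(A[i] * (X - X_ref)) * A[i] + full_grad
            # Projected gradient step back onto the rank-r set.
            X = project_rank_r(X - eta * g, r)
    return X

The variance-reduction structure is what underlies the improved computational complexity the abstract refers to: the full gradient is recomputed only once per epoch, while each cheap inner step touches a single measurement yet has gradient variance that shrinks as X approaches the reference point X_ref.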

History

Primary Research Area

  • Trustworthy Information Processing

Name of Conference

International Conference on Machine Learning (ICML)

CISPA Affiliation

  • No

BibTeX

@inproceedings{Wang:Zhang:Gu:2017,
  title     = "A Universal Variance Reduction-Based Catalyst for Nonconvex Low-Rank Matrix Recovery",
  author    = "Wang, Lingxiao and Zhang, Xiao and Gu, Quanquan",
  booktitle = "International Conference on Machine Learning (ICML)",
  year      = 2017,
  month     = 8
}
