The stochastic proximal gradient method is a powerful generalization of the
widely used stochastic gradient descent (SGD) method and has found numerous
applications in Machine Learning. However, it is well known that this
method can fail to converge in non-convex settings where the stochastic noise is
significant (i.e., when only small or bounded batch sizes are used). In this
paper, we focus on the stochastic proximal gradient method with Polyak
momentum. We prove that this method attains an optimal convergence rate for
non-convex composite optimization problems, regardless of the batch size.
Additionally, we rigorously analyze the variance-reduction effect of Polyak
momentum in the composite optimization setting, and we show that the method also
converges when the proximal step can only be solved inexactly. Finally, we
provide numerical experiments to validate our theoretical results.
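For illustration, below is a minimal sketch of one common form of this update: Polyak momentum applied as an exponential average of stochastic gradients, followed by a proximal step on the nonsmooth part. The step size, momentum parameter, batch size, and the toy l1-regularized least-squares problem are illustrative assumptions, not the paper's exact algorithm or setup.

```python
# Hypothetical sketch: stochastic proximal gradient with Polyak momentum
# on a toy l1-regularized least-squares problem (assumed setup).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)
lam = 0.1      # l1 regularization weight (assumed)
eta = 0.01     # step size (assumed)
beta = 0.1     # momentum parameter (assumed)
batch = 8      # small batch size, the regime the abstract highlights

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (closed form)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(50)
d = np.zeros(50)  # momentum-averaged stochastic gradient estimator
for t in range(2000):
    idx = rng.choice(200, size=batch, replace=False)
    # Stochastic gradient of the smooth part on a small batch.
    g = A[idx].T @ (A[idx] @ x - b[idx]) / batch
    # Polyak momentum: exponential averaging of gradient estimates.
    d = (1 - beta) * d + beta * g
    # Proximal step handling the nonsmooth l1 term.
    x = soft_threshold(x - eta * d, eta * lam)
```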
Primary Research Area
Trustworthy Information Processing
Name of Conference
International Conference on Machine Learning (ICML)
Open Access Type
Green
BibTeX
@conference{Gao:Rodomanov:Stich:2024,
title = "Non-Convex Stochastic Composite Optimization with Polyak Momentum",
author = "Gao, Yuan and Rodomanov, Anton and Stich, Sebastian U.",
booktitle = "International Conference on Machine Learning (ICML)",
year = 2024,
month = 7,
doi = "10.48550/arXiv.2403.02967"
}