Zeroth-order (non)-convex stochastic optimization via conditional gradient and gradient updates

K. Balasubramanian, S. Ghadimi. Advances in Neural Information Processing Systems, 2018. proceedings.neurips.cc
Abstract
In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization. Specifically, we propose generalizations of the conditional gradient algorithm that achieve rates similar to those of the standard stochastic gradient algorithm while using only zeroth-order information. Furthermore, under a structural sparsity assumption, we first illustrate an implicit regularization phenomenon in which the standard stochastic gradient algorithm with zeroth-order information adapts to the sparsity of the problem at hand by simply varying the step size. Next, we propose a truncated stochastic gradient algorithm with zeroth-order information whose rate of convergence depends only poly-logarithmically on the dimensionality.
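
The abstract leaves the algorithmic details to the paper; as a rough illustration, the Python sketch below implements the two-point Gaussian-smoothing gradient estimator standard in this line of work and plugs it into both a stochastic-gradient loop and a conditional-gradient (Frank-Wolfe) loop over an l1 ball, the kind of constrained setting where the sparsity discussion applies. This is a minimal sketch, not the paper's exact algorithms: the smoothing parameter nu, the step-size choices, the ball radius, and the toy objective are illustrative assumptions.

import numpy as np

def zo_grad(f, x, nu, rng):
    """Two-point Gaussian-smoothing gradient estimate:
    g = (f(x + nu*u) - f(x)) / nu * u, with u ~ N(0, I_d).
    Only function values of f are used, never its gradient. In the
    stochastic setting, both evaluations would share the same random
    sample; a deterministic toy objective keeps this sketch short."""
    u = rng.standard_normal(x.shape)
    return (f(x + nu * u) - f(x)) / nu * u

def zo_sgd(f, x0, step=0.01, nu=1e-4, iters=2000, seed=0):
    """Stochastic-gradient-style updates driven by zeroth-order estimates."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        x = x - step * zo_grad(f, x, nu, rng)
    return x

def zo_frank_wolfe(f, x0, radius=1.0, nu=1e-4, iters=2000, seed=0):
    """Conditional-gradient (Frank-Wolfe) updates over an l1 ball of the
    given radius, with the zeroth-order estimate in place of the true
    gradient. The linear minimization oracle over the l1 ball returns a
    single signed, scaled coordinate, so iterates stay sparse."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for t in range(iters):
        g = zo_grad(f, x, nu, rng)
        i = int(np.argmax(np.abs(g)))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(g[i])   # LMO: argmin over ||s||_1 <= r of <g, s>
        gamma = 2.0 / (t + 2.0)          # classic Frank-Wolfe step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy check with a deterministic quadratic in 20 dimensions.
f = lambda x: float(np.sum((x - 0.1) ** 2))
print(zo_sgd(f, np.ones(20))[:5])           # approaches 0.1 per coordinate
print(zo_frank_wolfe(f, np.zeros(20))[:5])  # stays inside the l1 ball

Note that the Frank-Wolfe iterate after t steps is a convex combination of the starting point and at most t vertices of the l1 ball, which is why the conditional-gradient route avoids projections entirely and naturally produces sparse iterates.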