Efficient optimization of loops and limits with randomized telescoping sums

A Beatson, RP Adams - International Conference on Machine Learning, 2019 - proceedings.mlr.press
Abstract
We consider optimization problems in which the objective requires an inner loop with many steps or is the limit of a sequence of increasingly costly approximations. Meta-learning, training recurrent neural networks, and optimization of the solutions to differential equations are all examples of optimization problems with this character. In such problems, it can be expensive to compute the objective function value and its gradient, but truncating the loop or using less accurate approximations can induce biases that damage the overall solution. We propose randomized telescope (RT) gradient estimators, which represent the objective as the sum of a telescoping series and sample linear combinations of terms to provide cheap unbiased gradient estimates. We identify conditions under which RT estimators achieve optimization convergence rates independent of the length of the loop or the required accuracy of the approximation. We also derive a method for tuning RT estimators online to maximize a lower bound on the expected decrease in loss per unit of computation. We evaluate our adaptive RT estimators on a range of applications including meta-optimization of learning rates, variational inference of ODE parameters, and training an LSTM to model long sequences.
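For concreteness, here is a minimal sketch (our own illustration, not the paper's code) of the single-sample form of such an estimator. Write the full-loop quantity as a telescoping sum of differences Δ_n = G_n − G_{n−1}, sample a single term index N from a distribution q with full support, and reweight that term by 1/q(N); inverse-probability weighting makes the one-term estimate unbiased for the whole sum. The function names, the choice of q, and the toy scalar series are ours; in the paper's setting Δ_n would be a difference of gradient estimates rather than a scalar.

```python
import numpy as np

rng = np.random.default_rng(0)

def rt_single_sample(delta, q):
    """One single-sample randomized-telescope draw.

    delta(n) returns the n-th telescoping difference G_n - G_{n-1},
    so sum_n delta(n) telescopes to the full-loop quantity.
    Sampling N ~ q and dividing by q(N) keeps the estimate unbiased:
    E[delta(N)/q(N)] = sum_n delta(n).
    """
    n = rng.choice(len(q), p=q)     # draw a term index (0-based)
    return delta(n + 1) / q[n]      # inverse-probability weighting

# Toy check of unbiasedness: estimate sum_{n=1}^{5} 1/n^2 one term at a time.
q = np.array([0.4, 0.25, 0.15, 0.1, 0.1])   # any full-support q works
delta = lambda n: 1.0 / n**2
draws = [rt_single_sample(delta, q) for _ in range(100_000)]
print(np.mean(draws))                         # ~1.4636
print(sum(delta(n) for n in range(1, 6)))     # exact value, ~1.4636
```

Note that q controls the cost/variance trade-off: weighting cheap early terms more heavily reduces average compute per draw but inflates variance on the rare expensive terms, which is why the paper tunes the estimator online.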