Opportunity cost in Bayesian optimization

J Snoek, H Larochelle, R Adams - NIPS Workshop on Bayesian Optimization, Sequential Experimental Design, and …, 2011 - cs.ubc.ca
Abstract
A major advantage of Bayesian optimization is that it generally requires fewer function evaluations than optimization methods that do not exploit the intrinsic uncertainty associated with the task. The ability to perform well with few evaluations makes the Bayesian approach to optimization particularly compelling when the target function is expensive to evaluate. The notion of expense, however, depends on the problem and may even vary across the search space. For example, we may be under a time deadline while the experiments we wish to run have varying duration, as when training neural networks or tuning the hyperparameters of support vector machines. In this paper we develop a new approach to selecting experiments in this setting that builds in information about the opportunity cost of some experiments over others. Specifically, we consider Bayesian optimization where 1) resources are limited, 2) function evaluations vary in resource cost across the search space, and 3) the costs are unknown and must be learned.
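
To make the setting concrete, the sketch below shows one natural instantiation of cost-aware Bayesian optimization, not necessarily the method developed in the paper: a 1-D toy objective and a location-dependent evaluation cost are each modeled with a Gaussian process (here via scikit-learn), and candidate points are ranked by expected improvement divided by the predicted cost until a resource budget is exhausted. The toy objective, cost function, budget, and the expected-improvement-per-cost rule are all illustrative assumptions rather than details taken from the paper.

# Minimal sketch (assumptions noted above, not the authors' exact algorithm):
# both the objective and the unknown, location-dependent evaluation cost are
# modeled with Gaussian processes, and candidates are ranked by expected
# improvement per unit of predicted cost under a fixed resource budget.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def objective(x):            # toy target function (to be minimized)
    return np.sin(3 * x) + 0.5 * x

def eval_cost(x):            # toy, location-dependent cost (e.g. runtime)
    return 1.0 + 4.0 * x     # experiments get more expensive as x grows

budget = 30.0                # total resource (e.g. seconds) available
X = rng.uniform(0, 2, size=(3, 1))           # small initial design
y = np.array([objective(x[0]) for x in X])
c = np.array([eval_cost(x[0]) for x in X])
spent = c.sum()

gp_f = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp_c = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

while spent < budget:
    gp_f.fit(X, y)
    gp_c.fit(X, np.log(c))                   # model log-cost; costs are positive

    cand = np.linspace(0, 2, 200).reshape(-1, 1)
    mu, sd = gp_f.predict(cand, return_std=True)
    best = y.min()

    # Standard expected improvement for minimization.
    sd = np.maximum(sd, 1e-9)
    z = (best - mu) / sd
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

    # Divide by the *predicted* cost: improvement per unit of resource.
    pred_cost = np.exp(gp_c.predict(cand))
    x_next = cand[np.argmax(ei / pred_cost)]

    y_next, c_next = objective(x_next[0]), eval_cost(x_next[0])
    X = np.vstack([X, [x_next]])
    y = np.append(y, y_next)
    c = np.append(c, c_next)
    spent += c_next

print(f"best value {y.min():.3f} found with {spent:.1f}/{budget} units spent")

Dividing the acquisition value by a learned cost model is what lets the search prefer cheap but informative experiments while the budget lasts, which is one way to reflect the opportunity-cost idea described in the abstract.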