Dropout training as adaptive regularization

S Wager, S Wang, PS Liang - Advances in Neural Information Processing Systems, 2013 - proceedings.neurips.cc
Abstract
Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an $L_2$ regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix. We also establish a connection to AdaGrad, an online learner, and find that a close relative of AdaGrad operates by repeatedly solving linear dropout-regularized problems. By casting dropout as regularization, we develop a natural semi-supervised algorithm that uses unlabeled data to create a better adaptive regularizer. We apply this idea to document classification tasks, and show that it consistently boosts the performance of dropout training, improving on state-of-the-art results on the IMDB reviews dataset.
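
As a sketch of the stated equivalence (the notation below is an assumption, not given in the abstract: dropout rate $\delta$, feature vectors $x_i$ with coordinates $x_{ij}$, and GLM log-partition function $A$), a quadratic expansion of the expected loss under feature noising yields a penalty of roughly the form

$$R^q(\beta) \;\approx\; \frac{\delta}{2(1-\delta)} \sum_j \beta_j^2 \sum_i A''(x_i \cdot \beta)\, x_{ij}^2,$$

where the inner sum estimates the $j$-th diagonal entry of the Fisher information. This is an $L_2$ penalty on each coordinate weighted by that estimate, or equivalently an ordinary $L_2$ penalty in coordinates where the features have been rescaled by the inverse diagonal Fisher information, matching the claim above.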