Meta-learning with latent embedding optimization

AA Rusu, D Rao, J Sygnowski, O Vinyals… - arXiv preprint arXiv:1807.05960, 2018 - arxiv.org
Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems. However, they have practical difficulties when operating on high-dimensional parameter spaces in extreme low-data regimes. We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space. The resulting approach, latent embedding optimization (LEO), decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks. Further analysis indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space.
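To make the mechanism in the abstract concrete, here is a minimal sketch (not the authors' released implementation) of the adaptation step it describes: an encoder produces a low-dimensional latent code from the support set, a decoder maps that code to high-dimensional classifier weights, and the inner-loop gradient steps are taken on the latent code rather than on the weights themselves. All names, dimensions, and the single-linear-layer encoder and decoder are illustrative assumptions; the paper's encoder additionally uses a relation network and a stochastic generative latent with a KL regularizer, omitted here for brevity.

import torch
import torch.nn as nn

# Illustrative sizes for a 5-way few-shot task; all values are assumptions.
FEAT_DIM, LATENT_DIM, N_WAY = 640, 64, 5

class LEOSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: per-class support embedding -> low-dimensional latent code.
        self.encoder = nn.Linear(FEAT_DIM, LATENT_DIM)
        # Decoder: latent code -> high-dimensional classifier weights.
        self.decoder = nn.Linear(LATENT_DIM, FEAT_DIM)

    def adapt(self, support_x, support_y, inner_steps=5, inner_lr=1.0):
        # One latent code per class, encoded from class-mean support features.
        class_means = torch.stack(
            [support_x[support_y == c].mean(dim=0) for c in range(N_WAY)]
        )
        z = self.encoder(class_means)                 # (N_WAY, LATENT_DIM)
        for _ in range(inner_steps):
            w = self.decoder(z)                       # decode classifier weights
            loss = nn.functional.cross_entropy(support_x @ w.t(), support_y)
            # Key decoupling step: differentiate w.r.t. the latent code only,
            # so adaptation happens in latent space, not in parameter space.
            (grad,) = torch.autograd.grad(loss, z, create_graph=True)
            z = z - inner_lr * grad
        return self.decoder(z)                        # task-adapted weights

In a full meta-training loop, the decoded weights would classify the task's query set, and the query loss would be backpropagated through the inner steps into the encoder and decoder parameters in the usual MAML-style outer loop. Only the much smaller latent code is ever adapted per task, which is the abstract's claimed remedy for running gradient-based meta-learning over high-dimensional parameter spaces in extreme low-data regimes.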