Toward Multimodal Model-Agnostic Meta-Learning

R Vuorio, SH Sun, H Hu, JJ Lim - arXiv preprint arXiv:1812.07172, 2018 - arxiv.org
Gradient-based meta-learners such as MAML are able to learn a meta-prior from similar tasks to adapt to novel tasks from the same distribution with few gradient updates. One important limitation of such frameworks is that they seek a common initialization shared across the entire task distribution, substantially limiting the diversity of the task distributions that they are able to learn from. In this paper, we augment MAML with the capability to identify tasks sampled from a multimodal task distribution and adapt quickly through gradient updates. Specifically, we propose a multimodal MAML algorithm that is able to modulate its meta-learned prior according to the identified task, allowing faster adaptation. We evaluate the proposed model on a diverse set of problems including regression, few-shot image classification, and reinforcement learning. The results demonstrate the effectiveness of our model in modulating the meta-learned prior in response to the characteristics of tasks sampled from a multimodal distribution.
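The modulate-then-adapt idea is concrete enough to sketch. Below is a minimal, hypothetical PyTorch sketch of the procedure the abstract describes: a task encoder embeds the support set and emits scale/shift vectors that modulate the meta-learned prior (FiLM-style modulation is one plausible choice; the abstract does not commit to a specific mechanism), and MAML-style inner-loop gradient steps then adapt the modulated network. All names (`ModulationNet`, `TaskNet`, `adapt`) and hyperparameters are illustrative assumptions, not the authors' reference implementation; `torch.func.functional_call` requires PyTorch 2.x.

```python
# Hypothetical sketch of task-aware modulation + MAML-style adaptation.
# Assumes FiLM-style (scale/shift) modulation; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulationNet(nn.Module):
    """Embeds a task from its support inputs and emits scale/shift vectors.
    (A fuller version would also encode the support labels.)"""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.to_tau = nn.Linear(hidden_dim, 2 * hidden_dim)  # scale and shift

    def forward(self, support_x):
        z = self.encoder(support_x).mean(dim=0)       # aggregate over support set
        scale, shift = self.to_tau(z).chunk(2, dim=-1)
        return scale, shift

class TaskNet(nn.Module):
    """The task network whose meta-learned prior gets modulated."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.l1 = nn.Linear(in_dim, hidden_dim)
        self.l2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, scale, shift):
        h = F.relu(self.l1(x) * scale + shift)        # modulate the hidden layer
        return self.l2(h)

def adapt(task_net, mod_net, support_x, support_y, inner_lr=0.01, steps=1):
    """Identify the task mode, modulate, then take MAML-style gradient steps."""
    scale, shift = mod_net(support_x)
    params = dict(task_net.named_parameters())
    for _ in range(steps):
        pred = torch.func.functional_call(task_net, params,
                                          (support_x, scale, shift))
        loss = F.mse_loss(pred, support_y)
        grads = torch.autograd.grad(loss, list(params.values()),
                                    create_graph=True)  # keep graph for outer loop
        params = {n: p - inner_lr * g
                  for (n, p), g in zip(params.items(), grads)}
    return params, (scale, shift)

# Usage (regression, random data for shape illustration only):
mod_net = ModulationNet(in_dim=1, hidden_dim=32)
task_net = TaskNet(in_dim=1, hidden_dim=32, out_dim=1)
sx, sy = torch.randn(5, 1), torch.randn(5, 1)
fast_params, tau = adapt(task_net, mod_net, sx, sy)
query_pred = torch.func.functional_call(task_net, fast_params,
                                        (torch.randn(10, 1), *tau))
```

The design point this illustrates: modulation gives each identified task mode its own effective starting point before any gradient step, so a multimodal task distribution need not be covered by one shared initialization.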