A note on the evaluation of generative models

L. Theis, A. van den Oord, M. Bethge - arXiv preprint arXiv:1511.01844, 2015 - arxiv.org
Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria (average log-likelihood, Parzen window estimates, and visual fidelity of samples) are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted, and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.
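Since the abstract singles out Parzen window estimates, a minimal sketch of how such an estimate is typically computed may help make the critique concrete: samples are drawn from the model, a kernel density estimate is built on them, and held-out data is scored under that estimate. The function name, Gaussian kernel, bandwidth choice, and array shapes below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.special import logsumexp

def parzen_log_likelihood(test_points, model_samples, sigma):
    """Average log-likelihood of test_points under a Parzen (kernel
    density) estimate built from model_samples, using an isotropic
    Gaussian kernel with bandwidth sigma.

    test_points:   (M, D) array of held-out data points
    model_samples: (N, D) array of samples drawn from the model
    """
    M, D = test_points.shape
    N = model_samples.shape[0]

    # Pairwise squared distances between test points and model samples,
    # computed via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b to avoid
    # materializing an (M, N, D) difference array.
    sq_dists = (
        np.sum(test_points ** 2, axis=1)[:, None]
        + np.sum(model_samples ** 2, axis=1)[None, :]
        - 2.0 * test_points @ model_samples.T
    )
    sq_dists = np.maximum(sq_dists, 0.0)  # guard against tiny negative values

    # Log of the Gaussian kernel, including its normalizing constant.
    log_kernel = (
        -0.5 * sq_dists / sigma ** 2
        - 0.5 * D * np.log(2.0 * np.pi * sigma ** 2)
    )

    # log p(y) = logsumexp_i log K(y, x_i) - log N, averaged over test points.
    log_probs = logsumexp(log_kernel, axis=1) - np.log(N)
    return np.mean(log_probs)

rng = np.random.default_rng(0)
model_samples = rng.normal(size=(2000, 32))  # stand-in for samples from a trained model
held_out = rng.normal(size=(100, 32))        # stand-in for held-out test data
print(parzen_log_likelihood(held_out, model_samples, sigma=0.5))
```

In practice the bandwidth sigma is usually tuned on a validation set; the values and shapes above are placeholders. Even with careful tuning, estimates of this kind are exactly what the paper argues can diverge badly from the model's true log-likelihood in high dimensions.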