Isolating sources of disentanglement in variational autoencoders

RTQ Chen, X Li, RB Grosse… - Advances in Neural Information Processing Systems, 2018 - proceedings.neurips.cc
Abstract
We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate the beta-TCVAE (Total Correlation Variational Autoencoder) algorithm, a refinement and plug-in replacement of the beta-VAE for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement, when the model is trained using our framework.
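The decomposition the abstract refers to splits the averaged KL term of the ELBO into three parts. A sketch of that split is given below, written from the paper's stated result with q(z, n) denoting the joint distribution of the latent code z and a uniformly sampled data index n; the exact notation is reconstructed here and should be checked against the paper:

\mathbb{E}_{p(n)}\!\left[\mathrm{KL}\!\left(q(z \mid n)\,\|\,p(z)\right)\right]
  = \underbrace{\mathrm{KL}\!\left(q(z, n)\,\|\,q(z)\,p(n)\right)}_{\text{index-code mutual information}}
  + \underbrace{\mathrm{KL}\!\Big(q(z)\,\Big\|\,\prod_{j} q(z_j)\Big)}_{\text{total correlation}}
  + \underbrace{\sum_{j}\mathrm{KL}\!\left(q(z_j)\,\|\,p(z_j)\right)}_{\text{dimension-wise KL}}

The beta-TCVAE objective up-weights only the middle (total correlation) term by the factor beta, which is how it refines the beta-VAE without introducing hyperparameters beyond the one beta-VAE already has.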
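The abstract names the mutual information gap (MIG) but the listing does not spell out how it is computed. As a reference point, here is a minimal NumPy sketch of the kind of estimator the metric calls for, assuming discretized latent codes and discrete ground-truth factors: for each factor, take the gap between the largest and second-largest empirical mutual information across latents, normalize by the factor's entropy, and average over factors. All names are illustrative; this is not the authors' implementation.

import numpy as np

def discrete_mutual_information(a, b):
    """Empirical mutual information (nats) between two discrete 1-D arrays."""
    _, ai = np.unique(a, return_inverse=True)
    _, bi = np.unique(b, return_inverse=True)
    joint = np.zeros((ai.max() + 1, bi.max() + 1))
    np.add.at(joint, (ai, bi), 1.0)            # joint histogram of (a, b)
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)      # marginal of a
    pb = joint.sum(axis=0, keepdims=True)      # marginal of b
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])))

def discrete_entropy(a):
    """Empirical entropy (nats) of a discrete 1-D array."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def mutual_information_gap(codes, factors):
    """MIG: mean over factors of the normalized gap between the two latents
    most informative about that factor (assumes at least two latents).

    codes:   (n_samples, n_latents) discretized latent codes
             (e.g. binned posterior means of q(z|x)).
    factors: (n_samples, n_factors) discrete ground-truth factors.
    """
    gaps = []
    for k in range(factors.shape[1]):
        mi = np.array([discrete_mutual_information(codes[:, j], factors[:, k])
                       for j in range(codes.shape[1])])
        top, runner_up = np.sort(mi)[::-1][:2]
        h = discrete_entropy(factors[:, k])
        if h > 0:
            gaps.append((top - runner_up) / h)
    return float(np.mean(gaps))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    factors = rng.integers(0, 5, size=(2000, 2))
    # One latent copies factor 0, one copies factor 1, one is pure noise.
    codes = np.stack([factors[:, 0], factors[:, 1],
                      rng.integers(0, 5, size=2000)], axis=1)
    print(round(mutual_information_gap(codes, factors), 3))  # close to 1.0

In this toy usage, each factor is captured by exactly one latent, so the normalized gap is near its maximum of 1; an entangled representation, where several latents share information about the same factor, would drive the gap toward 0.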