A moment-matching metric for latent variable generative models
Cédric Beaulac
Published in Machine Learning.
Abstract:
It is difficult to assess the quality of a fitted model in unsupervised learning problems. Latent variable models, such as variational autoencoders and Gaussian mixture models, are often trained with likelihood-based approaches. By Goodhart's law, when a metric becomes a target it ceases to be a good metric, and therefore we should not use the likelihood to assess the quality of the fit of these models. The solution we propose is a new metric for model comparison or regularization that relies on moments. The key idea is to study the difference between the data moments and the model moments using a matrix norm, such as the Frobenius norm. We show how to use this new metric for model comparison and then for regularization. We show that our proposed metric is faster to compute and has a smaller variance than the commonly used procedure of drawing samples from the fitted distribution. We conclude this article with a proof of concept for both applications and we discuss future work.
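As a rough illustration of the idea described in the abstract (not the paper's exact procedure), the sketch below compares the first two empirical moments of a dataset against the moments implied by a fitted Gaussian model, using vector and Frobenius norms of the differences. The function name `moment_distance` and the equal weighting of the two terms are assumptions for illustration only.

```python
import numpy as np


def moment_distance(data, model_mean, model_cov):
    """Illustrative moment-matching score: norm of the gap between
    data moments and model moments (first and second moments).

    This is a hypothetical sketch, not the paper's exact metric.
    """
    # Empirical moments of the data
    data_mean = data.mean(axis=0)
    data_second = (data.T @ data) / len(data)  # estimate of E[x x^T]

    # Model moments: for a Gaussian, E[x x^T] = cov + mean mean^T
    model_second = model_cov + np.outer(model_mean, model_mean)

    # Norms of the moment differences (Frobenius norm for the matrix)
    d1 = np.linalg.norm(data_mean - model_mean)
    d2 = np.linalg.norm(data_second - model_second, ord="fro")
    return d1 + d2


rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 1.0], np.eye(2), size=5000)

# A well-specified model should score lower than a misspecified one
good = moment_distance(X, np.array([0.0, 1.0]), np.eye(2))
bad = moment_distance(X, np.array([2.0, -1.0]), 2.0 * np.eye(2))
```

Because the metric only needs moments of the fitted distribution, which are often available in closed form, no sampling from the model is required, which is consistent with the speed and variance advantages the abstract claims.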
The finalized published paper is publicly available here. Download a pre-print here.
Access the GitHub repo here.