Diffusion bridges vector quantized Variational AutoEncoders - Centre de mathématiques appliquées (CMAP)
Preprint, Working Paper. Year: 2022


Abstract

Vector Quantised-Variational AutoEncoders (VQ-VAE) are generative models based on discrete latent representations of the data, where inputs are mapped to a finite set of learned embeddings. To generate new samples, an autoregressive prior distribution over the discrete states must be trained separately. This prior is generally very complex and leads to very slow generation. In this work, we propose a new model to train the prior and the encoder/decoder networks simultaneously. We build a diffusion bridge between a continuous coded vector and a non-informative prior distribution. The latent discrete states are then given as random functions of these continuous vectors. We show that our model is competitive with the autoregressive prior on the mini-Imagenet dataset and is very efficient in both optimization and sampling. Our framework also extends the standard VQ-VAE and enables end-to-end training.
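The quantization step the abstract refers to, in which each continuous encoder output is mapped to its nearest vector in a finite set of learned embeddings, is standard in VQ-VAE models. A minimal sketch is given below; the `quantize` helper, array shapes, and codebook size are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def quantize(z_e, codebook):
    """Map each continuous encoder output to its nearest codebook embedding.

    z_e: (n, d) array of continuous encoder outputs.
    codebook: (K, d) array of K learned embeddings.
    Returns the discrete latent codes and the quantized vectors.
    """
    # Squared Euclidean distance between every encoder output and every embedding.
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    codes = dists.argmin(axis=1)   # discrete latent states (indices into the codebook)
    z_q = codebook[codes]          # quantized continuous vectors
    return codes, z_q

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # K = 8 embeddings of dimension 4 (illustrative)
z_e = rng.normal(size=(5, 4))        # 5 encoder outputs
codes, z_q = quantize(z_e, codebook)
print(codes.shape, z_q.shape)        # (5,) (5, 4)
```

In a standard VQ-VAE, a separate autoregressive prior is then fit over the sequence of `codes`; the paper's contribution replaces that stage with a diffusion bridge over the continuous coded vectors, trained jointly with the encoder/decoder.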
Main file: vqvae.pdf (8.48 MB)
Origin: files produced by the author(s)

Dates and versions

hal-03559417 , version 1 (09-02-2022)
hal-03559417 , version 2 (28-07-2022)

Identifiers

Cite

Max Cohen, Guillaume Quispe, Sylvain Le Corff, Charles Ollion, Eric Moulines. Diffusion bridges vector quantized Variational AutoEncoders. 2022. ⟨hal-03559417v1⟩
155 Views
244 Downloads