
HQ-VAE: Hierarchical Discrete Representation Learning with Variational Bayes

Authors:
Takida, Yuhta
Ikemiya, Yukara
Shibuya, Takashi
Shimada, Kazuki
Choi, Woosung
Lai, Chieh-Hsin
Murata, Naoki
Uesaka, Toshimitsu
Uchida, Kengo
Liao, Wei-Hsiang
Mitsufuji, Yuki
Publication Year:
2023

Abstract

Vector quantization (VQ) is a technique for deterministically learning features with discrete codebook representations. It is commonly performed with a variational autoencoding model, VQ-VAE, which can be further extended to hierarchical structures to achieve high-fidelity reconstructions. However, such hierarchical extensions of VQ-VAE often suffer from the codebook/layer collapse issue, in which the codebook is not used efficiently to express the data, degrading reconstruction accuracy. To mitigate this problem, we propose a novel unified framework for stochastically learning hierarchical discrete representations on the basis of the variational Bayes framework, called the hierarchically quantized variational autoencoder (HQ-VAE). HQ-VAE naturally generalizes the hierarchical variants of VQ-VAE, such as VQ-VAE-2 and residual-quantized VAE (RQ-VAE), and provides them with a Bayesian training scheme. Our comprehensive experiments on image datasets show that HQ-VAE enhances codebook usage and improves reconstruction performance. We also validate the applicability of HQ-VAE to a different modality on an audio dataset.

Comment: 34 pages with 17 figures, accepted for TMLR
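The deterministic codebook lookup the abstract refers to, and the residual scheme used by RQ-VAE, can be summarized in a few lines. Below is a minimal NumPy sketch; the two-layer setup, shapes, and all names are illustrative assumptions, not the paper's implementation. Note that HQ-VAE replaces this deterministic nearest-neighbor assignment with a stochastic one trained under a variational Bayes objective, which is what mitigates the collapse issue described above.

```python
# Minimal sketch of deterministic vector quantization (VQ) and its residual
# variant (RQ-VAE-style). Illustrative only; not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

def quantize(z, codebook):
    """Map each row of z to its nearest codebook entry under L2 distance."""
    # z: (n, d) encoder features; codebook: (K, d) code vectors
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, K)
    idx = d2.argmin(axis=1)                                     # code indices
    return codebook[idx], idx

def residual_quantize(z, codebooks):
    """RQ-style scheme: each layer quantizes the residual left by the layers
    above, so the reconstruction is the sum of the selected codes."""
    residual, recon, indices = z.copy(), np.zeros_like(z), []
    for cb in codebooks:
        q, idx = quantize(residual, cb)
        recon += q
        residual -= q
        indices.append(idx)
    return recon, indices

z = rng.normal(size=(8, 16))                                # toy features
codebooks = [rng.normal(size=(32, 16)) for _ in range(2)]   # 2 layers, K=32
recon, indices = residual_quantize(z, codebooks)
print("reconstruction error:", np.linalg.norm(z - recon))
```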

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2401.00365
Document Type:
Working Paper