
Tailoring Language Generation Models under Total Variation Distance

Authors:
Ji, Haozhe
Ke, Pei
Hu, Zhipeng
Zhang, Rongsheng
Huang, Minlie
Source:
International Conference on Learning Representations (ICLR 2023)
Publication Year:
2023

Abstract

The standard paradigm of neural language generation adopts maximum likelihood estimation (MLE) as the optimization method. From a distributional view, MLE in fact minimizes the Kullback-Leibler divergence (KLD) between the distribution of the real data and that of the model. However, this approach forces the model to distribute non-zero (sometimes large) probability mass to all training samples regardless of their quality. Moreover, in attempting to cover the low-probability regions of the data distribution, the model systematically overestimates the probability of corrupted text sequences, which we conjecture is one of the main reasons for text degeneration during autoregressive decoding. To remedy this problem, we leverage the total variation distance (TVD), which is robust to outliers, and develop practical bounds to apply it to language generation. We then introduce the TaiLr objective, which balances the tradeoff in estimating TVD. Intuitively, TaiLr downweights real data samples that have low model probabilities, with tunable penalization intensity. Experimental results show that our method alleviates the overestimation of degenerated sequences without sacrificing diversity and improves generation quality on a wide range of text generation tasks.

Comment: Published in ICLR 2023 (notable-top-5%)
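The abstract describes TaiLr as downweighting reference tokens to which the model assigns low probability, with a tunable penalization intensity. The following is a minimal PyTorch sketch of that downweighting idea only: the smoothed weight p / (gamma + (1 - gamma) * p) and the hyperparameter name gamma are illustrative assumptions made here for concreteness, not necessarily the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def tailr_style_loss(logits, targets, gamma=0.1, ignore_index=-100):
    """Token-level weighted NLL illustrating the downweighting idea.

    Each token's negative log-likelihood is scaled by a weight that shrinks
    when the model assigns low probability to the reference token; gamma
    tunes the penalization intensity (gamma -> 0 recovers plain MLE in this
    sketch). The weight is detached so it rescales, rather than redirects,
    the gradient. The exact TaiLr formulation should be taken from the paper.
    """
    log_probs = F.log_softmax(logits, dim=-1)              # (batch, time, vocab)
    tgt = targets.clamp_min(0).unsqueeze(-1)               # guard ignore_index for gather
    tok_logp = log_probs.gather(-1, tgt).squeeze(-1)       # (batch, time)
    p = tok_logp.exp().detach()
    # Assumed smoothed weight: small for low-probability reference tokens.
    weight = p / (gamma + (1.0 - gamma) * p)
    nll = -weight * tok_logp
    mask = (targets != ignore_index).float()
    return (nll * mask).sum() / mask.sum().clamp_min(1.0)
```

As a usage sketch, `tailr_style_loss(model_logits, reference_ids, gamma=0.1)` would drop into a standard autoregressive training loop in place of cross-entropy; larger values of gamma push the weight closer to the token probability itself, penalizing low-probability references more strongly, while gamma near zero leaves the loss close to MLE.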

Details

Database:
arXiv
Journal:
International Conference on Learning Representations (ICLR 2023)
Publication Type:
Report
Accession number:
edsarx.2302.13344
Document Type:
Working Paper