A Chat About Boring Problems: Studying GPT-based text normalization

Authors:
Zhang, Yang
Bartley, Travis M.
Graterol-Fuenmayor, Mariana
Lavrukhin, Vitaly
Bakhturina, Evelina
Ginsburg, Boris
Publication Year:
2023

Abstract

Text normalization (TN) - the conversion of text from written to spoken form - is traditionally assumed to be an ill-formed task for language models. In this work, we argue otherwise. We empirically show the capacity of Large Language Models (LLMs) for text normalization in few-shot scenarios. Combining self-consistency reasoning with linguistically informed prompt engineering, we find LLM-based text normalization to achieve error rates around 40% lower than those of top normalization systems. Further, through error analysis, we note key limitations in the conventional design of text normalization tasks. We create a new taxonomy of text normalization errors and apply it to results from GPT-3.5-Turbo and GPT-4.0. Through this new framework, we can identify the strengths and weaknesses of GPT-based TN, opening opportunities for future work.

Comment: Accepted to ICASSP 2024
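
The core recipe described in the abstract - few-shot prompting combined with self-consistency voting - can be sketched briefly. The following is a minimal illustration assuming the OpenAI Python SDK; the prompt wording, few-shot examples, model choice, and sampling parameters are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch of self-consistency text normalization with few-shot
# prompting. Assumes the OpenAI Python SDK; the prompt and examples
# below are illustrative, not the paper's exact prompts.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical few-shot pairs: written-form input -> spoken-form output.
FEW_SHOT = [
    ("The meeting is at 3:30 PM.", "The meeting is at three thirty P M."),
    ("It costs $45.", "It costs forty five dollars."),
]

def normalize(sentence: str, n_samples: int = 5) -> str:
    """Sample several candidate normalizations and return the majority answer."""
    messages = [{"role": "system",
                 "content": "Convert the sentence from written to spoken form."}]
    for written, spoken in FEW_SHOT:
        messages.append({"role": "user", "content": written})
        messages.append({"role": "assistant", "content": spoken})
    messages.append({"role": "user", "content": sentence})

    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        n=n_samples,      # draw several candidates...
        temperature=0.7,  # ...with enough randomness for them to diverge
    )
    candidates = [c.message.content.strip() for c in resp.choices]
    # Self-consistency: keep the most frequent candidate as the answer.
    return Counter(candidates).most_common(1)[0][0]

print(normalize("Call me at 555-0192 before 12/25."))
```

The majority vote over several sampled outputs is what distinguishes self-consistency from plain greedy decoding: disagreements among samples tend to occur exactly on the ambiguous spans (dates, numbers, abbreviations) where single-shot normalization errs.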

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2309.13426
Document Type:
Working Paper