
A Simple yet Effective Training-free Prompt-free Approach to Chinese Spelling Correction Based on Large Language Models

Authors:
Zhou, Houquan
Li, Zhenghua
Zhang, Bo
Li, Chen
Lai, Shaopeng
Zhang, Ji
Huang, Fei
Zhang, Min
Publication Year:
2024

Abstract

This work proposes a simple training-free, prompt-free approach that leverages large language models (LLMs) for the Chinese spelling correction (CSC) task and differs fundamentally from all previous CSC approaches. The key idea is to use an LLM as a pure language model in the conventional manner: the LLM reads the input sentence from the beginning and, at each inference step, produces a distribution over its vocabulary to decide the next token given the partial sentence. To ensure that the output sentence remains faithful to the input, we design a minimal distortion model that exploits pronunciation or shape similarities between the original and replaced characters. Furthermore, we propose two useful reward strategies to address practical challenges specific to the CSC task. Experiments on five public datasets demonstrate that our approach significantly improves the performance of LLMs, enabling them to compete with state-of-the-art domain-general CSC models.

Comment: Accepted at the Main Conference of EMNLP 2024
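The decoding scheme the abstract describes is essentially a noisy-channel search: each candidate output character is scored by the LLM's next-token probability combined with a distortion probability derived from character similarity. The Python sketch below illustrates this idea with a greedy, character-level search. Note that the similarity table, the `lm_log_probs` interface, the `keep_prob` parameter, and the greedy (rather than beam) search are illustrative assumptions, not the paper's exact formulation.

```python
import math

# Toy pronunciation-similarity table (an assumption for illustration);
# the paper derives distortion probabilities from pronunciation/shape
# similarity, but its exact scoring function is not reproduced here.
SIMILAR_CHARS = {
    "侍": {"是": 0.8, "时": 0.6},  # confusable characters with similarity weights
}

def distortion_log_prob(original: str, replacement: str, keep_prob: float = 0.99) -> float:
    """Log-probability that `original` in the input was a typo for `replacement`.
    Keeping the character unchanged receives most of the mass; similar
    characters share the remainder in proportion to their similarity."""
    if original == replacement:
        return math.log(keep_prob)
    candidates = SIMILAR_CHARS.get(original, {})
    if replacement not in candidates:
        return float("-inf")  # forbid replacements with no recorded similarity
    total = sum(candidates.values())
    return math.log((1.0 - keep_prob) * candidates[replacement] / total)

def correct(sentence: str, lm_log_probs) -> str:
    """Greedy noisy-channel decoding: at each position, choose the character
    maximizing LM log-prob (given the corrected prefix) plus distortion
    log-prob. `lm_log_probs(prefix)` is assumed to return a dict
    mapping each candidate next character to its log-probability."""
    output = []
    for ch in sentence:
        scores = lm_log_probs("".join(output))  # next-token distribution
        candidates = [ch] + list(SIMILAR_CHARS.get(ch, {}))
        best = max(
            candidates,
            key=lambda c: scores.get(c, float("-inf")) + distortion_log_prob(ch, c),
        )
        output.append(best)
    return "".join(output)
```

This greedy pass keeps the sketch short; the approach as described would use the LLM's full vocabulary distribution at each step, and a beam-style search over corrected prefixes would be the natural generalization.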

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2410.04027
Document Type:
Working Paper