
Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation

Authors:
Zhang, Xiaoying
Peng, Baolin
Tian, Ye
Zhou, Jingyan
Jin, Lifeng
Song, Linfeng
Mi, Haitao
Meng, Helen
Source:
ACL2024 Main
Publication Year:
2024

Abstract

Despite showing increasingly human-like abilities, large language models (LLMs) often struggle with factual inaccuracies, i.e., "hallucinations", even when they hold the relevant knowledge. Current approaches to addressing these hallucinations typically require high-quality human factuality annotations. In this work, we explore Self-Alignment for Factuality, in which we leverage the self-evaluation capability of an LLM to provide training signals that steer the model toward factuality. Specifically, we incorporate Self-Eval, a self-evaluation component, to prompt an LLM to validate the factuality of its own generated responses based solely on its internal knowledge. Additionally, we design Self-Knowledge Tuning (SK-Tuning) to augment the LLM's self-evaluation ability by improving the model's confidence estimation and calibration. We then use these self-annotated responses to fine-tune the model via the Direct Preference Optimization (DPO) algorithm. We show that the proposed self-alignment approach substantially enhances factual accuracy for Llama family models across three key knowledge-intensive tasks on TruthfulQA and BioGEN.
Comment: 20 pages
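
The abstract describes a pipeline in which self-evaluation confidence scores are used to build preference data that is then fed to Direct Preference Optimization. The following is a minimal sketch, not taken from the paper, of how such self-evaluated scores could be turned into preference pairs and scored with the standard DPO objective; the function names (build_preference_pair, dpo_loss) and the toy numbers are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): convert self-evaluation
# confidence scores into a DPO preference pair and compute the standard
# DPO loss. Function names and sample values are hypothetical.
import torch
import torch.nn.functional as F


def build_preference_pair(responses, self_eval_confidences):
    """Pick the most / least factually-confident responses as chosen / rejected."""
    ranked = sorted(zip(responses, self_eval_confidences), key=lambda r: r[1])
    rejected, chosen = ranked[0][0], ranked[-1][0]
    return chosen, rejected


def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO objective: -log sigmoid(beta * (policy margin - reference margin))."""
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -F.logsigmoid(logits).mean()


if __name__ == "__main__":
    # Toy self-evaluation scores, e.g. P("True") elicited by a Self-Eval style prompt.
    responses = ["answer A", "answer B", "answer C"]
    confidences = [0.91, 0.34, 0.62]
    chosen, rejected = build_preference_pair(responses, confidences)
    print("chosen:", chosen, "| rejected:", rejected)

    # Toy sequence log-probabilities under the policy and the frozen reference model.
    loss = dpo_loss(torch.tensor([-12.3]), torch.tensor([-15.8]),
                    torch.tensor([-13.0]), torch.tensor([-15.0]))
    print("DPO loss:", loss.item())
```

In this sketch the self-evaluation score plays the role that a human factuality annotation would play in standard preference tuning; the paper's SK-Tuning step would additionally aim to make those scores better calibrated before they are used to rank responses.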

Details

Database:
arXiv
Journal:
ACL2024 Main
Publication Type:
Report
Accession number:
edsarx.2402.09267
Document Type:
Working Paper