
A Novel Approach to Eliminating Hallucinations in Large Language Model-Assisted Causal Discovery

Authors:
Sng, Grace
Zhang, Yanming
Mueller, Klaus
Publication Year:
2024

Abstract

The increasing use of large language models (LLMs) in causal discovery as a substitute for human domain experts highlights the need for optimal model selection. This paper presents the first hallucination survey of popular LLMs for causal discovery. We show that hallucinations occur when LLMs are used for causal discovery, so the choice of LLM matters. We propose using Retrieval Augmented Generation (RAG) to reduce hallucinations when quality data is available. Additionally, we introduce a novel method in which multiple LLMs debate, with an arbiter, to audit the edges of a causal graph, achieving a reduction in hallucinations comparable to RAG.
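The debate-with-arbiter idea from the abstract can be illustrated with a minimal sketch. Everything below is hypothetical: the function names, the stubbed "models", and the example edges are illustrative assumptions, not the paper's implementation. Two models vote on each candidate causal edge, and an arbiter is consulted only when they disagree.

```python
# Hypothetical sketch of auditing causal-graph edges via a multi-LLM debate
# with an arbiter. Real systems would replace the stub lambdas with calls to
# actual LLM APIs; here each "model" is a fixed predicate on an edge.

def audit_edges(edges, model_a, model_b, arbiter):
    """Keep an edge if both models accept it, or if the arbiter
    sides with acceptance after the two models disagree."""
    accepted = []
    for edge in edges:
        vote_a, vote_b = model_a(edge), model_b(edge)
        if vote_a == vote_b:
            verdict = vote_a          # consensus: no debate needed
        else:
            verdict = arbiter(edge)   # disagreement: arbiter decides
        if verdict:
            accepted.append(edge)
    return accepted

# Stub "models": each maps an edge (cause, effect) to accept/reject.
model_a = lambda e: e in {("smoking", "cancer"), ("rain", "wet_ground")}
model_b = lambda e: e in {("smoking", "cancer")}
arbiter = lambda e: e == ("rain", "wet_ground")

edges = [("smoking", "cancer"), ("rain", "wet_ground"), ("cancer", "smoking")]
print(audit_edges(edges, model_a, model_b, arbiter))
# → [('smoking', 'cancer'), ('rain', 'wet_ground')]
```

The arbiter is queried only on contested edges, which keeps the extra model calls proportional to the level of disagreement rather than to the size of the graph.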

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2411.12759
Document Type:
Working Paper