
Towards Logically Consistent Language Models via Probabilistic Reasoning

Authors:
Calanzone, Diego
Teso, Stefano
Vergari, Antonio
Publication Year:
2024

Abstract

Large language models (LLMs) are a promising avenue for natural language understanding and generation tasks. However, current LLMs are far from reliable: they are prone to generating non-factual information and, more crucially, to contradicting themselves when prompted to reason about beliefs of the world. These problems are currently addressed with large-scale fine-tuning or by delegating consistent reasoning to external tools. In this work, we strive for a middle ground and introduce a training objective based on principled probabilistic reasoning that teaches an LLM to be consistent with external knowledge in the form of a set of facts and rules. Fine-tuning with our loss on a limited set of facts enables our LLMs to be more logically consistent than previous baselines and allows them to extrapolate to unseen but semantically similar factual knowledge more systematically.

Comment: Accepted at ICLR 2024 Workshop on Reliable and Responsible Foundation Models
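The abstract does not spell out the loss, but training objectives of this kind are often realized as differentiable penalties on the probabilities a model assigns to statements (in the spirit of semantic-loss methods). The sketch below is a minimal illustration of that idea, not the paper's actual formulation: the function `consistency_loss`, the independence assumption in the implication term, and all tensor names are hypothetical.

```python
import torch

def consistency_loss(p_fact, p_neg, p_premise, p_conclusion):
    """Hypothetical consistency penalty over model-assigned truth probabilities."""
    # Negation constraint: a statement A and its negation not-A should have
    # probabilities summing to 1; penalize the squared deviation.
    neg_loss = (p_fact + p_neg - 1.0).pow(2)
    # Implication constraint A -> B: under a simple independence assumption,
    # the rule is violated with probability p(A) * (1 - p(B)); penalize the
    # negative log of the satisfaction probability.
    impl_sat = 1.0 - p_premise * (1.0 - p_conclusion)
    impl_loss = -torch.log(impl_sat.clamp_min(1e-6))
    return (neg_loss + impl_loss).mean()

# Toy usage: probabilities an LLM might assign to "True" for each statement.
p_a     = torch.tensor([0.9, 0.6], requires_grad=True)  # p("A is true")
p_not_a = torch.tensor([0.3, 0.5], requires_grad=True)  # p("not A is true")
p_b     = torch.tensor([0.4, 0.8], requires_grad=True)  # p("B is true"), rule A -> B
loss = consistency_loss(p_a, p_not_a, p_a, p_b)
loss.backward()  # gradients push the model toward logically consistent beliefs
```

Minimizing such a loss rewards the model for assigning probabilities that respect the given facts and rules, which matches the abstract's description at a high level; the paper's exact objective may differ.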

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2404.12843
Document Type:
Working Paper