CBEval: A framework for evaluating and interpreting cognitive biases in LLMs
- Publication Year :
- 2024
Abstract
- Rapid advancements in Large Language Models (LLMs) have significantly enhanced their reasoning capabilities. Despite improved performance on benchmarks, LLMs exhibit notable gaps in their cognitive processes. Additionally, as reflections of human-generated data, these models can inherit cognitive biases, raising concerns about their reasoning and decision-making capabilities. In this paper we present a framework to interpret, understand, and provide insights into a host of cognitive biases in LLMs. Conducting our research on frontier language models, we elucidate reasoning limitations and biases, and explain these biases by constructing influence graphs that identify the phrases and words most responsible for the biases manifested in LLMs. We further investigate biases such as round-number bias and the cognitive-bias barrier revealed when examining the framing effect in language models.
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2412.03605
- Document Type :
- Working Paper