
The Magic of IF: Investigating Causal Reasoning Abilities in Large Language Models of Code

Authors :
Liu, Xiao
Yin, Da
Zhang, Chen
Feng, Yansong
Zhao, Dongyan
Publication Year :
2023

Abstract

Causal reasoning, the ability to identify cause-and-effect relationships, is crucial in human thinking. Although large language models (LLMs) succeed in many NLP tasks, it is still challenging for them to conduct complex causal reasoning such as abductive reasoning and counterfactual reasoning. Given that programming code tends to express causal relations more often and more explicitly, through conditional statements such as ``if``, we explore whether Code-LLMs acquire better causal reasoning abilities. Our experiments show that, compared to text-only LLMs, Code-LLMs with code prompts are significantly better at causal reasoning. We further intervene on the prompts from different aspects and find that the programming structure is crucial in code prompt design, while Code-LLMs are robust to format perturbations.

Comment: Findings of ACL 2023. Code and data are available at https://github.com/xxxiaol/magic-if
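As a hypothetical illustration (not the paper's actual prompt template), a causal question can be recast as a code-style prompt in which the cause-effect link is made explicit through an ``if`` statement; the sketch below simply constructs and prints such a prompt string, with all names and wording assumed for illustration.

```python
# Hypothetical sketch of a code-style prompt for a causal reasoning question.
# The idea, per the abstract, is that conditional statements such as `if`
# make the cause-effect relation explicit. The scaffold below is illustrative
# only and does not reproduce the paper's prompt format.

def build_code_prompt(cause: str, effect: str, question: str) -> str:
    """Wrap a natural-language causal question in a small code scaffold."""
    return (
        "# Causal reasoning task\n"
        f"cause = {cause!r}\n"
        f"effect = {effect!r}\n"
        "if cause_occurs(cause):\n"          # conditional encodes the causal link
        "    outcome = effect\n"
        f"# Question: {question}\n"
        "# Answer:"
    )

if __name__ == "__main__":
    prompt = build_code_prompt(
        cause="it rained heavily overnight",
        effect="the streets are wet in the morning",
        question="Why are the streets wet this morning?",
    )
    print(prompt)
```

Such a prompt would then be passed to a Code-LLM in place of a plain-text question; the comparison between the two prompt styles is what the abstract describes.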

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2305.19213
Document Type :
Working Paper