
Code Simulation Challenges for Large Language Models

Authors :
La Malfa, Emanuele
Weinhuber, Christoph
Torre, Orazio
Lin, Fangru
Marro, Samuele
Cohn, Anthony
Shadbolt, Nigel
Wooldridge, Michael
Publication Year :
2024

Abstract

Many reasoning, planning, and problem-solving tasks share an intrinsic algorithmic nature: correctly simulating each step is a sufficient condition to solve them correctly. This work studies to what extent Large Language Models (LLMs) can simulate coding and algorithmic tasks, to provide insight into their general capabilities on such algorithmic reasoning tasks. We introduce benchmarks for straight-line programs, code that contains critical paths, and approximate and redundant instructions. We further assess the simulation capabilities of LLMs with sorting algorithms and nested loops and show that a routine's computational complexity directly affects an LLM's ability to simulate its execution. While the most powerful LLMs exhibit relatively strong simulation capabilities, the process is fragile: it seems to rely heavily on pattern recognition and is affected by memorisation. We propose a novel off-the-shelf prompting method, Chain of Simulation (CoSm), which instructs LLMs to simulate code execution line by line, following the computation pattern of compilers. CoSm efficiently helps LLMs reduce memorisation and shallow pattern recognition while improving simulation performance. We consider the success of CoSm in code simulation to be inspirational for other general routine simulation reasoning tasks.

Comment: Code: https://github.com/EmanueleLM/CodeSimulation
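To make the setup concrete, the sketch below illustrates the kind of task the abstract describes: a toy straight-line program and a CoSm-style prompt that asks a model to trace execution line by line, as a compiler or interpreter would, before giving the final answer. The program, the prompt wording, and the helper names here are illustrative assumptions, not the authors' benchmark format; their actual tasks and prompts are in the linked repository.

```python
# Illustrative sketch only: the exact task format and prompt wording used by the
# authors may differ (see https://github.com/EmanueleLM/CodeSimulation).

# A toy straight-line program of the kind the benchmarks target.
STRAIGHT_LINE_PROGRAM = """\
x = 3
y = x * 2
x = y + 4
y = x - 1
"""

def cosm_prompt(program: str) -> str:
    """Build a Chain-of-Simulation-style prompt (hypothetical wording):
    instruct the model to execute the code line by line, reporting the
    state of every variable after each instruction, before answering."""
    return (
        "Simulate the following program step by step, as a compiler would.\n"
        "After each line, write the current value of every variable.\n"
        "Only after the full trace, state the final value of y.\n\n"
        f"{program}"
    )

if __name__ == "__main__":
    # The prompt would be sent to an LLM; here we just print it and compute
    # the ground-truth answer by executing the program directly.
    print(cosm_prompt(STRAIGHT_LINE_PROGRAM))
    namespace: dict = {}
    exec(STRAIGHT_LINE_PROGRAM, namespace)
    print("Expected final y:", namespace["y"])  # ground truth: 9
```

A harness like this would compare the model's final answer (and, optionally, its intermediate variable states) against the ground truth obtained by actually executing the program.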

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2401.09074
Document Type :
Working Paper