1. Compositional Hardness of Code in Large Language Models -- A Probabilistic Perspective
- Author
Wolf, Yotam; Rothberg, Binyamin; Shteyman, Dorin; Shashua, Amnon
- Subjects
Computer Science - Artificial Intelligence, Computer Science - Computation and Language
- Abstract
A common practice in large language model (LLM) usage for complex analytical tasks such as code generation is to sample a solution for the entire task within the model's context window. Previous works have shown that subtask decomposition within the model's context (chain of thought) is beneficial for solving such tasks. In this work, we point out a limitation of LLMs' ability to perform several sub-tasks within the same context window - an in-context hardness of composition - which points to an advantage of distributing a decomposed problem across a multi-agent system of LLMs. The hardness of composition is quantified by a generation complexity metric, i.e., the number of LLM generations required to sample at least one correct solution. We find a gap between the generation complexity of solving a compositional problem within a single context and that of distributing it among multiple agents, a gap that increases exponentially with the solution's length. We prove our results theoretically and demonstrate them empirically.
- Published
2024
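
The exponential gap described in the abstract above can be illustrated with a minimal sketch. The assumptions here are mine, not the paper's exact model: suppose each of k sub-tasks is solved independently with per-generation success probability p. A single-context generation is then fully correct only with probability p^k, so roughly 1/p^k generations are needed, while distributing the sub-tasks among agents needs only about k/p generations in total. The probabilities, the independence assumption, and the function names below are illustrative.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact model):
# each of k sub-tasks succeeds independently with per-generation probability p.

def single_context_complexity(p: float, k: int) -> float:
    """Expected generations when one sample must solve all k sub-tasks at once."""
    # A single generation is fully correct with probability p**k,
    # so the expected number of generations is 1 / p**k (exponential in k).
    return 1.0 / (p ** k)

def multi_agent_complexity(p: float, k: int) -> float:
    """Expected total generations when each sub-task is sampled by its own agent."""
    # Each agent needs about 1/p generations for its sub-task, with k agents in total.
    return k / p

if __name__ == "__main__":
    p, k = 0.8, 20  # hypothetical per-sub-task success rate and number of sub-tasks
    print(f"single context: ~{single_context_complexity(p, k):.1f} generations")  # ~86.7
    print(f"multi-agent:    ~{multi_agent_complexity(p, k):.1f} generations")     # 25.0
```

Under these simplified assumptions the single-context cost grows exponentially in the number of sub-tasks while the distributed cost grows only linearly, which is the qualitative behavior the abstract describes.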