
From Symbolic Tasks to Code Generation: Diversification Yields Better Task Performers

Authors:
Zhang, Dylan
Wang, Justin
Charton, Francois
Publication Year:
2024

Abstract

Instruction tuning -- tuning large language models on instruction-output pairs -- is a promising technique for adapting models to real-world use. Yet the key factors that drive a model's ability to understand and follow instructions not seen during training remain under-explored. Our investigation begins with a series of synthetic experiments within the theoretical framework of Markov algorithms, a Turing-complete string-rewriting model that allows fine-grained control over the instruction-tuning data. Generalization and robustness with respect to the training distribution emerge once a sufficiently diverse set of tasks is provided, even when only a few examples are given per task. We extend these initial results to the real-world application of code generation and find that a more diverse instruction set, extending beyond code-related tasks, improves code-generation performance. Our observations suggest that diversifying the semantic space of instruction-tuning sets greatly improves a model's ability to follow instructions and perform tasks.
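For readers unfamiliar with the framework the abstract references: a Markov algorithm is an ordered list of string-rewriting rules, applied by repeatedly rewriting the leftmost match of the highest-priority applicable rule until a terminal rule fires or no rule applies. The sketch below is a minimal illustrative interpreter; the function name, rule format, and example task are hypothetical and are not taken from the paper.

```python
# Minimal Markov algorithm interpreter (illustrative sketch, not the paper's code).
def run_markov(rules, s, max_steps=1000):
    """rules: list of (pattern, replacement, is_terminal) tuples in priority order.
    Repeatedly apply the first rule whose pattern occurs in s, rewriting the
    leftmost occurrence, until a terminal rule fires or no rule applies."""
    for _ in range(max_steps):
        for pat, rep, terminal in rules:
            if pat in s:
                s = s.replace(pat, rep, 1)  # rewrite leftmost occurrence only
                if terminal:
                    return s
                break  # restart the rule scan from the top
        else:
            return s  # no rule applies: the algorithm halts
    raise RuntimeError("step limit exceeded")

# Hypothetical example task: transliterate binary digits to letters.
rules = [("0", "a", False), ("1", "b", False)]
print(run_markov(rules, "0101"))  # -> "abab"
```

A synthetic "task" in this setting corresponds to one such rule set, so diversity of tasks can be controlled precisely by varying the rules used to generate instruction-output pairs.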

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2405.19787
Document Type:
Working Paper