151. Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation
- Authors
Dawei Gao, Haibin Wang, Yaliang Li, Xiuyu Sun, Yichen Qian, Bolin Ding, and Jingren Zhou
- Subjects
Computer Science - Databases; Computer Science - Computation and Language; Computer Science - Machine Learning
- Abstract
Large language models (LLMs) have emerged as a new paradigm for the Text-to-SQL task. However, the absence of a systematic benchmark inhibits the development of effective, efficient, and economical LLM-based Text-to-SQL solutions. To address this challenge, in this paper we first conduct a systematic and extensive comparison of existing prompt engineering methods, covering question representation, example selection, and example organization, and based on these experimental results we discuss their pros and cons. Building on these findings, we propose a new integrated solution, named DAIL-SQL, which refreshes the Spider leaderboard with 86.6% execution accuracy and sets a new bar. To explore the potential of open-source LLMs, we investigate them in various scenarios and further enhance their performance with supervised fine-tuning. Our explorations highlight the potential of open-source LLMs in Text-to-SQL, as well as the advantages and disadvantages of supervised fine-tuning. Additionally, towards an efficient and economical LLM-based Text-to-SQL solution, we emphasize token efficiency in prompt engineering and compare prior studies under this metric. We hope that our work provides a deeper understanding of Text-to-SQL with LLMs and inspires further investigation and broad applications.
Comment: Code has been released at https://github.com/BeachWang/DAIL-SQL; see the prompt-construction sketch after this entry.
- Published
2023
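
The abstract's three prompt engineering axes (question representation, example selection, and example organization) can be illustrated with a minimal prompt-construction sketch. This is not the DAIL-SQL implementation (the linked repository contains that); the helper names, the toy schema, and the word-overlap similarity heuristic below are assumptions made purely for illustration.

```python
# Minimal sketch of LLM-based Text-to-SQL prompt engineering, illustrating the
# three axes the paper compares: question representation, example selection,
# and example organization. NOT the DAIL-SQL code; all names, the toy schema,
# and the word-overlap similarity are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Example:
    schema: str     # CREATE TABLE statements describing the database
    question: str   # natural-language question
    sql: str        # gold SQL query


def represent_question(schema: str, question: str) -> str:
    """Question representation: render the target schema and question as text."""
    return f"{schema}\n-- Question: {question}\n-- SQL:"


def select_examples(pool: list[Example], question: str, k: int = 2) -> list[Example]:
    """Example selection: pick the k pool examples most similar to the question
    (a simple word-overlap heuristic stands in for a learned similarity here)."""
    q_words = set(question.lower().split())
    scored = sorted(pool, key=lambda e: -len(q_words & set(e.question.lower().split())))
    return scored[:k]


def build_prompt(schema: str, question: str, pool: list[Example]) -> str:
    """Example organization: prepend selected question-SQL pairs, then the target."""
    shots = [represent_question(e.schema, e.question) + f" {e.sql}"
             for e in select_examples(pool, question)]
    return "\n\n".join(shots + [represent_question(schema, question)])


if __name__ == "__main__":
    pool = [
        Example("CREATE TABLE singer(name TEXT, age INT);",
                "How many singers are there?",
                "SELECT COUNT(*) FROM singer;"),
        Example("CREATE TABLE concert(year INT, city TEXT);",
                "List concerts held after 2014.",
                "SELECT * FROM concert WHERE year > 2014;"),
    ]
    schema = "CREATE TABLE singer(name TEXT, age INT, country TEXT);"
    print(build_prompt(schema, "How many singers are from France?", pool))
```

In this framing, the resulting prompt is sent to an LLM, the completion is executed against the database to measure execution accuracy, and token efficiency corresponds to how many prompt tokens such a construction spends per question.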