
SciCode: A Research Coding Benchmark Curated by Scientists

Authors :
Tian, Minyang
Gao, Luyu
Zhang, Shizhuo Dylan
Chen, Xinan
Fan, Cunwei
Guo, Xuefei
Haas, Roland
Ji, Pan
Krongchon, Kittithat
Li, Yao
Liu, Shengyan
Luo, Di
Ma, Yutao
Tong, Hao
Trinh, Kha
Tian, Chenyu
Wang, Zihan
Wu, Bohao
Xiong, Yanyu
Yin, Shengzhu
Zhu, Minhui
Lieret, Kilian
Lu, Yanxin
Liu, Genglin
Du, Yufeng
Tao, Tianhua
Press, Ofir
Callan, Jamie
Huerta, Eliu
Peng, Hao
Publication Year :
2024

Abstract

Since language models (LMs) now outperform average humans on many challenging tasks, it has become increasingly difficult to develop challenging, high-quality, and realistic evaluations. We address this issue by examining LMs' capabilities to generate code for solving real scientific research problems. Incorporating input from scientists and AI researchers in 16 diverse natural science sub-fields, including mathematics, physics, chemistry, biology, and materials science, we created a scientist-curated coding benchmark, SciCode. The problems in SciCode naturally factorize into multiple subproblems, each involving knowledge recall, reasoning, and code synthesis. In total, SciCode contains 338 subproblems decomposed from 80 challenging main problems. It offers optional descriptions specifying useful scientific background information, as well as scientist-annotated gold-standard solutions and test cases for evaluation. Claude3.5-Sonnet, the best-performing model among those tested, can solve only 4.6% of the problems in the most realistic setting. We believe that SciCode both demonstrates contemporary LMs' progress towards becoming helpful scientific assistants and sheds light on the development and evaluation of scientific AI in the future.

Comment: 25 pages, 9 figures, 7 tables
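As a rough illustration of the evaluation setup the abstract describes (model-generated code for each subproblem checked against scientist-annotated gold solutions and test cases), the minimal Python sketch below shows one way such a check could work. The function names, the toy physics formula, and the numerical tolerance are hypothetical assumptions for illustration only and are not taken from the SciCode harness.

import numpy as np

# Hypothetical illustration (not the official SciCode harness): a subproblem
# pairs model-generated code with scientist-annotated test cases, and the
# generated code passes only if it matches the gold solution on every case.

def gold_partition_function(energies, temperature):
    # Scientist-annotated reference solution (toy example, assumed).
    beta = 1.0 / temperature
    return float(np.sum(np.exp(-beta * np.asarray(energies))))

def generated_partition_function(energies, temperature):
    # Stand-in for code produced by a language model.
    beta = 1.0 / temperature
    return float(np.exp(-beta * np.asarray(energies)).sum())

test_cases = [
    {"energies": [0.0, 1.0, 2.0], "temperature": 300.0},
    {"energies": [0.5, 1.5], "temperature": 50.0},
]

def evaluate_subproblem(generated_fn, gold_fn, cases, rtol=1e-6):
    # A subproblem counts as solved only if all test cases agree
    # with the gold solution within the chosen tolerance.
    for case in cases:
        if not np.allclose(generated_fn(**case), gold_fn(**case), rtol=rtol):
            return False
    return True

print(evaluate_subproblem(generated_partition_function,
                          gold_partition_function, test_cases))  # True

In this toy setup, solving a main problem would require passing every one of its subproblems, which is consistent with the large gap the abstract reports between subproblem-level and problem-level success rates.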

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2407.13168
Document Type :
Working Paper