1. OpenScholar: Synthesizing Scientific Literature with Retrieval-augmented LMs
- Authors
Asai, Akari, He, Jacqueline, Shao, Rulin, Shi, Weijia, Singh, Amanpreet, Chang, Joseph Chee, Lo, Kyle, Soldaini, Luca, Feldman, Sergey, D'Arcy, Mike, Wadden, David, Latzke, Matt, Tian, Minyang, Ji, Pan, Liu, Shengyan, Tong, Hao, Wu, Bohao, Xiong, Yanyu, Zettlemoyer, Luke, Neubig, Graham, Weld, Dan, Downey, Doug, Yih, Wen-tau, Koh, Pang Wei, and Hajishirzi, Hannaneh
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Digital Libraries, Computer Science - Information Retrieval, Computer Science - Machine Learning
- Abstract
Scientific progress depends on researchers' ability to synthesize the growing body of literature. Can large language models (LMs) assist scientists in this task? We introduce OpenScholar, a specialized retrieval-augmented LM that answers scientific queries by identifying relevant passages from 45 million open-access papers and synthesizing citation-backed responses. To evaluate OpenScholar, we develop ScholarQABench, the first large-scale multi-domain benchmark for literature search, comprising 2,967 expert-written queries and 208 long-form answers across computer science, physics, neuroscience, and biomedicine. On ScholarQABench, OpenScholar-8B outperforms GPT-4o by 5% and PaperQA2 by 7% in correctness, despite being a smaller, open model. While GPT-4o hallucinates citations 78 to 90% of the time, OpenScholar achieves citation accuracy on par with human experts. OpenScholar's datastore, retriever, and self-feedback inference loop also improve off-the-shelf LMs: for instance, OpenScholar-GPT-4o improves GPT-4o's correctness by 12%. In human evaluations, experts preferred OpenScholar-8B and OpenScholar-GPT-4o responses over expert-written ones 51% and 70% of the time, respectively, compared to GPT-4o's 32%. We open-source all of our code, models, datastore, data, and a public demo.
- Published
2024
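
The abstract describes a retrieve-then-generate pipeline with a self-feedback inference loop. The sketch below is a minimal, hypothetical illustration of that general pattern, not OpenScholar's actual code or API: `retrieve_passages`, `generate`, and `critique` are placeholder stubs standing in for the datastore/retriever and LM calls described in the paper.

```python
# Minimal sketch of retrieval-augmented answering with a self-feedback loop.
# All function names and behaviors here are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class Passage:
    paper_id: str
    text: str


def retrieve_passages(query: str, k: int = 5) -> list[Passage]:
    """Placeholder retriever: a real system would query an index over
    open-access papers and return the top-k relevant passages."""
    return [Passage(f"paper-{i}", f"excerpt {i} for: {query}") for i in range(k)]


def generate(query: str, passages: list[Passage], feedback: str | None = None) -> str:
    """Placeholder LM call: draft a citation-backed answer from the passages,
    optionally revising a previous draft according to feedback."""
    citations = ", ".join(p.paper_id for p in passages)
    note = f" (revised per feedback: {feedback})" if feedback else ""
    return f"Answer to '{query}' citing [{citations}]{note}"


def critique(answer: str) -> str | None:
    """Placeholder self-feedback step: return a critique if the draft needs
    more evidence or better citations, or None if it looks complete."""
    return None if "revised" in answer else "add supporting evidence for claim 2"


def answer_with_self_feedback(query: str, max_rounds: int = 3) -> str:
    passages = retrieve_passages(query)
    answer = generate(query, passages)
    for _ in range(max_rounds):
        feedback = critique(answer)
        if feedback is None:
            break
        # Retrieve additional evidence targeted at the critique, then revise.
        passages += retrieve_passages(feedback, k=2)
        answer = generate(query, passages, feedback=feedback)
    return answer


if __name__ == "__main__":
    print(answer_with_self_feedback("How do retrieval-augmented LMs reduce citation hallucination?"))
```

In this pattern, each round of feedback can trigger further retrieval before the draft is revised, which is one way a pipeline like the one described above could ground its citations more tightly in retrieved passages.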