Are large language models superhuman chemists?
- Authors
Mirza, Adrian, Alampara, Nawaf, Kunchapu, Sreekanth, Ríos-García, Martiño, Emoekabu, Benedict, Krishnan, Aswanth, Gupta, Tanya, Schilling-Wilhelmi, Mara, Okereke, Macjonathan, Aneesh, Anagha, Elahi, Amir Mohammad, Asgari, Mehrdad, Eberhardt, Juliane, Elbeheiry, Hani M., Gil, María Victoria, Greiner, Maximilian, Holick, Caroline T., Glaubitz, Christina, Hoffmann, Tim, Ibrahim, Abdelrahman, Klepsch, Lea C., Köster, Yannik, Kreth, Fabian Alexander, Meyer, Jakob, Miret, Santiago, Peschel, Jan Matthias, Ringleb, Michael, Roesner, Nicole, Schreiber, Johanna, Schubert, Ulrich S., Stafast, Leanne M., Wonanke, Dinga, Pieler, Michael, Schwaller, Philippe, and Jablonka, Kevin Maik
- Subjects
Computer Science - Machine Learning, Condensed Matter - Materials Science, Computer Science - Artificial Intelligence, Physics - Chemical Physics
- Abstract
Large language models (LLMs) have gained widespread interest due to their ability to process human language and perform tasks on which they have not been explicitly trained. However, we possess only a limited systematic understanding of the chemical capabilities of LLMs, an understanding that would be required to improve models and mitigate potential harm. Here, we introduce "ChemBench," an automated framework for evaluating the chemical knowledge and reasoning abilities of state-of-the-art LLMs against the expertise of chemists. We curated more than 2,700 question-answer pairs, evaluated leading open- and closed-source LLMs, and found that the best models outperformed the best human chemists in our study on average. However, the models struggle with some basic tasks and provide overconfident predictions. These findings reveal the impressive chemical capabilities of LLMs while emphasizing the need for further research to improve their safety and usefulness. They also suggest adapting chemistry education and demonstrate the value of benchmarking frameworks for evaluating LLMs in specific domains.
- Published
- 2024