
GPT-4 passes the bar exam.

Authors :
Katz, Daniel Martin
Bommarito, Michael James
Gao, Shang
Arredondo, Pablo
Source :
Philosophical Transactions of the Royal Society A: Mathematical, Physical & Engineering Sciences; 4/15/2024, Vol. 382 Issue 2270, p1-17, 17p
Publication Year :
2024

Abstract

In this paper, we experimentally evaluate the zero-shot performance of GPT-4 against prior generations of GPT on the entire uniform bar examination (UBE), including not only the multiple-choice multistate bar examination (MBE), but also the open-ended multistate essay exam (MEE) and multistate performance test (MPT) components. On the MBE, GPT-4 significantly outperforms both human test-takers and prior models, demonstrating a 26% increase over ChatGPT and beating humans in five of seven subject areas. On the MEE and MPT, which have not previously been evaluated by scholars, GPT-4 scores an average of 4.2/6.0 when compared with much lower scores for ChatGPT. Graded across the UBE components, in the manner in which a human test-taker would be, GPT-4 scores approximately 297 points, significantly in excess of the passing threshold for all UBE jurisdictions. These findings document not just the rapid and remarkable advance of large language model performance generally, but also the potential for such models to support the delivery of legal services in society. This article is part of the theme issue 'A complexity science approach to law and governance'.

Details

Language :
English
ISSN :
1364-503X
Volume :
382
Issue :
2270
Database :
Complementary Index
Journal :
Philosophical Transactions of the Royal Society A: Mathematical, Physical & Engineering Sciences
Publication Type :
Academic Journal
Accession number :
175640255
Full Text :
https://doi.org/10.1098/rsta.2023.0254