
LAiW: A Chinese Legal Large Language Models Benchmark

Authors:
Dai, Yongfu
Feng, Duanyu
Huang, Jimin
Jia, Haochen
Xie, Qianqian
Zhang, Yifang
Han, Weiguang
Tian, Wei
Wang, Hao
Publication Year: 2023

Abstract

General-purpose and legal-domain LLMs have demonstrated strong performance on various LegalAI tasks. However, current evaluations of these LLMs in LegalAI are designed by computer science experts and lack consistency with the logic of legal practice, making it difficult to judge the models' practical capabilities. To address this challenge, we build LAiW, the first Chinese legal LLM benchmark grounded in the logic of legal practice. To align with the reasoning process of legal experts and legal practice (syllogism), we divide the legal capabilities of LLMs, from easy to difficult, into three levels: basic information retrieval, legal foundation inference, and complex legal application. Each level contains multiple tasks to ensure a comprehensive evaluation. Automated evaluation of current general and legal-domain LLMs on our benchmark shows that these LLMs may not align with the logic of legal practice: they appear able to acquire complex legal application capabilities directly, yet perform poorly on some basic tasks, which may hinder their practical application and acceptance by legal experts. To further examine the complex legal application capabilities of current LLMs in realistic legal scenarios, we also conduct human evaluation with legal experts. The results indicate that while LLMs may demonstrate strong performance, they still require reinforcement of legal logic.
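As a rough illustration of the three-level structure the abstract describes, the sketch below models levels that each contain multiple tasks and scores a model per level with exact-match accuracy. The task names, data shapes, and metric are assumptions for illustration only, not LAiW's actual task list or scoring scheme.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    examples: list[dict]  # each example: {"prompt": str, "answer": str} (assumed shape)

@dataclass
class Level:
    name: str
    tasks: list[Task]

# Three capability levels, easy to difficult, as described in the abstract.
# The task names inside each level are hypothetical placeholders.
LAIW_LEVELS = [
    Level("basic information retrieval", [Task("article_recitation", [])]),
    Level("legal foundation inference", [Task("charge_prediction", [])]),
    Level("complex legal application", [Task("judicial_reasoning", [])]),
]

def evaluate(model: Callable[[str], str], levels: list[Level]) -> dict[str, float]:
    """Automated evaluation: exact-match accuracy per level (assumed metric)."""
    scores: dict[str, float] = {}
    for level in levels:
        correct = total = 0
        for task in level.tasks:
            for ex in task.examples:
                total += 1
                if model(ex["prompt"]).strip() == ex["answer"].strip():
                    correct += 1
        scores[level.name] = correct / total if total else 0.0
    return scores
```

A real harness would load the benchmark's released task data and likely use task-appropriate metrics rather than exact match; this sketch only shows how per-level scoring lets one compare basic-task performance against complex legal application, the gap the paper highlights.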

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2310.05620
Document Type: Working Paper