
CLUE: A Chinese Language Understanding Evaluation Benchmark

Authors :
Xu, Liang
Hu, Hai
Zhang, Xuanwei
Li, Lu
Cao, Chenjie
Li, Yudong
Xu, Yechen
Sun, Kai
Yu, Dian
Yu, Cong
Tian, Yin
Dong, Qianqian
Liu, Weitang
Shi, Bo
Cui, Yiming
Li, Junyi
Zeng, Jun
Wang, Rongzhao
Xie, Weijian
Li, Yanting
Patterson, Yina
Tian, Zuoyu
Zhang, Yiwen
Zhou, He
Liu, Shaoweihua
Zhao, Zhe
Zhao, Qipeng
Yue, Cong
Zhang, Xinrui
Yang, Zhengliang
Richardson, Kyle
Lan, Zhenzhong
Publication Year :
2020

Abstract

The advent of natural language understanding (NLU) benchmarks for English, such as GLUE and SuperGLUE, allows new NLU models to be evaluated across a diverse set of tasks. These comprehensive benchmarks have facilitated a broad range of research and applications in natural language processing (NLP). The problem, however, is that most such benchmarks are limited to English, which has made it difficult to replicate many of the successes in English NLU for other languages. To help remedy this issue, we introduce the first large-scale Chinese Language Understanding Evaluation (CLUE) benchmark. CLUE is an open-ended, community-driven project that brings together 9 tasks spanning several well-established single-sentence/sentence-pair classification tasks, as well as machine reading comprehension, all on original Chinese text. To establish results on these tasks, we report scores using an exhaustive set of current state-of-the-art pre-trained Chinese models (9 in total). We also introduce a number of supplementary datasets and additional tools to help facilitate further progress on Chinese NLU. Our benchmark is released at https://www.CLUEbenchmarks.com

Comment: Accepted by COLING2020; 10 pages, 4 figures
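The record itself does not show how the tasks are accessed; the canonical distribution is the project site https://www.CLUEbenchmarks.com and its GitHub repositories. As an illustration only, the sketch below assumes the community mirror of CLUE on the Hugging Face `datasets` hub (dataset name "clue") and the "afqmc" configuration, one of CLUE's sentence-pair classification tasks; these names are assumptions, not part of the record.

    # Minimal sketch: inspecting one CLUE task (AFQMC, sentence-pair
    # classification) via the assumed Hugging Face `datasets` mirror.
    # Canonical source of the benchmark: https://www.CLUEbenchmarks.com
    from datasets import load_dataset

    clue_afqmc = load_dataset("clue", "afqmc")   # splits: train / validation / test
    example = clue_afqmc["train"][0]
    print(example["sentence1"], example["sentence2"], example["label"])

The other CLUE tasks (e.g., reading comprehension) would be loaded the same way under their own configuration names, if this mirror is used.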

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2004.05986
Document Type :
Working Paper