1. A Novel Psychometrics-Based Approach to Developing Professional Competency Benchmark for Large Language Models
- Authors
Kardanova, Elena, Ivanova, Alina, Tarasova, Ksenia, Pashchenko, Taras, Tikhoniuk, Aleksei, Yusupova, Elen, Kasprzhak, Anatoly, Kuzminov, Yaroslav, Kruchinskaia, Ekaterina, and Brun, Irina
- Subjects
Computer Science - Computation and Language; Computer Science - Artificial Intelligence
- Abstract
The era of large language models (LLMs) raises questions not only about how to train models but also about how to evaluate them. Despite numerous existing benchmarks, insufficient attention is often given to creating assessments that test LLMs in a valid and reliable manner. To address this challenge, we adopt the Evidence-Centered Design (ECD) methodology and propose a comprehensive approach to benchmark development based on rigorous psychometric principles. In this paper, we make a first attempt to illustrate this approach by creating a new benchmark in the field of pedagogy and education, highlighting the limitations of existing benchmark development approaches and taking into account the development of LLMs. We conclude that a new approach to benchmarking is required to match the growing complexity of AI applications in the educational context. We construct a novel benchmark guided by Bloom's taxonomy and rigorously designed by a consortium of education experts trained in test development. The resulting benchmark thus provides an academically robust and practical assessment tool tailored for LLMs rather than human participants. Tested empirically on a GPT model in the Russian language, it evaluates model performance across varied task complexities, revealing critical gaps in current LLM capabilities. Our results indicate that while generative AI tools hold significant promise for education - potentially supporting tasks such as personalized tutoring, real-time feedback, and multilingual learning - their reliability as autonomous teachers' assistants currently remains rather limited, particularly in tasks requiring deeper cognitive engagement.
- Comment
36 pages, 2 figures
- Published
2024