
Long Input Benchmark for Russian Analysis

Authors:
Churin, Igor
Apishev, Murat
Tikhonova, Maria
Shevelev, Denis
Bulatov, Aydar
Kuratov, Yuri
Averkiev, Sergej
Fenogenova, Alena
Publication Year:
2024

Abstract

Recent advancements in Natural Language Processing (NLP) have fostered the development of Large Language Models (LLMs) that can solve an immense variety of tasks. One of the key aspects of their application is their ability to work with long text documents and to process long sequences of tokens. This has created a demand for proper evaluation of long-context understanding. To address this need for the Russian language, we propose LIBRA (Long Input Benchmark for Russian Analysis), which comprises 21 adapted datasets for studying LLMs' ability to understand long texts thoroughly. The tests are divided into four complexity groups and allow the evaluation of models across context lengths ranging from 4k up to 128k tokens. We provide the open-source datasets, codebase, and public leaderboard for LIBRA to guide forthcoming research.
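As a rough illustration (not taken from the paper), an evaluation setup of the kind described in the abstract might sort benchmark examples into context-length buckets before scoring a model on each one. The dataset identifier, field names, and model name in the sketch below are assumptions for demonstration, not the benchmark's actual API.

```python
# Hypothetical sketch of bucketing long-context examples by token length,
# mirroring the 4k-128k range described in the abstract.
# "example/libra-task" and the "context" field are illustrative assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer

# Any Russian-capable tokenizer would do here (assumed choice).
tokenizer = AutoTokenizer.from_pretrained("ai-forever/ruGPT-3.5-13B")

LENGTH_BUCKETS = [4_000, 8_000, 16_000, 32_000, 64_000, 128_000]

def bucket_for(n_tokens: int) -> int:
    """Return the smallest context bucket that fits the example, or the largest bucket."""
    for limit in LENGTH_BUCKETS:
        if n_tokens <= limit:
            return limit
    return LENGTH_BUCKETS[-1]

# Hypothetical dataset path; substitute the actual LIBRA dataset when available.
data = load_dataset("example/libra-task", split="test")

per_bucket_counts = {b: 0 for b in LENGTH_BUCKETS}
for example in data:
    n_tokens = len(tokenizer(example["context"])["input_ids"])
    per_bucket_counts[bucket_for(n_tokens)] += 1

print(per_bucket_counts)
```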

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.02439
Document Type:
Working Paper