
The Automated Verification of Textual Claims (AVeriTeC) Shared Task

Authors:
Schlichtkrull, Michael
Chen, Yulong
Whitehouse, Chenxi
Deng, Zhenyun
Akhtar, Mubashara
Aly, Rami
Guo, Zhijiang
Christodoulopoulos, Christos
Cocarascu, Oana
Mittal, Arpit
Thorne, James
Vlachos, Andreas
Publication Year:
2024

Abstract

The Automated Verification of Textual Claims (AVeriTeC) shared task asks participants to retrieve evidence and predict veracity for real-world claims checked by fact-checkers. Evidence can be found either via a search engine or via a knowledge store provided by the organisers. Submissions are evaluated using the AVeriTeC score, which considers a claim to be accurately verified if and only if both the verdict is correct and the retrieved evidence meets a certain quality threshold. The shared task received 21 submissions, 18 of which surpassed our baseline. The winning team was TUDA_MAI with an AVeriTeC score of 63%. In this paper we describe the shared task, present the full results, and highlight key takeaways.
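For intuition, the snippet below is a minimal sketch of the scoring rule described above: a claim only counts as accurately verified when the predicted verdict matches the gold label and the retrieved evidence clears a quality threshold. The field names, the precomputed "evidence_quality" score, and the 0.25 default threshold are illustrative assumptions for this sketch, not the organisers' implementation.

def averitec_score(predictions, quality_threshold=0.25):
    # A claim is counted as accurately verified only if the predicted verdict
    # matches the gold verdict AND the retrieved evidence meets the quality
    # threshold. "evidence_quality" is assumed to be a precomputed match score
    # against gold evidence; the 0.25 default is an illustrative assumption.
    if not predictions:
        return 0.0
    verified = 0
    for pred in predictions:
        evidence_ok = pred["evidence_quality"] >= quality_threshold
        verdict_ok = pred["predicted_verdict"] == pred["gold_verdict"]
        if evidence_ok and verdict_ok:
            verified += 1
    return verified / len(predictions)

# Example: only the first claim satisfies both conditions, so the score is 1/3.
example = [
    {"predicted_verdict": "Refuted", "gold_verdict": "Refuted", "evidence_quality": 0.41},
    {"predicted_verdict": "Supported", "gold_verdict": "Refuted", "evidence_quality": 0.52},
    {"predicted_verdict": "Supported", "gold_verdict": "Supported", "evidence_quality": 0.10},
]
print(averitec_score(example))  # 0.333...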

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2410.23850
Document Type:
Working Paper