1. CheckThat! at CLEF 2020: Enabling the Automatic Identification and Verification of Claims in Social Media
- Author
Alberto Barrón-Cedeño, Tamer Elsayed, Preslav Nakov, Giovanni Da San Martino, Maram Hasanain, Reem Suwaileh, and Fatima Haouari
- Subjects
Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Information Retrieval (cs.IR), Machine Learning (cs.LG), ACM class I.2.7, MSC 68T50, information retrieval, ranking, fact-checking, check-worthiness, claim identification, social media, Web pages, CLEF
- Abstract
We describe the third edition of the CheckThat! Lab, which is part of the 2020 Cross-Language Evaluation Forum (CLEF). CheckThat! proposes four complementary tasks and a related task from previous lab editions, offered in English, Arabic, and Spanish. Task 1 asks to predict which tweets in a Twitter stream are worth fact-checking. Task 2 asks to determine whether a claim posted in a tweet can be verified using a set of previously fact-checked claims. Task 3 asks to retrieve text snippets from a given set of Web pages that would be useful for verifying a target tweet's claim. Task 4 asks to predict the veracity of a target tweet's claim using a set of Web pages and potentially useful snippets in them. Finally, the lab offers a fifth task that asks to predict the check-worthiness of the claims made in English political debates and speeches. CheckThat! features a full evaluation framework. The evaluation is carried out using mean average precision or precision at rank k for the ranking tasks, and F1 for the classification tasks.
- Keywords
Computational journalism, check-worthiness, fact-checking, veracity, CLEF-2020 CheckThat! Lab
- Published
- 2020
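The abstract names mean average precision (MAP) and precision at rank k as the evaluation measures for the ranking tasks, and F1 for the classification tasks. The sketch below is an illustrative, unofficial implementation of those standard metrics, not the lab's official scorer; function names, the 0/1 relevance-label input format, and the toy data are assumptions made here for clarity.

```python
# Unofficial sketch of the metrics mentioned in the abstract:
# precision@k and MAP for ranked lists of 0/1 relevance labels,
# and binary F1 for classification. The lab's official scorers
# may differ in input format and edge-case handling.
from typing import List, Sequence


def precision_at_k(ranked_labels: Sequence[int], k: int) -> float:
    """Fraction of relevant items among the top-k of one ranked list."""
    return sum(ranked_labels[:k]) / k if k > 0 else 0.0


def average_precision(ranked_labels: Sequence[int]) -> float:
    """Average of precision@i over the ranks i where a relevant item appears."""
    hits, score = 0, 0.0
    for i, rel in enumerate(ranked_labels, start=1):
        if rel:
            hits += 1
            score += hits / i
    return score / hits if hits else 0.0


def mean_average_precision(ranked_lists: List[Sequence[int]]) -> float:
    """MAP over several ranked lists (e.g., one per query or per tweet)."""
    return sum(average_precision(r) for r in ranked_lists) / len(ranked_lists)


def f1(gold: Sequence[int], pred: Sequence[int]) -> float:
    """Binary F1 from 0/1 gold and predicted labels."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0


if __name__ == "__main__":
    # Toy data: relevance of the top-5 results for two hypothetical queries.
    ranked = [[1, 0, 1, 0, 0], [0, 1, 1, 1, 0]]
    print(precision_at_k(ranked[0], 3))    # 0.666...
    print(mean_average_precision(ranked))  # MAP over the two ranked lists
    print(f1([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.8
```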