
Overview of the DagPap22 Shared Task on Detecting Automatically Generated Scientific Papers

Authors :
Kashnitsky, Yury
Herrmannova, Drahomira
de Waard, Anita
Tsatsaronis, Georgios
Fennell, Catriona
Labbé, Cyril
Affiliations :
Laboratoire d'Informatique de Grenoble (LIG): Université Pierre Mendès France - Grenoble 2 (UPMF), Université Joseph Fourier - Grenoble 1 (UJF), Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP), Institut National Polytechnique de Grenoble (INPG), Centre National de la Recherche Scientifique (CNRS)
Systèmes d'Information - inGénierie et Modélisation Adaptables (SIGMA): Université Pierre Mendès France - Grenoble 2 (UPMF), Université Joseph Fourier - Grenoble 1 (UJF), Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP), Institut National Polytechnique de Grenoble (INPG), Centre National de la Recherche Scientifique (CNRS)
European Project: 951393, NanoBubbles ("Nano bubbles: how, when and why does science fail to correct itself?")
Source :
Proceedings of the Third Workshop on Scholarly Document Processing, Oct 2022, Gyeongju, South Korea
Publication Year :
2022
Publisher :
HAL CCSD, 2022.

Abstract

This paper provides an overview of the 2022 COLING Scholarly Document Processing workshop shared task on the detection of automatically generated scientific papers. We frame detection as a binary classification task: given an excerpt of text, label it as either human-written or machine-generated. We shared a dataset containing excerpts from human-written papers as well as artificially generated content and suspicious documents collected by Elsevier publishing and editorial teams. As a test set, participants were provided with a corpus five times larger, comprising openly accessible human-written and machine-generated papers from the same scientific domains. The shared task saw 180 submissions across 14 participating teams and resulted in two published technical reports. We discuss our findings from the shared task in this overview paper.
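The task framing above (label an excerpt of text as human-written or machine-generated) can be illustrated with a minimal bag-of-words Naive Bayes baseline. This sketch is purely illustrative and is not one of the participants' systems; the function names, toy excerpts, and labels are all hypothetical.

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a word-level Naive Bayes model with Laplace smoothing.
    docs: list of (text, label) pairs, label in {"human", "generated"}.
    (Toy sketch; real DagPap22 systems were far more sophisticated.)"""
    counts = {"human": Counter(), "generated": Counter()}
    n_docs = Counter()
    for text, label in docs:
        n_docs[label] += 1
        counts[label].update(text.lower().split())
    vocab = set(counts["human"]) | set(counts["generated"])
    return counts, n_docs, vocab

def classify(model, text):
    """Return the label with the higher smoothed log-likelihood."""
    counts, n_docs, vocab = model
    total = sum(n_docs.values())
    best, best_lp = None, float("-inf")
    for label in ("human", "generated"):
        lp = math.log(n_docs[label] / total)  # class prior
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            # add-one smoothing so unseen words do not zero out the score
            lp += math.log((counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

A system for the shared task would train such a model on the labeled excerpts and emit one binary prediction per test excerpt; this baseline exists only to make the task framing concrete.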

Details

Language :
English
Database :
OpenAIRE
Journal :
Proceedings of the Third Workshop on Scholarly Document Processing, Oct 2022, Gyeongju, South Korea
Accession number :
edsair.dedup.wf.001..7714b2898fe4b148eb2b5050f1f55dfa