Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims. Task 2: Factuality

Alberto Barrón, Tamer Elsayed, Reem Suwaileh, Lluís Màrquez, Pepa Atanasova, Wajdi Zaghouani, Spas Kyuchukov, Giovanni Martino, Preslav Nakov

Research output: Contribution to journal › Conference article


Abstract

We present an overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims, with focus on Task 2: Factuality. The task asked to assess whether a given check-worthy claim made by a politician in the context of a debate/speech is factually true, half-true, or false. In terms of data, we focused on debates from the 2016 US Presidential Campaign, as well as on some speeches during and after the campaign (we also provided translations in Arabic), and we relied on comments and factuality judgments from factcheck.org and snopes.com, which we further refined manually. A total of 30 teams registered to participate in the lab, and five of them actually submitted runs. The most successful approaches used by the participants relied on the automatic retrieval of evidence from the Web. Similarities and other relationships between the claim and the retrieved documents were used as input to classifiers in order to make a decision. The best-performing official submissions achieved mean absolute error of .705 and .658 for the English and for the Arabic test sets, respectively. This leaves plenty of room for further improvement, and thus we release all datasets and the scoring scripts, which should enable further research in fact-checking.
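The official metric reported above is mean absolute error over the ordinal veracity labels. As a minimal sketch of how such a score can be computed, assuming an illustrative label encoding (false = 0, half-true = 1, true = 2) that may differ from the lab's released scoring scripts:

```python
# Hedged sketch of the Task 2 metric: mean absolute error (MAE) over
# ordinal veracity labels. The numeric encoding below is an assumption
# for illustration, not the lab's exact scheme.

LABELS = {"false": 0, "half-true": 1, "true": 2}

def mean_absolute_error(gold, predicted):
    """Average absolute distance between gold and predicted labels."""
    if len(gold) != len(predicted):
        raise ValueError("gold and predicted must be the same length")
    total = sum(abs(LABELS[g] - LABELS[p]) for g, p in zip(gold, predicted))
    return total / len(gold)

# Example: two of three claims judged correctly, one off by two steps.
gold = ["true", "half-true", "false"]
pred = ["true", "half-true", "true"]
print(mean_absolute_error(gold, pred))  # 2/3
```

Because the labels are ordinal, MAE penalizes predicting "true" for a "false" claim more heavily than predicting "half-true", which is why it was preferred over plain accuracy.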

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2125
Publication status: Published - 1 Jan 2018
Event: 19th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2018 - Avignon, France
Duration: 10 Sep 2018 - 14 Sep 2018


Keywords

  • Computational journalism
  • Fact-checking
  • Factuality
  • Veracity

ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims. Task 2: Factuality. / Barrón, Alberto; Elsayed, Tamer; Suwaileh, Reem; Màrquez, Lluís; Atanasova, Pepa; Zaghouani, Wajdi; Kyuchukov, Spas; Martino, Giovanni; Nakov, Preslav.

In: CEUR Workshop Proceedings, Vol. 2125, 01.01.2018.


@article{50aee535e7f94024a9f74c527fb57db4,
title = "Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims. Task 2: Factuality",
abstract = "We present an overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims, with focus on Task 2: Factuality. The task asked to assess whether a given check-worthy claim made by a politician in the context of a debate/speech is factually true, half-true, or false. In terms of data, we focused on debates from the 2016 US Presidential Campaign, as well as on some speeches during and after the campaign (we also provided translations in Arabic), and we relied on comments and factuality judgments from factcheck.org and snopes.com, which we further refined manually. A total of 30 teams registered to participate in the lab, and five of them actually submitted runs. The most successful approaches used by the participants relied on the automatic retrieval of evidence from the Web. Similarities and other relationships between the claim and the retrieved documents were used as input to classifiers in order to make a decision. The best-performing official submissions achieved mean absolute error of .705 and .658 for the English and for the Arabic test sets, respectively. This leaves plenty of room for further improvement, and thus we release all datasets and the scoring scripts, which should enable further research in fact-checking.",
keywords = "Computational journalism, Fact-checking, Factuality, Veracity",
author = "Alberto Barrón and Tamer Elsayed and Reem Suwaileh and Lluís Màrquez and Pepa Atanasova and Wajdi Zaghouani and Spas Kyuchukov and Giovanni Martino and Preslav Nakov",
year = "2018",
month = "1",
day = "1",
language = "English",
volume = "2125",
journal = "CEUR Workshop Proceedings",
issn = "1613-0073",
publisher = "CEUR-WS",

}

TY - JOUR

T1 - Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims. Task 2

T2 - Factuality

AU - Barrón, Alberto

AU - Elsayed, Tamer

AU - Suwaileh, Reem

AU - Màrquez, Lluís

AU - Atanasova, Pepa

AU - Zaghouani, Wajdi

AU - Kyuchukov, Spas

AU - Martino, Giovanni

AU - Nakov, Preslav

PY - 2018/1/1

Y1 - 2018/1/1

N2 - We present an overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims, with focus on Task 2: Factuality. The task asked to assess whether a given check-worthy claim made by a politician in the context of a debate/speech is factually true, half-true, or false. In terms of data, we focused on debates from the 2016 US Presidential Campaign, as well as on some speeches during and after the campaign (we also provided translations in Arabic), and we relied on comments and factuality judgments from factcheck.org and snopes.com, which we further refined manually. A total of 30 teams registered to participate in the lab, and five of them actually submitted runs. The most successful approaches used by the participants relied on the automatic retrieval of evidence from the Web. Similarities and other relationships between the claim and the retrieved documents were used as input to classifiers in order to make a decision. The best-performing official submissions achieved mean absolute error of .705 and .658 for the English and for the Arabic test sets, respectively. This leaves plenty of room for further improvement, and thus we release all datasets and the scoring scripts, which should enable further research in fact-checking.

AB - We present an overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims, with focus on Task 2: Factuality. The task asked to assess whether a given check-worthy claim made by a politician in the context of a debate/speech is factually true, half-true, or false. In terms of data, we focused on debates from the 2016 US Presidential Campaign, as well as on some speeches during and after the campaign (we also provided translations in Arabic), and we relied on comments and factuality judgments from factcheck.org and snopes.com, which we further refined manually. A total of 30 teams registered to participate in the lab, and five of them actually submitted runs. The most successful approaches used by the participants relied on the automatic retrieval of evidence from the Web. Similarities and other relationships between the claim and the retrieved documents were used as input to classifiers in order to make a decision. The best-performing official submissions achieved mean absolute error of .705 and .658 for the English and for the Arabic test sets, respectively. This leaves plenty of room for further improvement, and thus we release all datasets and the scoring scripts, which should enable further research in fact-checking.

KW - Computational journalism

KW - Fact-checking

KW - Factuality

KW - Veracity

UR - http://www.scopus.com/inward/record.url?scp=85051077213&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85051077213&partnerID=8YFLogxK

M3 - Conference article

VL - 2125

JO - CEUR Workshop Proceedings

JF - CEUR Workshop Proceedings

SN - 1613-0073

ER -