Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims. Task 1: Check-Worthiness

Pepa Atanasova, Lluís Màrquez, Alberto Barrón-Cedeño, Tamer Elsayed, Reem Suwaileh, Wajdi Zaghouani, Spas Kyuchukov, Giovanni Da San Martino, Preslav Nakov

Research output: Contribution to journal › Conference article

8 Citations (Scopus)

Abstract

We present an overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims, with a focus on Task 1: Check-Worthiness. The task asks systems to predict which claims in a political debate should be prioritized for fact-checking. In particular, given a debate or a political speech, the goal was to produce a ranked list of its sentences based on their worthiness for fact-checking. We offered the task in both English and Arabic, based on debates from the 2016 US Presidential Campaign, as well as on some speeches during and after the campaign. A total of 30 teams registered to participate in the Lab, and seven of them submitted systems for Task 1. The most successful approaches relied on recurrent and multi-layer neural networks, as well as on combinations of distributional representations, on matching claims' vocabulary against lexicons, and on measures of syntactic dependency. The best systems achieved mean average precision of 0.18 and 0.15 on the English and the Arabic test datasets, respectively. This leaves ample room for further improvement, and thus we release all datasets and the scoring scripts, which should enable further research in check-worthiness estimation.
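The official evaluation metric for Task 1 is mean average precision (MAP) over the per-debate sentence rankings. The Python below is a minimal illustrative sketch, not a participant system and not the lab's official scorer: it ranks sentences with a naive, hypothetical cue-word lexicon (CUE_WORDS is invented for illustration) and then computes average precision and MAP the standard way. Participant systems replaced such a toy scorer with recurrent and multi-layer neural networks over distributional representations.

# Illustrative sketch only: neither a participant system nor the official
# CheckThat! scorer. CUE_WORDS is a hypothetical toy lexicon.
CUE_WORDS = {"million", "percent", "taxes", "jobs", "never", "every"}

def score(sentence: str) -> int:
    """Naive check-worthiness score: number of cue-word hits."""
    return sum(tok.lower().strip(".,!?") in CUE_WORDS for tok in sentence.split())

def rank(sentences: list[str]) -> list[int]:
    """Sentence indices, most check-worthy first."""
    return sorted(range(len(sentences)), key=lambda i: -score(sentences[i]))

def average_precision(ranked_labels: list[int]) -> float:
    """AP for one ranked list of binary labels (1 = check-worthy)."""
    hits, ap = 0, 0.0
    for r, y in enumerate(ranked_labels, start=1):
        if y:
            hits += 1
            ap += hits / r
    return ap / hits if hits else 0.0

def mean_average_precision(ranked_label_lists: list[list[int]]) -> float:
    """MAP: the mean of per-debate AP values, the task's official metric."""
    return sum(average_precision(l) for l in ranked_label_lists) / len(ranked_label_lists)

# Toy debate with gold check-worthiness labels.
sents = ["We created two million jobs.", "Thank you all.", "She never paid taxes."]
gold = [1, 0, 1]
order = rank(sents)
print(average_precision([gold[i] for i in order]))  # 1.0 on this toy example

The reported scores of 0.18 (English) and 0.15 (Arabic) are MAP values averaged in exactly this way over all debates and speeches in the respective test sets.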

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2125
ISSN: 1613-0073
Publisher: CEUR-WS
Publication status: Published - 1 Jan 2018
Event: 19th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2018 - Avignon, France
Duration: 10 Sep 2018 - 14 Sep 2018

Keywords

  • Check-worthiness
  • Computational journalism
  • Fact-checking
  • Veracity

ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims. Task 1: Check-Worthiness. / Atanasova, Pepa; Màrquez, Lluís; Barrón-Cedeño, Alberto; Elsayed, Tamer; Suwaileh, Reem; Zaghouani, Wajdi; Kyuchukov, Spas; Da San Martino, Giovanni; Nakov, Preslav.

In: CEUR Workshop Proceedings, Vol. 2125, 01.01.2018.

Research output: Contribution to journal › Conference article
