Overview of the CLEF-2019 CheckThat! Lab

Automatic identification and verification of claims. Task 1: Check-worthiness

Pepa Atanasova, Preslav Nakov, Georgi Karadzhov, Mitra Mohtarami, Giovanni Martino

Research output: Contribution to journal › Conference article

4 Citations (Scopus)

Abstract

We present an overview of the 2nd edition of the CheckThat! Lab, part of CLEF 2019, with focus on Task 1: Check-Worthiness in political debates. The task asks to predict which claims in a political debate should be prioritized for fact-checking. In particular, given a debate or a political speech, the goal is to produce a ranked list of its sentences based on their worthiness for fact-checking. This year, we extended the 2018 dataset with 16 more debates and speeches. A total of 47 teams registered to participate in the lab, and eleven of them actually submitted runs for Task 1 (compared to seven last year). The evaluation results show that the most successful approaches to Task 1 used various neural networks and logistic regression. The best system achieved mean average precision of 0.166 (0.250 on the speeches, and 0.054 on the debates). This leaves large room for improvement, and thus we release all datasets and scoring scripts, which should enable further research in check-worthiness estimation.
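The abstract reports results as mean average precision (MAP) over ranked sentence lists. As a rough sketch of how that metric works (function names are illustrative, not the lab's official scorer):

```python
def average_precision(ranked_labels):
    """ranked_labels: 1 = check-worthy, 0 = not, in predicted rank order."""
    hits, precisions = 0, []
    for i, label in enumerate(ranked_labels, start=1):
        if label == 1:
            hits += 1
            precisions.append(hits / i)  # precision at each relevant rank
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(all_inputs):
    """all_inputs: one ranked label list per debate or speech."""
    return sum(average_precision(r) for r in all_inputs) / len(all_inputs)

# A perfect ranking puts all check-worthy sentences first:
print(average_precision([1, 1, 0, 0]))  # 1.0
# Ranking check-worthy sentences lower reduces AP:
print(average_precision([0, 1, 0, 1]))  # (1/2 + 2/4) / 2 = 0.5
```

The reported MAP of 0.166 is the mean of per-debate/speech average precision scores of this kind.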

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2380
Publication status: Published - 1 Jan 2019
Event: 20th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2019 - Lugano, Switzerland
Duration: 9 Sep 2019 - 12 Sep 2019


Keywords

  • Check-worthiness estimation
  • Computational journalism
  • Fact-checking
  • Veracity

ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Overview of the CLEF-2019 CheckThat! Lab: Automatic identification and verification of claims. Task 1: Check-worthiness. / Atanasova, Pepa; Nakov, Preslav; Karadzhov, Georgi; Mohtarami, Mitra; Martino, Giovanni.

In: CEUR Workshop Proceedings, Vol. 2380, 01.01.2019.


@article{4d32a65c59cd49aea6c06f324772314a,
title = "Overview of the CLEF-2019 CheckThat! Lab: Automatic identification and verification of claims. Task 1: Check-worthiness",
abstract = "We present an overview of the 2nd edition of the CheckThat! Lab, part of CLEF 2019, with focus on Task 1: Check-Worthiness in political debates. The task asks to predict which claims in a political debate should be prioritized for fact-checking. In particular, given a debate or a political speech, the goal is to produce a ranked list of its sentences based on their worthiness for fact-checking. This year, we extended the 2018 dataset with 16 more debates and speeches. A total of 47 teams registered to participate in the lab, and eleven of them actually submitted runs for Task 1 (compared to seven last year). The evaluation results show that the most successful approaches to Task 1 used various neural networks and logistic regression. The best system achieved mean average precision of 0.166 (0.250 on the speeches, and 0.054 on the debates). This leaves large room for improvement, and thus we release all datasets and scoring scripts, which should enable further research in check-worthiness estimation.",
keywords = "Check-worthiness estimation, Computational journalism, Fact-checking, Veracity",
author = "Pepa Atanasova and Preslav Nakov and Georgi Karadzhov and Mitra Mohtarami and Giovanni Martino",
year = "2019",
month = "1",
day = "1",
language = "English",
volume = "2380",
journal = "CEUR Workshop Proceedings",
issn = "1613-0073",
publisher = "CEUR-WS",
}

TY - JOUR

T1 - Overview of the CLEF-2019 CheckThat! Lab

T2 - Automatic identification and verification of claims. Task 1: Check-worthiness

AU - Atanasova, Pepa

AU - Nakov, Preslav

AU - Karadzhov, Georgi

AU - Mohtarami, Mitra

AU - Martino, Giovanni

PY - 2019/1/1

Y1 - 2019/1/1

N2 - We present an overview of the 2nd edition of the CheckThat! Lab, part of CLEF 2019, with focus on Task 1: Check-Worthiness in political debates. The task asks to predict which claims in a political debate should be prioritized for fact-checking. In particular, given a debate or a political speech, the goal is to produce a ranked list of its sentences based on their worthiness for fact-checking. This year, we extended the 2018 dataset with 16 more debates and speeches. A total of 47 teams registered to participate in the lab, and eleven of them actually submitted runs for Task 1 (compared to seven last year). The evaluation results show that the most successful approaches to Task 1 used various neural networks and logistic regression. The best system achieved mean average precision of 0.166 (0.250 on the speeches, and 0.054 on the debates). This leaves large room for improvement, and thus we release all datasets and scoring scripts, which should enable further research in check-worthiness estimation.

AB - We present an overview of the 2nd edition of the CheckThat! Lab, part of CLEF 2019, with focus on Task 1: Check-Worthiness in political debates. The task asks to predict which claims in a political debate should be prioritized for fact-checking. In particular, given a debate or a political speech, the goal is to produce a ranked list of its sentences based on their worthiness for fact-checking. This year, we extended the 2018 dataset with 16 more debates and speeches. A total of 47 teams registered to participate in the lab, and eleven of them actually submitted runs for Task 1 (compared to seven last year). The evaluation results show that the most successful approaches to Task 1 used various neural networks and logistic regression. The best system achieved mean average precision of 0.166 (0.250 on the speeches, and 0.054 on the debates). This leaves large room for improvement, and thus we release all datasets and scoring scripts, which should enable further research in check-worthiness estimation.

KW - Check-worthiness estimation

KW - Computational journalism

KW - Fact-checking

KW - Veracity

UR - http://www.scopus.com/inward/record.url?scp=85070508754&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85070508754&partnerID=8YFLogxK

M3 - Conference article

VL - 2380

JO - CEUR Workshop Proceedings

JF - CEUR Workshop Proceedings

SN - 1613-0073

ER -