Overview of the CLEF-2019 CheckThat! Lab

Automatic identification and verification of claims. Task 2: Evidence and factuality

Maram Hasanain, Reem Suwaileh, Tamer Elsayed, Alberto Barron, Preslav Nakov

Research output: Contribution to journal › Conference article

2 Citations (Scopus)

Abstract

We present an overview of Task 2 of the second edition of the CheckThat! Lab at CLEF 2019. Task 2 asked (A) to rank a given set of Web pages with respect to a check-worthy claim based on their usefulness for fact-checking that claim, (B) to classify these same Web pages according to their degree of usefulness for fact-checking the target claim, (C) to identify useful passages from these pages, and (D) to use the useful pages to predict the claim's factuality. Task 2 at CheckThat! provided a full evaluation framework, consisting of data in Arabic (gathered and annotated from scratch) and evaluation based on normalized discounted cumulative gain (nDCG) for ranking, and F1 for classification. Four teams submitted runs. The most successful approach to subtask A used learning-to-rank, while different classifiers were used in the other subtasks. We release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important task of evidence-based automatic claim verification.
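
The abstract names nDCG for the ranking subtask (A) and F1 for the classification subtasks. As a rough illustration only, and not the lab's released evaluation scripts, the Python sketch below computes nDCG@k from graded usefulness gains and F1 from binary usefulness labels; all gain values and labels in it are hypothetical.

# Minimal sketch of the two measures named in the abstract: nDCG@k for ranking
# and F1 for classification. Illustrative only; not the official CheckThat!
# evaluation code. All numeric inputs below are hypothetical.
import math
from typing import List

def dcg(gains: List[float], k: int) -> float:
    # Discounted cumulative gain over the top-k positions with a log2 discount.
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg(ranked_gains: List[float], k: int) -> float:
    # nDCG@k: DCG of the system's ranking divided by DCG of the ideal ranking.
    ideal_dcg = dcg(sorted(ranked_gains, reverse=True), k)
    return dcg(ranked_gains, k) / ideal_dcg if ideal_dcg > 0 else 0.0

def f1(gold: List[int], pred: List[int], positive: int = 1) -> float:
    # F1 for one positive class: harmonic mean of precision and recall.
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    # Hypothetical graded usefulness of Web pages in the order a system ranked them.
    print(round(ndcg([2, 0, 1, 2, 0], k=5), 3))
    # Hypothetical usefulness labels (1 = useful, 0 = not useful) for subtask B.
    print(round(f1(gold=[1, 0, 1, 1, 0], pred=[1, 1, 1, 0, 0]), 3))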

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2380
Publication status: Published - 1 Jan 2019
Event: 20th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2019 - Lugano, Switzerland
Duration: 9 Sep 2019 - 12 Sep 2019

Keywords

  • Computational Journalism
  • Evidence-based Verification
  • Fact-Checking
  • Fake News Detection
  • Veracity

ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Overview of the CLEF-2019 CheckThat! Lab: Automatic identification and verification of claims. Task 2: Evidence and factuality. / Hasanain, Maram; Suwaileh, Reem; Elsayed, Tamer; Barron, Alberto; Nakov, Preslav.

In: CEUR Workshop Proceedings, Vol. 2380, 01.01.2019.

Research output: Contribution to journal › Conference article

@article{170d1fdf23e54139ad5abc41f2b5708f,
title = "Overview of the CLEF-2019 Checkthat! LAB: Automatic identification and verification of claims. Task 2: Evidence and factuality",
abstract = "We present an overview of Task 2 of the second edition of the CheckThat! Lab at CLEF 2019. Task 2 asked (A) to rank a given set of Web pages with respect to a check-worthy claim based on their usefulness for fact-checking that claim, (B) to classify these same Web pages according to their degree of usefulness for fact-checking the target claim, (C) to identify useful passages from these pages, and (D) to use the useful pages to predict the claim's factuality. Task 2 at CheckThat! provided a full evaluation framework, consisting of data in Arabic (gathered and annotated from scratch) and evaluation based on normalized discounted cumulative gain (nDCG) for ranking, and F1 for classification. Four teams submitted runs. The most successful approach to subtask A used learning-to-rank, while different classifiers were used in the other subtasks. We release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important task of evidence-based automatic claim verification.",
keywords = "Computational Journalism, Evidence-based Verification, Fact-Checking, Fake News Detection, Veracity",
author = "Maram Hasanain and Reem Suwaileh and Tamer Elsayed and Alberto Barron and Preslav Nakov",
year = "2019",
month = "1",
day = "1",
language = "English",
volume = "2380",
journal = "CEUR Workshop Proceedings",
issn = "1613-0073",
publisher = "CEUR-WS",

}

TY - JOUR

T1 - Overview of the CLEF-2019 CheckThat! Lab

T2 - Automatic identification and verification of claims. Task 2: Evidence and factuality

AU - Hasanain, Maram

AU - Suwaileh, Reem

AU - Elsayed, Tamer

AU - Barron, Alberto

AU - Nakov, Preslav

PY - 2019/1/1

Y1 - 2019/1/1

N2 - We present an overview of Task 2 of the second edition of the CheckThat! Lab at CLEF 2019. Task 2 asked (A) to rank a given set of Web pages with respect to a check-worthy claim based on their usefulness for fact-checking that claim, (B) to classify these same Web pages according to their degree of usefulness for fact-checking the target claim, (C) to identify useful passages from these pages, and (D) to use the useful pages to predict the claim's factuality. Task 2 at CheckThat! provided a full evaluation framework, consisting of data in Arabic (gathered and annotated from scratch) and evaluation based on normalized discounted cumulative gain (nDCG) for ranking, and F1 for classification. Four teams submitted runs. The most successful approach to subtask A used learning-to-rank, while different classifiers were used in the other subtasks. We release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important task of evidence-based automatic claim verification.

AB - We present an overview of Task 2 of the second edition of the CheckThat! Lab at CLEF 2019. Task 2 asked (A) to rank a given set of Web pages with respect to a check-worthy claim based on their usefulness for fact-checking that claim, (B) to classify these same Web pages according to their degree of usefulness for fact-checking the target claim, (C) to identify useful passages from these pages, and (D) to use the useful pages to predict the claim's factuality. Task 2 at CheckThat! provided a full evaluation framework, consisting of data in Arabic (gathered and annotated from scratch) and evaluation based on normalized discounted cumulative gain (nDCG) for ranking, and F1 for classification. Four teams submitted runs. The most successful approach to subtask A used learning-to-rank, while different classifiers were used in the other subtasks. We release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important task of evidence-based automatic claim verification.

KW - Computational Journalism

KW - Evidence-based Verification

KW - Fact-Checking

KW - Fake News Detection

KW - Veracity

UR - http://www.scopus.com/inward/record.url?scp=85070501566&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85070501566&partnerID=8YFLogxK

M3 - Conference article

VL - 2380

JO - CEUR Workshop Proceedings

JF - CEUR Workshop Proceedings

SN - 1613-0073

ER -