Overview of the CLEF-2019 CheckThat! Lab: Automatic Identification and Verification of Claims. Task 2: Evidence and Factuality

Maram Hasanain, Reem Suwaileh, Tamer Elsayed, Alberto Barrón-Cedeño, Preslav Nakov

Research output: Contribution to journal › Conference article

5 Citations (Scopus)

Abstract

We present an overview of Task 2 of the second edition of the CheckThat! Lab at CLEF 2019. Task 2 asked (A) to rank a given set of Web pages with respect to a check-worthy claim based on their usefulness for fact-checking that claim, (B) to classify these same Web pages according to their degree of usefulness for fact-checking the target claim, (C) to identify useful passages from these pages, and (D) to use the useful pages to predict the claim's factuality. Task 2 at CheckThat! provided a full evaluation framework, consisting of data in Arabic (gathered and annotated from scratch) and evaluation based on normalized discounted cumulative gain (nDCG) for ranking, and F1 for classification. Four teams submitted runs. The most successful approach to subtask A used learning-to-rank, while different classifiers were used in the other subtasks. We release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important task of evidence-based automatic claim verification.
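The ranking subtask (A) is scored with normalized discounted cumulative gain (nDCG). As a minimal, illustrative sketch of how that metric works, the snippet below computes nDCG@k over the usefulness gains of Web pages in the order a system ranked them; the function names and the toy gain values are assumptions for illustration, not the lab's released evaluation script.

```python
import math


def dcg(gains, k):
    """Discounted cumulative gain over the top-k ranked items
    (gain at rank i is discounted by log2(i + 1), 1-indexed)."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))


def ndcg(ranked_gains, k):
    """nDCG@k: DCG of the system ranking divided by the DCG of the
    ideal ranking (the same gains sorted in decreasing order)."""
    ideal_dcg = dcg(sorted(ranked_gains, reverse=True), k)
    return dcg(ranked_gains, k) / ideal_dcg if ideal_dcg > 0 else 0.0


# Hypothetical usefulness gains of five retrieved Web pages for one claim,
# listed in the order the system ranked them (higher = more useful).
system_gains = [2, 0, 1, 2, 0]
print(f"nDCG@5 = {ndcg(system_gains, k=5):.4f}")
```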

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2380
Publication status: Published - 1 Jan 2019
Event: 20th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2019 - Lugano, Switzerland
Duration: 9 Sep 2019 - 12 Sep 2019

Keywords

  • Computational Journalism
  • Evidence-based Verification
  • Fact-Checking
  • Fake News Detection
  • Veracity

ASJC Scopus subject areas

  • Computer Science (all)