Rumour verification through recurring information and an inner-attention mechanism

Ahmet Aker, Alfred Sliwa, Fahim Dalvi, Kalina Bontcheva

Research output: Contribution to journal › Article

Abstract

Verification of online rumours is becoming an increasingly important task with the prevalence of event discussions on social media platforms. This paper proposes an inner-attention-based neural network model that uses frequent, recurring terms from past rumours to classify a newly emerging rumour as true, false or unverified. Unlike other methods proposed in related work, our model uses the source rumour alone, without any additional information such as user replies to the rumour or additional feature engineering. Our method outperforms the current state-of-the-art methods on a benchmark dataset (RumourEval2017) by 3% accuracy and 6% F-1, reaching 60.7% accuracy and 61.6% F-1. We also compare our attention-based method to two similar models which, however, do not make use of recurring terms. The attention-based method guided by frequent recurring terms outperforms these baselines on the same dataset, indicating that the recurring terms injected by the attention mechanism have a high positive impact on distinguishing between true and false rumours. Furthermore, we perform out-of-domain evaluations and show that our model is highly competitive with the baselines on the newly released RumourEval2019 dataset, and also achieves the best performance on classifying fake and legitimate news headlines.
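To make the core idea concrete, the following is a minimal, self-contained sketch of an attention-style encoder guided by recurring terms. It is not the authors' exact architecture (the paper's model is a trained neural network); this toy NumPy version only illustrates the mechanism the abstract describes: tokens of the source rumour are scored against a sentence-level context vector, tokens found in a lexicon of frequent recurring terms from past rumours get their attention score boosted, and the rumour is encoded as the attention-weighted sum of its token embeddings. The function name, the `boost` parameter, and the toy embeddings are all illustrative assumptions.

```python
import numpy as np

def inner_attention_encode(tokens, embeddings, recurring_terms, boost=2.0):
    """Encode a rumour as an attention-weighted sum of token embeddings.

    tokens          : list of token strings from the source rumour
    embeddings      : dict mapping token -> embedding vector
    recurring_terms : set of frequent terms observed in past rumours
    boost           : multiplicative prior for recurring terms (assumed value)
    """
    E = np.array([embeddings[t] for t in tokens])        # (n_tokens, dim)
    context = E.mean(axis=0)                             # sentence-level query
    scores = E @ context                                 # inner-attention scores
    # Inject prior knowledge: up-weight tokens in the recurring-term lexicon
    prior = np.array([boost if t in recurring_terms else 1.0 for t in tokens])
    scores = scores * prior
    weights = np.exp(scores - scores.max())              # stable softmax
    weights /= weights.sum()
    return weights @ E                                   # attended representation
```

In the full model this representation would feed a classifier over the three labels (true, false, unverified); here only the attention step is sketched.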

Original language: English
Article number: 100045
Journal: Online Social Networks and Media
Volume: 13
DOI: 10.1016/j.osnem.2019.07.001
Publication status: Published - 1 Sep 2019

Keywords

  • Inner Attention Model
  • Recurring Terms in Rumours
  • Rumour Verification

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Communication

Cite this

Rumour verification through recurring information and an inner-attention mechanism. / Aker, Ahmet; Sliwa, Alfred; Dalvi, Fahim; Bontcheva, Kalina.

In: Online Social Networks and Media, Vol. 13, 100045, 01.09.2019.

Research output: Contribution to journal › Article

@article{f490c99a7db04c999e93da74c7ff84db,
title = "Rumour verification through recurring information and an inner-attention mechanism",
abstract = "Verification of online rumours is becoming an increasingly important task with the prevalence of event discussions on social media platforms. This paper proposes an inner-attention-based neural network model that uses frequent, recurring terms from past rumours to classify a newly emerging rumour as true, false or unverified. Unlike other methods proposed in related work, our model uses the source rumour alone without any additional information, such as user replies to the rumour or additional feature engineering. Our method outperforms the current state-of-the-art methods on benchmark datasets (RumourEval2017) by 3{\%} accuracy and 6{\%} F-1 leading to 60.7{\%} accuracy and 61.6{\%} F-1. We also compare our attention-based method to two similar models which however do not make use of recurrent terms. The attention-based method guided by frequent recurring terms outperforms this baseline on the same dataset, indicating that the recurring terms injected by the attention mechanism have high positive impact on distinguishing between true and false rumours. Furthermore, we perform out-of-domain evaluations and show that our model is indeed highly competitive compared to the baselines on a newly released RumourEval2019 dataset and also achieves the best performance on classifying fake and legitimate news headlines.",
keywords = "Inner Attention Model, Recurring Terms in Rumours, Rumour Verification",
author = "Ahmet Aker and Alfred Sliwa and Fahim Dalvi and Kalina Bontcheva",
year = "2019",
month = "9",
day = "1",
doi = "10.1016/j.osnem.2019.07.001",
language = "English",
volume = "13",
journal = "Online Social Networks and Media",
issn = "2468-6964",
publisher = "Elsevier BV",

}

TY - JOUR

T1 - Rumour verification through recurring information and an inner-attention mechanism

AU - Aker, Ahmet

AU - Sliwa, Alfred

AU - Dalvi, Fahim

AU - Bontcheva, Kalina

PY - 2019/9/1

Y1 - 2019/9/1

N2 - Verification of online rumours is becoming an increasingly important task with the prevalence of event discussions on social media platforms. This paper proposes an inner-attention-based neural network model that uses frequent, recurring terms from past rumours to classify a newly emerging rumour as true, false or unverified. Unlike other methods proposed in related work, our model uses the source rumour alone without any additional information, such as user replies to the rumour or additional feature engineering. Our method outperforms the current state-of-the-art methods on benchmark datasets (RumourEval2017) by 3% accuracy and 6% F-1 leading to 60.7% accuracy and 61.6% F-1. We also compare our attention-based method to two similar models which however do not make use of recurrent terms. The attention-based method guided by frequent recurring terms outperforms this baseline on the same dataset, indicating that the recurring terms injected by the attention mechanism have high positive impact on distinguishing between true and false rumours. Furthermore, we perform out-of-domain evaluations and show that our model is indeed highly competitive compared to the baselines on a newly released RumourEval2019 dataset and also achieves the best performance on classifying fake and legitimate news headlines.

AB - Verification of online rumours is becoming an increasingly important task with the prevalence of event discussions on social media platforms. This paper proposes an inner-attention-based neural network model that uses frequent, recurring terms from past rumours to classify a newly emerging rumour as true, false or unverified. Unlike other methods proposed in related work, our model uses the source rumour alone without any additional information, such as user replies to the rumour or additional feature engineering. Our method outperforms the current state-of-the-art methods on benchmark datasets (RumourEval2017) by 3% accuracy and 6% F-1 leading to 60.7% accuracy and 61.6% F-1. We also compare our attention-based method to two similar models which however do not make use of recurrent terms. The attention-based method guided by frequent recurring terms outperforms this baseline on the same dataset, indicating that the recurring terms injected by the attention mechanism have high positive impact on distinguishing between true and false rumours. Furthermore, we perform out-of-domain evaluations and show that our model is indeed highly competitive compared to the baselines on a newly released RumourEval2019 dataset and also achieves the best performance on classifying fake and legitimate news headlines.

KW - Inner Attention Model

KW - Recurring Terms in Rumours

KW - Rumour Verification

UR - http://www.scopus.com/inward/record.url?scp=85071501184&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85071501184&partnerID=8YFLogxK

U2 - 10.1016/j.osnem.2019.07.001

DO - 10.1016/j.osnem.2019.07.001

M3 - Article

VL - 13

JO - Online Social Networks and Media

JF - Online Social Networks and Media

SN - 2468-6964

M1 - 100045

ER -