Detecting toxicity triggers in online discussions

Hind Almerekhi, Bernard J. Jansen, Haewoon Kwak, Joni Salminen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Despite the considerable interest in the detection of toxic comments, there has been little research investigating the causes, i.e., the triggers, of toxicity. In this work, we first propose a formal definition of triggers of toxicity in online communities. We then build an LSTM neural network model using textual features of comments and, based on a comprehensive review of previous literature, incorporate topical and sentiment shifts in interactions as additional features. Our model achieves an average accuracy of 82.5% in detecting toxicity triggers across diverse Reddit communities.
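A minimal sketch of this kind of two-branch model, assuming a Keras/TensorFlow setup, could combine an LSTM over the comment tokens with the interaction-level shift features mentioned in the abstract. This is only an illustration, not the authors' implementation: the vocabulary size, sequence length, layer widths, and the input names tokens and topic_and_sentiment_shift are assumptions made for the example.

import numpy as np
from tensorflow.keras import layers, Model

VOCAB_SIZE = 20_000   # assumed vocabulary size (illustrative)
MAX_LEN = 100         # assumed maximum comment length in tokens (illustrative)

# Text branch: embed the tokenized comment and summarize it with an LSTM.
text_in = layers.Input(shape=(MAX_LEN,), name="tokens")
embedded = layers.Embedding(VOCAB_SIZE, 128, mask_zero=True)(text_in)
text_vec = layers.LSTM(64)(embedded)

# Auxiliary branch: topical shift and sentiment shift between a comment and
# the replies it receives (two scalar features; names are assumptions).
shift_in = layers.Input(shape=(2,), name="topic_and_sentiment_shift")

# Merge both branches and predict trigger vs. non-trigger.
merged = layers.concatenate([text_vec, shift_in])
hidden = layers.Dense(32, activation="relu")(merged)
output = layers.Dense(1, activation="sigmoid", name="is_trigger")(hidden)

model = Model(inputs=[text_in, shift_in], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy forward pass with random data, only to show the expected input shapes.
tokens = np.random.randint(1, VOCAB_SIZE, size=(8, MAX_LEN))
shifts = np.random.rand(8, 2).astype("float32")
labels = np.random.randint(0, 2, size=(8, 1))
model.fit([tokens, shifts], labels, epochs=1, verbose=0)

In a setup like this, the sigmoid output would be thresholded to decide whether a comment is likely to trigger toxic replies.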

Original language: English
Title of host publication: HT 2019 - Proceedings of the 30th ACM Conference on Hypertext and Social Media
Publisher: Association for Computing Machinery, Inc
Pages: 291-292
Number of pages: 2
ISBN (Electronic): 9781450368858
DOIs: https://doi.org/10.1145/3342220.3344933
Publication status: Published - 12 Sep 2019
Event: 30th ACM Conference on Hypertext and Social Media, HT 2019 - Hof, Germany
Duration: 17 Sep 2019 - 20 Sep 2019

Publication series

Name: HT 2019 - Proceedings of the 30th ACM Conference on Hypertext and Social Media

Conference

Conference: 30th ACM Conference on Hypertext and Social Media, HT 2019
Country: Germany
City: Hof
Period: 17/9/19 - 20/9/19

Keywords

  • Neural networks
  • Reddit
  • Social media
  • Toxicity
  • Trigger detection

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
  • Human-Computer Interaction
  • Computer Graphics and Computer-Aided Design

Cite this

Almerekhi, H., Jansen, B. J., Kwak, H., & Salminen, J. (2019). Detecting toxicity triggers in online discussions. In HT 2019 - Proceedings of the 30th ACM Conference on Hypertext and Social Media (pp. 291-292). (HT 2019 - Proceedings of the 30th ACM Conference on Hypertext and Social Media). Association for Computing Machinery, Inc. https://doi.org/10.1145/3342220.3344933
