Improving text classification accuracy by training label cleaning

Andrea Esuli, Fabrizio Sebastiani

Research output: Contribution to journal › Article

9 Citations (Scopus)

Abstract

In text classification (TC) and other tasks involving supervised learning, labelled data may be scarce or expensive to obtain. Semi-supervised learning and active learning are two strategies whose aim is maximizing the effectiveness of the resulting classifiers for a given amount of training effort. Both strategies have been actively investigated for TC in recent years. Much less research has been devoted to a third such strategy, training label cleaning (TLC), which consists in devising ranking functions that sort the original training examples in terms of how likely it is that the human annotator has mislabelled them. This provides a convenient means for the human annotator to revise the training set so as to improve its quality. Working in the context of boosting-based learning methods for multilabel classification, we present three different techniques for performing TLC and, on three widely used TC benchmarks, evaluate them by their capability of spotting training documents that, for experimental reasons only, we have purposefully mislabelled. We also evaluate the degradation in classification effectiveness that these mislabelled texts bring about, and the extent to which training label cleaning can prevent this degradation.
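The core TLC idea — ranking training examples by how likely their labels are wrong — can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's boosting-based techniques: it uses a simple held-out centroid classifier (an assumption for illustration) and scores each example by how strongly the classifier disagrees with its given label, so a human annotator can inspect the top of the ranking first.

```python
# Illustrative sketch of training label cleaning (TLC) by confidence
# ranking. NOT the paper's boosting-based method: a toy held-out
# centroid classifier stands in for the real learner.

def centroid(points):
    # Component-wise mean of a list of equal-length feature vectors.
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def dist2(a, b):
    # Squared Euclidean distance.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def suspicion_scores(X, y):
    """Score each training example by how suspicious its label looks.

    For each example, recompute per-class centroids with that example
    held out; the score is (distance to own-class centroid) minus
    (distance to nearest other-class centroid). A positive score means
    another class fits the example better than its given label.
    """
    scores = []
    for i in range(len(X)):
        cents = {}
        for label in set(y):
            pts = [X[j] for j in range(len(X)) if j != i and y[j] == label]
            cents[label] = centroid(pts)
        own = dist2(X[i], cents[y[i]])
        other = min(d for l, c in cents.items()
                    if l != y[i] for d in [dist2(X[i], c)])
        scores.append(own - other)
    return scores

# Toy data: two clusters; the example at index 4 is deliberately
# mislabelled "b" although it sits in the "a" cluster.
X = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (0.1, 0.0), (5.2, 4.9)]
y = ["a", "a", "a", "b", "b", "b"]

scores = suspicion_scores(X, y)
# Rank examples most-suspicious first; the annotator revises from the top.
ranking = sorted(range(len(X)), key=lambda i: -scores[i])
```

Here `ranking[0]` is index 4, the planted error — mirroring the paper's evaluation protocol of purposefully mislabelling documents and measuring how well the ranking surfaces them.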

Original language: English
Article number: 19
Journal: ACM Transactions on Information Systems
Volume: 31
Issue number: 4
DOI: 10.1145/2516889
ISSN: 1046-8188
Publication status: Published - Nov 2013
Externally published: Yes

Keywords

  • Supervised learning
  • Synthetic noise
  • Text classification
  • Training label cleaning
  • Training label noise

ASJC Scopus subject areas

  • Information Systems
  • Business, Management and Accounting (all)
  • Computer Science Applications

Cite this

Esuli, A., & Sebastiani, F. (2013). Improving text classification accuracy by training label cleaning. ACM Transactions on Information Systems, 31(4), Article 19. https://doi.org/10.1145/2516889
