MP-boost

A multiple-pivot boosting algorithm and its application to text categorization

Andrea Esuli, Tiziano Fagni, Fabrizio Sebastiani

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

19 Citations (Scopus)

Abstract

ADABOOST.MH is a popular supervised learning algorithm for building multi-label (aka n-of-m) text classifiers. ADABOOST.MH belongs to the family of "boosting" algorithms, and works by iteratively building a committee of "decision stump" classifiers, where each such classifier is trained to especially concentrate on the document-class pairs that previously generated classifiers have found harder to classify correctly. Each decision stump hinges on a specific "pivot term", checking its presence or absence in the test document in order to take its classification decision. In this paper we propose an improved version of ADABOOST.MH, called MP-BOOST, obtained by selecting, at each iteration of the boosting process, not one but several pivot terms, one for each category. The rationale behind this choice is that it provides highly individualized treatment for each category, since each iteration thus generates, for each category, the best possible decision stump. We present the results of experiments showing that MP-BOOST is much more effective than ADABOOST.MH. In particular, the improvement in effectiveness is spectacular when few boosting iterations are performed, and (only) high when many such iterations are performed. The improvement is especially significant in the case of macroaveraged effectiveness, which shows that MP-BOOST is especially good at working with hard, infrequent categories.
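To make the per-category pivot selection described above concrete, the following is a minimal, hypothetical Python sketch added for illustration only. It is not the authors' implementation: the hard ±1 decision stumps, the weighted-error selection rule, and names such as best_stump_for_category and mp_boost_iteration are assumptions made for the sketch, whereas the actual ADABOOST.MH and MP-BOOST weak learners are confidence-rated (real-valued) in Schapire and Singer's framework.

```python
import numpy as np

# Toy sketch (not the paper's code): one boosting iteration in the style of
# MP-BOOST, which picks a separate pivot term for each category instead of a
# single pivot shared by all categories as in ADABOOST.MH. Hard +/-1 stumps
# and weighted-error selection are simplifying assumptions.


def best_stump_for_category(X, y, w):
    """Pick the pivot term whose presence/absence test minimises the
    weighted error for one category.

    X: (n_docs, n_terms) binary term-presence matrix
    y: (n_docs,) labels in {-1, +1} for this category
    w: (n_docs,) this category's current weight distribution
    """
    best_term, best_polarity, best_err = None, None, float("inf")
    for t in range(X.shape[1]):
        present = X[:, t] > 0
        for polarity in (+1, -1):  # +1 means "term present => in category"
            pred = np.where(present, polarity, -polarity)
            err = w[pred != y].sum()
            if err < best_err:
                best_term, best_polarity, best_err = t, polarity, err
    return best_term, best_polarity, best_err


def mp_boost_iteration(X, Y, W):
    """Unlike single-pivot ADABOOST.MH, select one pivot term per category,
    each chosen against that category's own column of document weights."""
    return [best_stump_for_category(X, Y[:, c], W[:, c])
            for c in range(Y.shape[1])]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = (rng.random((8, 5)) > 0.5).astype(int)     # toy document-term matrix
    Y = np.where(rng.random((8, 3)) > 0.5, 1, -1)  # toy multi-label targets
    W = np.full(Y.shape, 1.0 / Y.size)             # uniform initial weights
    for c, stump in enumerate(mp_boost_iteration(X, Y, W)):
        print(f"category {c}: pivot term {stump[0]}, "
              f"polarity {stump[1]:+d}, weighted error {stump[2]:.3f}")
```

The point of the sketch is the loop inside mp_boost_iteration: at each iteration, ADABOOST.MH would commit to a single pivot term for all categories, while MP-BOOST selects, for each category, the stump that is best for that category alone.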

Original language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Pages: 1-12
Number of pages: 12
Volume: 4209 LNCS
ISBN: 3540457747, 9783540457749
Publication status: Published - 2006
Externally published: Yes
Event: 13th International Conference on String Processing and Information Retrieval, SPIRE 2006 - Glasgow
Duration: 11 Oct 2006 - 13 Oct 2006

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4209 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 13th International Conference on String Processing and Information Retrieval, SPIRE 2006
City: Glasgow
Period: 11/10/06 - 13/10/06

ASJC Scopus subject areas

  • Computer Science (all)
  • Biochemistry, Genetics and Molecular Biology (all)
  • Theoretical Computer Science

Cite this

Esuli, A., Fagni, T., & Sebastiani, F. (2006). MP-boost: A multiple-pivot boosting algorithm and its application to text categorization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4209 LNCS, pp. 1-12). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 4209 LNCS).

