Sparse reductions for fixed-size least squares support vector machines on large scale data

Raghvendra Mall, Johan A.K. Suykens

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

13 Citations (Scopus)

Abstract

Fixed-Size Least Squares Support Vector Machines (FS-LSSVM) is a powerful tool for solving large scale classification and regression problems. FS-LSSVM solves an over-determined system of M linear equations by using Nyström approximations on a set of prototype vectors (PVs) in the primal. This introduces sparsity in the model along with the ability to scale to large datasets. However, there exists no formal method for selecting the right value of M. In this paper, we investigate the sparsity-error trade-off by introducing a second level of sparsity after performing one iteration of FS-LSSVM. This helps to overcome the problem of selecting the right number of initial PVs, as the final model is highly sparse and depends only on a few appropriately selected prototype vectors (the SV set), which form a subset of the PVs. The first proposed method performs an iterative approximation of the L0-norm, which acts as a regularizer. The second method belongs to the category of threshold methods: we set a window and select the SV set from correctly classified PVs closer to and farther from the decision boundary in the case of classification. For regression, we obtain the SV set by selecting the PVs with the smallest minimum squared error (mse). Experiments on real world datasets from the UCI repository illustrate that highly sparse models are obtained without a significant trade-off in estimation error and that the models scale to large datasets.
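The abstract outlines a two-stage procedure: an FS-LSSVM model is first fit in the primal on Nyström features built from M prototype vectors, and the resulting coefficients are then sparsified further, for instance via an iteratively reweighted surrogate of the L0-norm. The sketch below is a minimal NumPy illustration of that idea and is not the authors' implementation: the random PV selection, the RBF bandwidth sigma, the regularization constant gamma, and the reweighting rule lambda_j = 1/(w_j^2 + eps) are assumptions made for illustration (the paper uses quadratic Rényi entropy based PV selection and its own reweighting scheme).

```python
# Minimal sketch (illustrative only) of a two-stage FS-LSSVM with an
# iteratively reweighted L0-style reduction, under the assumptions stated above.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def nystroem_features(X, PV, sigma=1.0):
    """Approximate feature map phi_tilde(x) from the eigendecomposition of K(PV, PV)."""
    K_mm = rbf_kernel(PV, PV, sigma)
    eigvals, eigvecs = np.linalg.eigh(K_mm)
    keep = eigvals > 1e-10                      # drop numerically zero directions
    U, L = eigvecs[:, keep], eigvals[keep]
    K_nm = rbf_kernel(X, PV, sigma)
    return K_nm @ U / np.sqrt(L)                # n x M_eff feature matrix

def ridge_solve(Phi, y, weights, gamma=1.0):
    """Solve the (weighted) regularized least squares system in the primal."""
    Phi_b = np.hstack([Phi, np.ones((Phi.shape[0], 1))])       # append bias column
    A = gamma * Phi_b.T @ Phi_b + np.diag(np.append(weights, 0.0))
    return np.linalg.solve(A, gamma * Phi_b.T @ y)

def fs_lssvm_l0(X, y, n_prototypes=50, n_reweight_iters=10,
                sigma=1.0, gamma=1.0, eps=1e-4, rng=None):
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=n_prototypes, replace=False)  # random PVs here;
    PV = X[idx]                                                 # the paper selects PVs by entropy
    Phi = nystroem_features(X, PV, sigma)
    w = ridge_solve(Phi, y, np.ones(Phi.shape[1]), gamma)       # stage 1: plain FS-LSSVM

    # Stage 2: iteratively reweighted ridge as a smooth surrogate for the L0 penalty,
    # driving most coefficients towards zero (assumed rule: lambda_j = 1/(w_j^2 + eps)).
    for _ in range(n_reweight_iters):
        lam = 1.0 / (w[:-1] ** 2 + eps)
        w = ridge_solve(Phi, y, lam, gamma)

    predict = lambda Xt: nystroem_features(Xt, PV, sigma) @ w[:-1] + w[-1]
    return w, predict

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(500)
    w, predict = fs_lssvm_l0(X, y, n_prototypes=40, rng=0)
    print("non-negligible coefficients:", int(np.sum(np.abs(w[:-1]) > 1e-3)))
    print("train mse:", float(np.mean((predict(X) - y) ** 2)))
```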

Original language: English
Title of host publication: Advances in Knowledge Discovery and Data Mining - 17th Pacific-Asia Conference, PAKDD 2013, Proceedings
Pages: 161-173
Number of pages: 13
Edition: PART 1
DOI: 10.1007/978-3-642-37453-1_14
Publication status: Published - 1 Dec 2013
Externally published: Yes
Event: 17th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2013 - Gold Coast, QLD
Duration: 14 Apr 2013 – 17 Apr 2013

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 1
Volume: 7818 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 17th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2013
City: Gold Coast, QLD
Period: 14/4/13 – 17/4/13

Fingerprint

  • Least Squares Support Vector Machine
  • Support vector machines
  • Prototype
  • Sparsity
  • Regression
  • Trade-offs
  • Overdetermined Systems
  • Formal Methods
  • Error Estimation
  • Approximation
  • Linear equations
  • Large Data Sets
  • Repository
  • Error analysis
  • Model
  • Iteration
  • Norm
  • Subset

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Mall, R., & Suykens, J. A. K. (2013). Sparse reductions for fixed-size least squares support vector machines on large scale data. In Advances in Knowledge Discovery and Data Mining - 17th Pacific-Asia Conference, PAKDD 2013, Proceedings (PART 1 ed., pp. 161-173). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 7818 LNAI, No. PART 1). https://doi.org/10.1007/978-3-642-37453-1_14

Sparse reductions for fixed-size least squares support vector machines on large scale data. / Mall, Raghvendra; Suykens, Johan A.K.

Advances in Knowledge Discovery and Data Mining - 17th Pacific-Asia Conference, PAKDD 2013, Proceedings. PART 1. ed. 2013. p. 161-173 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 7818 LNAI, No. PART 1).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Mall, R & Suykens, JAK 2013, Sparse reductions for fixed-size least squares support vector machines on large scale data. in Advances in Knowledge Discovery and Data Mining - 17th Pacific-Asia Conference, PAKDD 2013, Proceedings. PART 1 edn, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), no. PART 1, vol. 7818 LNAI, pp. 161-173, 17th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2013, Gold Coast, QLD, 14/4/13. https://doi.org/10.1007/978-3-642-37453-1_14
Mall R, Suykens JAK. Sparse reductions for fixed-size least squares support vector machines on large scale data. In Advances in Knowledge Discovery and Data Mining - 17th Pacific-Asia Conference, PAKDD 2013, Proceedings. PART 1 ed. 2013. p. 161-173. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); PART 1). https://doi.org/10.1007/978-3-642-37453-1_14
Mall, Raghvendra ; Suykens, Johan A.K. / Sparse reductions for fixed-size least squares support vector machines on large scale data. Advances in Knowledge Discovery and Data Mining - 17th Pacific-Asia Conference, PAKDD 2013, Proceedings. PART 1. ed. 2013. pp. 161-173 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); PART 1).
@inproceedings{41958bfc05e24003877d7cfc625e1dfa,
title = "Sparse reductions for fixed-size least squares support vector machines on large scale data",
abstract = "Fixed-Size Least Squares Support Vector Machines (FS-LSSVM) is a powerful tool for solving large scale classification and regression problems. FS-LSSVM solves an over-determined system of M linear equations by using Nystr{\"o}m approximations on a set of prototype vectors (PVs) in the primal. This introduces sparsity in the model along with ability to scale for large datasets. But there exists no formal method for selection of the right value of M . In this paper, we investigate the sparsity-error trade-off by introducing a second level of sparsity after performing one iteration of FS-LSSVM. This helps to overcome the problem of selecting a right number of initial PVs as the final model is highly sparse and dependent on only a few appropriately selected prototype vectors (SV) is a subset of the PVs. The first proposed method performs an iterative approximation of L0-norm which acts as a regularizer. The second method belongs to the category of threshold methods, where we set a window and select the SV set from correctly classified PVs closer and farther from the decision boundaries in the case of classification. For regression, we obtain the SV set by selecting the PVs with least minimum squared error (mse). Experiments on real world datasets from the UCI repository illustrate that highly sparse models are obtained without significant trade-off in error estimations scalable to large scale datasets.",
author = "RaghvenPhDa Mall and Suykens, {Johan A.K.}",
year = "2013",
month = "12",
day = "1",
doi = "10.1007/978-3-642-37453-1_14",
language = "English",
isbn = "9783642374524",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
number = "PART 1",
pages = "161--173",
booktitle = "Advances in Knowledge Discovery and Data Mining - 17th Pacific-Asia Conference, PAKDD 2013, Proceedings",
edition = "PART 1",

}

TY - GEN

T1 - Sparse reductions for fixed-size least squares support vector machines on large scale data

AU - Mall, Raghvendra

AU - Suykens, Johan A.K.

PY - 2013/12/1

Y1 - 2013/12/1

N2 - Fixed-Size Least Squares Support Vector Machines (FS-LSSVM) is a powerful tool for solving large scale classification and regression problems. FS-LSSVM solves an over-determined system of M linear equations by using Nyström approximations on a set of prototype vectors (PVs) in the primal. This introduces sparsity in the model along with ability to scale for large datasets. But there exists no formal method for selection of the right value of M . In this paper, we investigate the sparsity-error trade-off by introducing a second level of sparsity after performing one iteration of FS-LSSVM. This helps to overcome the problem of selecting a right number of initial PVs as the final model is highly sparse and dependent on only a few appropriately selected prototype vectors (SV) is a subset of the PVs. The first proposed method performs an iterative approximation of L0-norm which acts as a regularizer. The second method belongs to the category of threshold methods, where we set a window and select the SV set from correctly classified PVs closer and farther from the decision boundaries in the case of classification. For regression, we obtain the SV set by selecting the PVs with least minimum squared error (mse). Experiments on real world datasets from the UCI repository illustrate that highly sparse models are obtained without significant trade-off in error estimations scalable to large scale datasets.

AB - Fixed-Size Least Squares Support Vector Machines (FS-LSSVM) is a powerful tool for solving large scale classification and regression problems. FS-LSSVM solves an over-determined system of M linear equations by using Nyström approximations on a set of prototype vectors (PVs) in the primal. This introduces sparsity in the model along with ability to scale for large datasets. But there exists no formal method for selection of the right value of M . In this paper, we investigate the sparsity-error trade-off by introducing a second level of sparsity after performing one iteration of FS-LSSVM. This helps to overcome the problem of selecting a right number of initial PVs as the final model is highly sparse and dependent on only a few appropriately selected prototype vectors (SV) is a subset of the PVs. The first proposed method performs an iterative approximation of L0-norm which acts as a regularizer. The second method belongs to the category of threshold methods, where we set a window and select the SV set from correctly classified PVs closer and farther from the decision boundaries in the case of classification. For regression, we obtain the SV set by selecting the PVs with least minimum squared error (mse). Experiments on real world datasets from the UCI repository illustrate that highly sparse models are obtained without significant trade-off in error estimations scalable to large scale datasets.

UR - http://www.scopus.com/inward/record.url?scp=84893618270&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84893618270&partnerID=8YFLogxK

U2 - 10.1007/978-3-642-37453-1_14

DO - 10.1007/978-3-642-37453-1_14

M3 - Conference contribution

SN - 9783642374524

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 161

EP - 173

BT - Advances in Knowledge Discovery and Data Mining - 17th Pacific-Asia Conference, PAKDD 2013, Proceedings

ER -