Very sparse LSSVM reductions for large-scale data

Raghvendra Mall, Johan A.K. Suykens

Research output: Contribution to journal › Article

35 Citations (Scopus)

Abstract

Least squares support vector machines (LSSVMs) have been widely applied for classification and regression, with performance comparable to that of SVMs. The LSSVM model lacks sparsity and is unable to handle large-scale data due to computational and memory constraints. A primal fixed-size LSSVM (PFS-LSSVM) introduces sparsity using the Nyström approximation with a set of prototype vectors (PVs). The PFS-LSSVM model solves an overdetermined system of linear equations in the primal. However, this solution is not the sparsest. We investigate the sparsity-error tradeoff by introducing a second level of sparsity, obtained by means of $L_{0}$-norm-based reductions that iteratively sparsify the LSSVM and PFS-LSSVM models. The exact choice of cardinality for the initial PV set is then not critical, as the final model is highly sparse. The proposed method overcomes the problems of memory constraints and high computational costs, resulting in highly sparse reductions to LSSVM models. The approximations in the two models allow them to scale to large-scale datasets. Experiments on real-world classification and regression data sets from the UCI repository illustrate that these approaches achieve sparse models without a significant tradeoff in errors.
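
The abstract outlines two computational ingredients: a Nyström feature map built on a small set of prototype vectors, with the PFS-LSSVM weights obtained by solving an overdetermined regularized least-squares system in the primal, and an iterative $L_{0}$-norm-style reduction that drives most coefficients to zero. The Python/NumPy sketch below is only an illustration of these ideas under our own assumptions; the function names (rbf_kernel, nystrom_features, pfs_lssvm_primal, reweighted_l0_sparsify) and hyperparameters (sigma, gamma) are ours, and the reweighted scheme is a generic surrogate for an L0 penalty rather than the exact procedure of the paper.

import numpy as np

def rbf_kernel(A, B, sigma):
    # Gaussian RBF kernel matrix between the rows of A and the rows of B.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def nystrom_features(X, PV, sigma):
    # Nystrom feature map: project samples onto the eigen-basis of the
    # kernel block computed on the prototype vectors (PVs).
    K_mm = rbf_kernel(PV, PV, sigma)
    vals, vecs = np.linalg.eigh(K_mm)
    vals = np.clip(vals, 1e-12, None)              # guard against tiny / negative eigenvalues
    return rbf_kernel(X, PV, sigma) @ (vecs / np.sqrt(vals))

def pfs_lssvm_primal(X, y, PV, sigma, gamma):
    # Solve the overdetermined regularized least-squares problem in the primal:
    #   min_{w,b}  0.5*||w||^2 + 0.5*gamma*sum_i (y_i - w^T phi(x_i) - b)^2
    Phi = np.hstack([nystrom_features(X, PV, sigma), np.ones((len(X), 1))])  # bias column
    reg = np.eye(Phi.shape[1]) / gamma
    reg[-1, -1] = 0.0                              # leave the bias term unregularized
    wb = np.linalg.solve(Phi.T @ Phi + reg, Phi.T @ y)
    return wb[:-1], wb[-1]                         # weights w and bias b

def reweighted_l0_sparsify(Phi, y, gamma, n_iter=30, eps=1e-8):
    # Generic iteratively reweighted surrogate for an L0 penalty (an assumption,
    # not the paper's exact algorithm): small coefficients receive an ever larger
    # penalty 1/(beta_j^2 + eps) and are driven toward zero.
    n_feat = Phi.shape[1]
    beta = np.linalg.solve(Phi.T @ Phi + np.eye(n_feat) / gamma, Phi.T @ y)
    for _ in range(n_iter):
        weights = 1.0 / (beta**2 + eps)
        beta_new = np.linalg.solve(Phi.T @ Phi + np.diag(weights) / gamma, Phi.T @ y)
        if np.max(np.abs(beta_new - beta)) < eps:
            beta = beta_new
            break
        beta = beta_new
    beta[np.abs(beta) < 1e-6] = 0.0                # hard-threshold negligible coefficients
    return beta

A typical usage, under the same assumptions, would pick a few hundred PVs (for example by random subsampling of the training set), fit pfs_lssvm_primal, and then run reweighted_l0_sparsify on the resulting feature matrix to obtain the second, much sparser model that the abstract refers to.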

Original language: English
Article number: 7052376
Pages (from-to): 1086-1097
Number of pages: 12
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 26
Issue number: 5
DOI: 10.1109/TNNLS.2014.2333879
Publication status: Published - 1 May 2015
Externally published: Yes

Fingerprint

  • Support vector machines
  • Data storage equipment
  • Linear equations
  • Costs
  • Experiments

Keywords

  • L0-norm
  • least squares support vector machine (LSSVM) classification and regression
  • reduced models
  • sparsity

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

Cite this

Very sparse LSSVM reductions for large-scale data. / Mall, Raghvendra; Suykens, Johan A.K.

In: IEEE Transactions on Neural Networks and Learning Systems, Vol. 26, No. 5, 7052376, 01.05.2015, p. 1086-1097.

Research output: Contribution to journal › Article

@article{807251831de244d4985147a75f7f91ee,
title = "Very sparse LSSVM reductions for large-scale data",
abstract = "Least squares support vector machines (LSSVMs) have been widely applied for classification and regression, with performance comparable to that of SVMs. The LSSVM model lacks sparsity and is unable to handle large-scale data due to computational and memory constraints. A primal fixed-size LSSVM (PFS-LSSVM) introduces sparsity using the Nystr{\"o}m approximation with a set of prototype vectors (PVs). The PFS-LSSVM model solves an overdetermined system of linear equations in the primal. However, this solution is not the sparsest. We investigate the sparsity-error tradeoff by introducing a second level of sparsity, obtained by means of $L_{0}$-norm-based reductions that iteratively sparsify the LSSVM and PFS-LSSVM models. The exact choice of cardinality for the initial PV set is then not critical, as the final model is highly sparse. The proposed method overcomes the problems of memory constraints and high computational costs, resulting in highly sparse reductions to LSSVM models. The approximations in the two models allow them to scale to large-scale datasets. Experiments on real-world classification and regression data sets from the UCI repository illustrate that these approaches achieve sparse models without a significant tradeoff in errors.",
keywords = "L0-norm, least squares support vector machine (LSSVM) classification and regression, reduced models, sparsity",
author = "Raghvendra Mall and Suykens, {Johan A.K.}",
year = "2015",
month = "5",
day = "1",
doi = "10.1109/TNNLS.2014.2333879",
language = "English",
volume = "26",
pages = "1086--1097",
journal = "IEEE Transactions on Neural Networks and Learning Systems",
issn = "2162-237X",
publisher = "IEEE Computational Intelligence Society",
number = "5",

}

TY - JOUR

T1 - Very sparse LSSVM reductions for large-scale data

AU - Mall, Raghvendra

AU - Suykens, Johan A.K.

PY - 2015/5/1

Y1 - 2015/5/1

N2 - Least squares support vector machines (LSSVMs) have been widely applied for classification and regression, with performance comparable to that of SVMs. The LSSVM model lacks sparsity and is unable to handle large-scale data due to computational and memory constraints. A primal fixed-size LSSVM (PFS-LSSVM) introduces sparsity using the Nyström approximation with a set of prototype vectors (PVs). The PFS-LSSVM model solves an overdetermined system of linear equations in the primal. However, this solution is not the sparsest. We investigate the sparsity-error tradeoff by introducing a second level of sparsity, obtained by means of $L_{0}$-norm-based reductions that iteratively sparsify the LSSVM and PFS-LSSVM models. The exact choice of cardinality for the initial PV set is then not critical, as the final model is highly sparse. The proposed method overcomes the problems of memory constraints and high computational costs, resulting in highly sparse reductions to LSSVM models. The approximations in the two models allow them to scale to large-scale datasets. Experiments on real-world classification and regression data sets from the UCI repository illustrate that these approaches achieve sparse models without a significant tradeoff in errors.

AB - Least squares support vector machines (LSSVMs) have been widely applied for classification and regression, with performance comparable to that of SVMs. The LSSVM model lacks sparsity and is unable to handle large-scale data due to computational and memory constraints. A primal fixed-size LSSVM (PFS-LSSVM) introduces sparsity using the Nyström approximation with a set of prototype vectors (PVs). The PFS-LSSVM model solves an overdetermined system of linear equations in the primal. However, this solution is not the sparsest. We investigate the sparsity-error tradeoff by introducing a second level of sparsity, obtained by means of $L_{0}$-norm-based reductions that iteratively sparsify the LSSVM and PFS-LSSVM models. The exact choice of cardinality for the initial PV set is then not critical, as the final model is highly sparse. The proposed method overcomes the problems of memory constraints and high computational costs, resulting in highly sparse reductions to LSSVM models. The approximations in the two models allow them to scale to large-scale datasets. Experiments on real-world classification and regression data sets from the UCI repository illustrate that these approaches achieve sparse models without a significant tradeoff in errors.

KW - L0-norm

KW - least squares support vector machine (LSSVM) classification and regression

KW - reduced models

KW - sparsity

UR - http://www.scopus.com/inward/record.url?scp=85027954386&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85027954386&partnerID=8YFLogxK

U2 - 10.1109/TNNLS.2014.2333879

DO - 10.1109/TNNLS.2014.2333879

M3 - Article

AN - SCOPUS:85027954386

VL - 26

SP - 1086

EP - 1097

JO - IEEE Transactions on Neural Networks and Learning Systems

JF - IEEE Transactions on Neural Networks and Learning Systems

SN - 2162-237X

IS - 5

M1 - 7052376

ER -