Very sparse LSSVM reductions for large-scale data

Raghvendra Mall, Johan A.K. Suykens

Research output: Contribution to journal › Article

36 Citations (Scopus)

Abstract

Least squares support vector machines (LSSVMs) have been widely applied for classification and regression, with performance comparable to that of SVMs. However, the LSSVM model lacks sparsity and cannot handle large-scale data due to computational and memory constraints. The primal fixed-size LSSVM (PFS-LSSVM) introduces sparsity using the Nyström approximation with a set of prototype vectors (PVs) and solves an overdetermined system of linear equations in the primal. However, this solution is not the sparsest. We investigate the sparsity-error tradeoff by introducing a second level of sparsity, obtained through $L_0$-norm-based reductions that iteratively sparsify the LSSVM and PFS-LSSVM models. The exact choice of the cardinality of the initial PV set then becomes less critical, as the final model is highly sparse. The proposed method overcomes the problems of memory constraints and high computational cost, resulting in highly sparse reductions of LSSVM models. The approximations in the two models allow them to scale to large datasets. Experiments on real-world classification and regression datasets from the UCI repository illustrate that these approaches achieve sparse models without a significant tradeoff in error.
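
For illustration, below is a minimal NumPy sketch of the primal fixed-size LSSVM step summarized above: build an approximate feature map from the Nyström eigendecomposition of the kernel matrix on the prototype vectors, then solve the resulting overdetermined regularized least-squares system in the primal. The RBF kernel, the hyperparameter names (gamma, sigma), the random choice of PVs, and the regularized bias term are illustrative assumptions; the paper's actual PV selection and the subsequent L0-norm reduction stage are not shown.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Pairwise RBF kernel between the rows of A and the rows of B.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def nystrom_features(X, PV, sigma=1.0):
    # Approximate feature map from the eigendecomposition of the
    # kernel matrix on the prototype vectors (Nystrom method).
    lam, U = np.linalg.eigh(rbf_kernel(PV, PV, sigma))
    keep = lam > 1e-10                       # drop numerically zero directions
    lam, U = lam[keep], U[:, keep]
    K_nm = rbf_kernel(X, PV, sigma)
    return (K_nm @ U) / np.sqrt(lam)         # shape (N, M')

def pfs_lssvm_primal(X, y, PV, gamma=1.0, sigma=1.0):
    # Ridge-regression-style primal solve over the Nystrom features.
    # (The bias is regularized here for simplicity of the sketch.)
    Phi = nystrom_features(X, PV, sigma)
    Phi_b = np.hstack([Phi, np.ones((Phi.shape[0], 1))])   # append bias column
    A = Phi_b.T @ Phi_b + np.eye(Phi_b.shape[1]) / gamma
    w_b = np.linalg.solve(A, Phi_b.T @ y)
    return w_b[:-1], w_b[-1]                                # (weights, bias)

# Usage sketch: pick a small PV set (here simply at random) and fit a toy classifier.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 4))
    y = np.sign(X[:, 0] - X[:, 1])                          # toy binary labels
    PV = X[rng.choice(len(X), size=30, replace=False)]      # prototype vectors
    w, b = pfs_lssvm_primal(X, y, PV, gamma=10.0, sigma=1.0)
    pred = np.sign(nystrom_features(X, PV, sigma=1.0) @ w + b)
    print("training accuracy:", np.mean(pred == y))
```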

Original language: English
Article number: 7052376
Pages (from-to): 1086-1097
Number of pages: 12
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 26
Issue number: 5
DOIs
Publication status: Published - 1 May 2015
Externally published: Yes


Keywords

  • L0-norm
  • least squares support vector machine (LSSVM) classification and regression
  • reduced models
  • sparsity

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence
