Feature selection and multi-kernel learning for sparse representation on a manifold

Jim Jing Yan Wang, Halima Bensmail, Xin Gao

Research output: Contribution to journal › Article

48 Citations (Scopus)

Abstract

Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning, respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods.
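The abstract's starting point — sparse coding with an affinity-graph (Laplacian) regularizer over the codes — can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's algorithm: the Gaussian affinity graph, the proximal-gradient (ISTA) solver, and all parameter values (`lam`, `gamma`, `step`) are assumptions chosen only to make the objective concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n samples in d dimensions, a dictionary of k unit-norm atoms.
d, n, k = 20, 30, 15
X = rng.standard_normal((d, n))          # data, one sample per column
D = rng.standard_normal((d, k))
D /= np.linalg.norm(D, axis=0)           # normalize dictionary atoms

# Affinity graph over samples: a simple Gaussian kernel on the raw features.
# (The paper's motivation is precisely that such a graph can be unreliable
# when the features are noisy, hence feature selection / kernel learning.)
dist2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)   # (n, n) squared distances
W = np.exp(-dist2 / dist2.mean())
L = np.diag(W.sum(axis=1)) - W           # symmetric graph Laplacian

# Objective: 0.5*||X - D S||_F^2 + (gamma/2)*tr(S L S^T) + lam*||S||_1
lam, gamma, step, iters = 0.1, 0.05, 1e-2, 500
S = np.zeros((k, n))                     # sparse codes, one column per sample

# ISTA: gradient step on the smooth part, soft-thresholding for the l1 part.
for _ in range(iters):
    grad = D.T @ (D @ S - X) + gamma * (S @ L)               # L is symmetric
    S = S - step * grad
    S = np.sign(S) * np.maximum(np.abs(S) - step * lam, 0.0)  # prox of lam*||.||_1
```

The Laplacian term `tr(S L S^T)` penalizes codes that differ for samples the graph deems similar, which is what "regularizing the sparse codes with an affinity graph" amounts to; the paper's contribution is learning a better space (via feature selection or multiple kernels) in which to build `W`.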

Original language: English
Pages (from-to): 9-16
Number of pages: 8
Journal: Neural Networks
Volume: 51
DOI: 10.1016/j.neunet.2013.11.009
Publication status: Published - 1 Mar 2014

Keywords

  • Data representation
  • Feature selection
  • Manifold
  • Multiple kernel learning
  • Sparse coding

ASJC Scopus subject areas

  • Artificial Intelligence
  • Cognitive Neuroscience

Cite this

Feature selection and multi-kernel learning for sparse representation on a manifold. / Wang, Jim Jing Yan; Bensmail, Halima; Gao, Xin.

In: Neural Networks, Vol. 51, 01.03.2014, p. 9-16.

@article{36424c8e88974f98b40088ce1c97877d,
title = "Feature selection and multi-kernel learning for sparse representation on a manifold",
abstract = "Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning, respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods.",
keywords = "Data representation, Feature selection, Manifold, Multiple kernel learning, Sparse coding",
author = "Wang, {Jim Jing Yan} and Halima Bensmail and Xin Gao",
year = "2014",
month = "3",
day = "1",
doi = "10.1016/j.neunet.2013.11.009",
language = "English",
volume = "51",
pages = "9--16",
journal = "Neural Networks",
issn = "0893-6080",
publisher = "Elsevier Limited",

}
