Advanced computation of sparse precision matrices for big data

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The precision matrix is the inverse of the covariance matrix. Estimating large sparse precision matrices is an interesting and challenging problem in many fields of science and engineering, in the humanities, and in machine learning in general. Recent applications often involve high dimensionality with only a limited number of data points, so that the number of covariance parameters greatly exceeds the number of observations and the sample covariance matrix is singular. Several methods have been proposed to deal with this challenging problem, but there is no guarantee that the obtained estimator is positive definite. Furthermore, in many cases one needs to capture additional information about the setting of the problem. In this paper, we introduce a criterion that ensures the positive definiteness of the precision matrix, and we propose the inner-outer alternating direction method of multipliers (ADMM) as an efficient method for estimating it. We show that convergence of the algorithm is ensured even with a sufficiently relaxed stopping criterion in the inner iteration. We also show that the proposed method is robust, accurate, and scalable, as it lends itself to an efficient implementation on parallel computers.
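To give a concrete feel for the class of methods the abstract describes, the following is a minimal single-loop ADMM sketch for the standard graphical-lasso objective, minimize -logdet(Θ) + tr(SΘ) + λ‖Θ‖₁. It is a simpler relative of the paper's inner-outer scheme, not the authors' algorithm; all names (`sparse_precision_admm`, the parameters `lam`, `rho`) are illustrative. The eigenvalue-based Θ-update is the standard way to keep every iterate positive definite, which is the property the paper's criterion is concerned with.

```python
import numpy as np

def sparse_precision_admm(S, lam=0.1, rho=1.0, n_iter=200, tol=1e-6):
    """Estimate a sparse precision matrix from a sample covariance S via ADMM
    on the graphical-lasso objective -logdet(Theta) + tr(S @ Theta) + lam*||Theta||_1.

    Illustrative sketch only (not the paper's inner-outer ADMM)."""
    p = S.shape[0]
    Z = np.eye(p)          # sparse consensus copy of Theta
    U = np.zeros((p, p))   # scaled dual variable
    for _ in range(n_iter):
        # Theta-update: solve rho*Theta - inv(Theta) = rho*(Z - U) - S
        # via eigendecomposition; the quadratic-formula root is always > 0,
        # so Theta stays positive definite at every iteration.
        w, Q = np.linalg.eigh(rho * (Z - U) - S)
        theta_eigs = (w + np.sqrt(w**2 + 4.0 * rho)) / (2.0 * rho)
        Theta = (Q * theta_eigs) @ Q.T
        # Z-update: elementwise soft-thresholding enforces sparsity
        A = Theta + U
        Z_new = np.sign(A) * np.maximum(np.abs(A) - lam / rho, 0.0)
        # U-update: dual ascent on the consensus constraint Theta = Z
        U += Theta - Z_new
        if np.linalg.norm(Z_new - Z) <= tol * max(1.0, np.linalg.norm(Z)):
            Z = Z_new
            break
        Z = Z_new
    return Theta, Z

# Small usage example on synthetic data
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))
S = np.cov(X, rowvar=False)
Theta, Z = sparse_precision_admm(S, lam=0.2)
print("smallest eigenvalue of estimate:", np.linalg.eigvalsh(Theta).min())
```

Here `Theta` carries the positive-definiteness guarantee while `Z` carries the sparsity; at convergence the two coincide. The paper's contribution concerns an inner-outer variant in which the inner solve may itself be iterative and terminated early.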

Original language: English
Title of host publication: Advances in Knowledge Discovery and Data Mining - 21st Pacific-Asia Conference, PAKDD 2017, Proceedings
Publisher: Springer Verlag
Pages: 27-38
Number of pages: 12
ISBN (Print): 9783319575285
DOI: https://doi.org/10.1007/978-3-319-57529-2_3
Publication status: Published - 1 Jan 2017
Event: 21st Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2017 - Jeju, Korea, Republic of
Duration: 23 May 2017 - 26 May 2017

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10235 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 21st Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2017
Country: Korea, Republic of
City: Jeju
Period: 23/5/17 - 26/5/17

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Baggag, A., Bensmail, H., & Srivastava, J. (2017). Advanced computation of sparse precision matrices for big data. In Advances in Knowledge Discovery and Data Mining - 21st Pacific-Asia Conference, PAKDD 2017, Proceedings (pp. 27-38). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 10235 LNAI). Springer Verlag. https://doi.org/10.1007/978-3-319-57529-2_3
