Domain transfer nonnegative matrix factorization

Jim Jing Yan Wang, Yijun Sun, Halima Bensmail

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

Domain transfer learning aims to learn an effective classifier for a target domain, where only a few labeled samples are available, with the help of many labeled samples from a source domain. The source and target domain samples usually share the same features and class label space, but have significantly different distributions. Nonnegative Matrix Factorization (NMF) has been widely studied and applied as a powerful data representation method. However, NMF is limited to the single-domain learning problem: it cannot be used directly for domain transfer learning because of the significant differences between the source and target domain distributions. In this paper, we extend NMF to the domain transfer learning problem. The Maximum Mean Discrepancy (MMD) criterion is employed to reduce the mismatch between the source and target domain distributions in the coding vector space. Moreover, we learn a classifier in the coding vector space to directly utilize the class labels from both domains. We construct a unified objective function over both the NMF parameters and the classifier parameters, and optimize it alternately in an iterative algorithm. The proposed algorithm is evaluated on two challenging domain transfer tasks, and the encouraging experimental results show its advantage over state-of-the-art domain transfer learning algorithms.
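
The abstract describes a concrete construction: pool the source and target samples, factorize them with NMF, and augment the objective with an MMD penalty that pulls the source and target coding-vector distributions together (plus a classifier loss on the labeled coding vectors). Below is a minimal Python/NumPy sketch of the first two ingredients, assuming a linear-kernel MMD (the squared distance between the mean source and mean target coding vectors) and plain projected-gradient updates. The function name dt_nmf, the penalty weight lam, and the step size are illustrative assumptions, not the authors' implementation, and the jointly learned classifier from the paper is omitted for brevity.

import numpy as np

def dt_nmf(X_s, X_t, k, lam=1.0, lr=1e-3, iters=300, seed=0):
    """Factorize [X_s, X_t] ~= W @ H with nonnegative factors while
    penalizing ||mean source coding vector - mean target coding vector||^2,
    i.e. a linear-kernel squared MMD. A sketch only, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    X = np.hstack([X_s, X_t])              # features x (n_s + n_t)
    m, n = X.shape
    n_s, n_t = X_s.shape[1], X_t.shape[1]
    # e encodes the mean difference, so that MMD^2 = || H @ e ||^2.
    e = np.concatenate([np.full(n_s, 1.0 / n_s), np.full(n_t, -1.0 / n_t)])
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        # Alternating step on W: gradient of || X - W H ||_F^2, then clip to >= 0.
        W -= lr * 2.0 * (W @ H - X) @ H.T
        np.maximum(W, 0.0, out=W)
        # Alternating step on H: reconstruction gradient plus the MMD term,
        # d/dH [ lam * e^T H^T H e ] = 2 * lam * (H e) e^T.
        grad_H = 2.0 * W.T @ (W @ H - X) + 2.0 * lam * np.outer(H @ e, e)
        H -= lr * grad_H
        np.maximum(H, 0.0, out=H)
    return W, H

On toy data, a call like W, H = dt_nmf(X_s, X_t, k=5) returns coding vectors whose source and target means have been pulled together; raising lam trades reconstruction quality for a smaller distribution mismatch, which mirrors the trade-off the paper's unified objective balances.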

Original language: English
Title of host publication: Proceedings of the International Joint Conference on Neural Networks
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3605-3612
Number of pages: 8
ISBN (Print): 9781479914845
DOIs: https://doi.org/10.1109/IJCNN.2014.6889428
Publication status: Published - 1 Jan 2014
Event: 2014 International Joint Conference on Neural Networks, IJCNN 2014 - Beijing, China
Duration: 6 Jul 2014 - 11 Jul 2014

Other

Other: 2014 International Joint Conference on Neural Networks, IJCNN 2014
Country: China
City: Beijing
Period: 6/7/14 - 11/7/14

Cite this

Wang, J. J. Y., Sun, Y., & Bensmail, H. (2014). Domain transfer nonnegative matrix factorization. In Proceedings of the International Joint Conference on Neural Networks (pp. 3605-3612). [6889428] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IJCNN.2014.6889428

@inproceedings{43907dc0b3a04378b38ff45575bdac3b,
title = "Domain transfer nonnegative matrix factorization",
abstract = "Domain transfer learning aims to learn an effective classifier for a target domain, where only a few labeled samples are available, with the help of many labeled samples from a source domain. The source and target domain samples usually share the same features and class label space, but have significantly different distributions. Nonnegative Matrix Factorization (NMF) has been widely studied and applied as a powerful data representation method. However, NMF is limited to the single-domain learning problem: it cannot be used directly for domain transfer learning because of the significant differences between the source and target domain distributions. In this paper, we extend NMF to the domain transfer learning problem. The Maximum Mean Discrepancy (MMD) criterion is employed to reduce the mismatch between the source and target domain distributions in the coding vector space. Moreover, we learn a classifier in the coding vector space to directly utilize the class labels from both domains. We construct a unified objective function over both the NMF parameters and the classifier parameters, and optimize it alternately in an iterative algorithm. The proposed algorithm is evaluated on two challenging domain transfer tasks, and the encouraging experimental results show its advantage over state-of-the-art domain transfer learning algorithms.",
author = "Wang, {Jim Jing Yan} and Yijun Sun and Halima Bensmail",
year = "2014",
month = "1",
day = "1",
doi = "10.1109/IJCNN.2014.6889428",
language = "English",
isbn = "9781479914845",
pages = "3605--3612",
booktitle = "Proceedings of the International Joint Conference on Neural Networks",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}
