Abstract
In this paper, we show how learning models generated by a recently introduced state-of-the-art kernel for graphs can be optimized from the point of view of memory occupancy. After a brief description of the kernel, we introduce a novel representation of the explicit feature space of the kernel based on a hash function, which reduces the amount of memory needed both during the training phase and to represent the final learned model. Subsequently, we study the application of a feature selection strategy based on the F-score to further reduce the number of features in the final model. On two representative datasets involving binary classification of chemical graphs, we show that it is possible to substantially reduce the memory occupancy of the final model (by up to one order of magnitude) with only a moderate loss in classification performance.
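The two techniques mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the bucket count, the use of MD5 as the hash, and the simple count-accumulation scheme are illustrative assumptions. The hashing step collapses a sparse graph-feature map into a fixed-size vector, and the F-score (in the common form used for feature selection, e.g. by Chen and Lin) ranks features by how well they separate the two classes:

```python
# Hedged sketch of (1) the hashing trick for compressing an explicit
# feature space and (2) F-score-based feature ranking.
# MD5 and the bucket layout are illustrative choices, not the paper's.
import hashlib

def bucket(feature: str, n_buckets: int) -> int:
    # Stable hash of a feature identifier into one of n_buckets slots.
    h = int(hashlib.md5(feature.encode("utf-8")).hexdigest(), 16)
    return h % n_buckets

def hash_features(feature_counts: dict, n_buckets: int) -> list:
    # Collapse a sparse {feature_id: count} map into a dense fixed-size
    # vector; colliding features share a bucket (the memory/accuracy trade-off).
    vec = [0.0] * n_buckets
    for f, c in feature_counts.items():
        vec[bucket(f, n_buckets)] += c
    return vec

def f_score(pos: list, neg: list, j: int) -> float:
    # F-score of feature j given positive/negative example vectors:
    # large when the per-class means differ and within-class variance is small.
    xp = [v[j] for v in pos]
    xn = [v[j] for v in neg]
    mp = sum(xp) / len(xp)
    mn = sum(xn) / len(xn)
    m = (sum(xp) + sum(xn)) / (len(xp) + len(xn))
    denom = (sum((x - mp) ** 2 for x in xp) / (len(xp) - 1)
             + sum((x - mn) ** 2 for x in xn) / (len(xn) - 1))
    return ((mp - m) ** 2 + (mn - m) ** 2) / denom if denom else 0.0
```

Features whose F-score falls below a threshold would then be dropped from the hashed model, further shrinking its memory footprint.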
Original language | English |
---|---|
Title of host publication | Proceedings of the International Joint Conference on Neural Networks |
DOIs | |
Publication status | Published - 2012 |
Externally published | Yes |
Event | 2012 Annual International Joint Conference on Neural Networks, IJCNN 2012, Part of the 2012 IEEE World Congress on Computational Intelligence, WCCI 2012 - Brisbane, QLD |
Duration | 10 Jun 2012 → 15 Jun 2012 |
Other
Other | 2012 Annual International Joint Conference on Neural Networks, IJCNN 2012, Part of the 2012 IEEE World Congress on Computational Intelligence, WCCI 2012 |
---|---|
City | Brisbane, QLD |
Period | 10/6/12 → 15/6/12 |
ASJC Scopus subject areas
- Software
- Artificial Intelligence
Cite this
A memory efficient graph kernel. / Martino, Giovanni; Navarin, Nicolo; Sperduti, Alessandro.
Proceedings of the International Joint Conference on Neural Networks. 2012. 6252831.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
TY - GEN
T1 - A memory efficient graph kernel
AU - Martino, Giovanni
AU - Navarin, Nicolo
AU - Sperduti, Alessandro
PY - 2012
Y1 - 2012
N2 - In this paper, we show how learning models generated by a recently introduced state-of-the-art kernel for graphs can be optimized from the point of view of memory occupancy. After a brief description of the kernel, we introduce a novel representation of the explicit feature space of the kernel based on a hash function, which reduces the amount of memory needed both during the training phase and to represent the final learned model. Subsequently, we study the application of a feature selection strategy based on the F-score to further reduce the number of features in the final model. On two representative datasets involving binary classification of chemical graphs, we show that it is possible to substantially reduce the memory occupancy of the final model (by up to one order of magnitude) with only a moderate loss in classification performance.
AB - In this paper, we show how learning models generated by a recently introduced state-of-the-art kernel for graphs can be optimized from the point of view of memory occupancy. After a brief description of the kernel, we introduce a novel representation of the explicit feature space of the kernel based on a hash function, which reduces the amount of memory needed both during the training phase and to represent the final learned model. Subsequently, we study the application of a feature selection strategy based on the F-score to further reduce the number of features in the final model. On two representative datasets involving binary classification of chemical graphs, we show that it is possible to substantially reduce the memory occupancy of the final model (by up to one order of magnitude) with only a moderate loss in classification performance.
UR - http://www.scopus.com/inward/record.url?scp=84865067982&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84865067982&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2012.6252831
DO - 10.1109/IJCNN.2012.6252831
M3 - Conference contribution
AN - SCOPUS:84865067982
SN - 9781467314909
BT - Proceedings of the International Joint Conference on Neural Networks
ER -