VLSI implementation of a neural network classifier based on the saturating linear activation function

Amine Bermak, A. Bouzerdoum

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Citations (Scopus)

Abstract

This paper presents a digital VLSI implementation of a feedforward neural network classifier based on the saturating linear activation function. The architecture consists of one hidden layer performing the weighted sum followed by a saturating linear activation function. The hardware implementation of such a network offers a significant advantage in circuit complexity compared with a network based on a sigmoid activation function, without compromising classification performance. Simulation results on two benchmark problems show that feedforward neural networks with the saturating linearity perform as well as networks with the sigmoid activation function. The architecture can also handle variable precision, providing greater computational resources at lower precision.
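For context, the saturating linear (satlin) activation described in the abstract is simply a linear ramp clamped to fixed bounds, which is why it maps onto digital hardware far more cheaply than a sigmoid. The Python sketch below is illustrative only: it assumes floating-point arithmetic and arbitrary layer sizes rather than the paper's fixed-point VLSI datapath, and the function names and dimensions are hypothetical.

    import numpy as np

    def satlin(x, lower=-1.0, upper=1.0):
        # Saturating linear activation: identity inside [lower, upper], clamped outside.
        # In digital hardware this reduces to comparators and a multiplexer, whereas a
        # sigmoid typically needs a look-up table or piecewise polynomial approximation.
        return np.clip(x, lower, upper)

    def classify(x, W1, b1, W2, b2):
        # One-hidden-layer feedforward classifier: weighted sum -> satlin -> linear output.
        hidden = satlin(W1 @ x + b1)
        scores = W2 @ hidden + b2
        return int(np.argmax(scores))

    # Illustrative usage with random weights (4 inputs, 8 hidden units, 3 classes).
    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
    W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)
    print(classify(rng.standard_normal(4), W1, b1, W2, b2))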

Original language: English
Title of host publication: ICONIP 2002 - Proceedings of the 9th International Conference on Neural Information Processing: Computational Intelligence for the E-Age
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 981-985
Number of pages: 5
Volume: 2
ISBN (Electronic): 9810475241, 9789810475246
DOIs: 10.1109/ICONIP.2002.1198207
Publication status: Published - 2002
Externally published: Yes
Event: 9th International Conference on Neural Information Processing, ICONIP 2002 - Singapore, Singapore
Duration: 18 Nov 2002 - 22 Nov 2002

Other

Other: 9th International Conference on Neural Information Processing, ICONIP 2002
Country: Singapore
City: Singapore
Period: 18/11/02 - 22/11/02

Fingerprint

Classifiers
Chemical activation
Neural networks
Feedforward neural networks
Hardware
Networks (circuits)

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

Cite this

Bermak, A., & Bouzerdoum, A. (2002). VLSI implementation of a neural network classifier based on the saturating linear activation function. In ICONIP 2002 - Proceedings of the 9th International Conference on Neural Information Processing: Computational Intelligence for the E-Age (Vol. 2, pp. 981-985). [1198207] Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/ICONIP.2002.1198207

VLSI implementation of a neural network classifier based on the saturating linear activation function. / Bermak, Amine; Bouzerdoum, A.

ICONIP 2002 - Proceedings of the 9th International Conference on Neural Information Processing: Computational Intelligence for the E-Age. Vol. 2 Institute of Electrical and Electronics Engineers Inc., 2002. p. 981-985 1198207.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Bermak, A & Bouzerdoum, A 2002, VLSI implementation of a neural network classifier based on the saturating linear activation function. in ICONIP 2002 - Proceedings of the 9th International Conference on Neural Information Processing: Computational Intelligence for the E-Age. vol. 2, 1198207, Institute of Electrical and Electronics Engineers Inc., pp. 981-985, 9th International Conference on Neural Information Processing, ICONIP 2002, Singapore, Singapore, 18/11/02. https://doi.org/10.1109/ICONIP.2002.1198207
Bermak A, Bouzerdoum A. VLSI implementation of a neural network classifier based on the saturating linear activation function. In ICONIP 2002 - Proceedings of the 9th International Conference on Neural Information Processing: Computational Intelligence for the E-Age. Vol. 2. Institute of Electrical and Electronics Engineers Inc. 2002. p. 981-985. 1198207 https://doi.org/10.1109/ICONIP.2002.1198207
Bermak, Amine ; Bouzerdoum, A. / VLSI implementation of a neural network classifier based on the saturating linear activation function. ICONIP 2002 - Proceedings of the 9th International Conference on Neural Information Processing: Computational Intelligence for the E-Age. Vol. 2 Institute of Electrical and Electronics Engineers Inc., 2002. pp. 981-985
@inproceedings{87acd49b6d1d4facbcd4b75e912c71e7,
title = "VLSI implementation of a neural network classifier based on the saturating linear activation function",
abstract = "This paper presents a digital VLSI implementation of a feedforward neural network classifier based on the saturating linear activation function. The architecture consists of one-hidden layer performing the weighted sum followed by a saturating linear activation function. The hardware implementation of such a network presents a significant advantage in terms of circuit complexity as compared to a network based on a sigmoid activation function, but without compromising the classification performance. Simulation results on two benchmark problems show that feedforward neural networks with the saturating linearity perform as well as networks with the sigmoid activation function. The architecture can also handle variable precision resulting in a higher computational resources at lower precision.",
author = "Amine Bermak and A. Bouzerdoum",
year = "2002",
doi = "10.1109/ICONIP.2002.1198207",
language = "English",
volume = "2",
pages = "981--985",
booktitle = "ICONIP 2002 - Proceedings of the 9th International Conference on Neural Information Processing: Computational Intelligence for the E-Age",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

TY - GEN

T1 - VLSI implementation of a neural network classifier based on the saturating linear activation function

AU - Bermak, Amine

AU - Bouzerdoum, A.

PY - 2002

Y1 - 2002

N2 - This paper presents a digital VLSI implementation of a feedforward neural network classifier based on the saturating linear activation function. The architecture consists of one hidden layer performing the weighted sum followed by a saturating linear activation function. The hardware implementation of such a network offers a significant advantage in circuit complexity compared with a network based on a sigmoid activation function, without compromising classification performance. Simulation results on two benchmark problems show that feedforward neural networks with the saturating linearity perform as well as networks with the sigmoid activation function. The architecture can also handle variable precision, providing greater computational resources at lower precision.

AB - This paper presents a digital VLSI implementation of a feedforward neural network classifier based on the saturating linear activation function. The architecture consists of one hidden layer performing the weighted sum followed by a saturating linear activation function. The hardware implementation of such a network offers a significant advantage in circuit complexity compared with a network based on a sigmoid activation function, without compromising classification performance. Simulation results on two benchmark problems show that feedforward neural networks with the saturating linearity perform as well as networks with the sigmoid activation function. The architecture can also handle variable precision, providing greater computational resources at lower precision.

UR - http://www.scopus.com/inward/record.url?scp=84964515073&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84964515073&partnerID=8YFLogxK

U2 - 10.1109/ICONIP.2002.1198207

DO - 10.1109/ICONIP.2002.1198207

M3 - Conference contribution

AN - SCOPUS:84964515073

VL - 2

SP - 981

EP - 985

BT - ICONIP 2002 - Proceedings of the 9th International Conference on Neural Information Processing: Computational Intelligence for the E-Age

PB - Institute of Electrical and Electronics Engineers Inc.

ER -