Competitive neural networks on message-passing parallel computers

Michele Ceccarelli, Alfredo Petrosino, Roberto Vaccaro

Research output: Contribution to journal › Article

6 Citations (Scopus)

Abstract

The paper presents two techniques for parallelizing, on a MIMD multicomputer, a class of learning algorithms (competitive learning) for artificial neural networks widely used in pattern recognition and understanding. The first technique, following the divide et impera (divide-and-conquer) strategy, achieves O(n/P + log P) time for n neurons and P processors interconnected as a tree. A modification of the algorithm allows the application of a systolic technique with the processors interconnected as a ring; this technique has the advantage that the communication time does not depend on the number of processors. The two techniques are compared on the basis of predicted and measured performance on a transputer-based MIMD machine. As the number of processors grows, the advantage of the systolic approach increases; by contrast, the divide et impera approach is more advantageous in the retrieval phase.
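The divide-and-conquer scheme in the abstract can be illustrated with a minimal sketch (not the authors' code): the n neurons are partitioned into blocks of n/P, each "processor" finds a local winner in O(n/P) time, and the P local winners are combined by a pairwise tree reduction in O(log P) steps. All function and variable names here (local_winner, tree_reduce, competitive_step) are illustrative assumptions; the simulation is sequential and only mirrors the structure of the parallel algorithm.

```python
# Sketch of competitive learning with the winner search organised as a
# divide-and-conquer reduction, mimicking the O(n/P + log P) tree scheme.
import random

def local_winner(weights, x, lo, hi):
    """Each 'processor' scans its block of n/P neurons: O(n/P) work."""
    best_i, best_d = lo, float("inf")
    for i in range(lo, hi):
        d = sum((w - xj) ** 2 for w, xj in zip(weights[i], x))
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d

def tree_reduce(candidates):
    """Pairwise combination of P local winners: O(log P) rounds."""
    while len(candidates) > 1:
        nxt = []
        for j in range(0, len(candidates) - 1, 2):
            nxt.append(min(candidates[j], candidates[j + 1],
                           key=lambda c: c[1]))
        if len(candidates) % 2:        # odd processor out advances as-is
            nxt.append(candidates[-1])
        candidates = nxt
    return candidates[0]

def competitive_step(weights, x, P, lr=0.1):
    """One winner-take-all learning step over P simulated processors."""
    n = len(weights)
    block = (n + P - 1) // P
    locals_ = [local_winner(weights, x, p * block, min((p + 1) * block, n))
               for p in range(P)]
    win, _ = tree_reduce(locals_)
    # Move the winning prototype toward the input vector x.
    weights[win] = [w + lr * (xj - w) for w, xj in zip(weights[win], x)]
    return win

random.seed(0)
n, dim, P = 16, 4, 4
weights = [[random.random() for _ in range(dim)] for _ in range(n)]
x = [0.5] * dim
w = competitive_step(weights, x, P)
print("winner neuron:", w)
```

In the systolic ring variant described next in the abstract, the local winners would instead circulate around the ring, so each combination step costs one nearest-neighbour communication regardless of P.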

Original language: English
Pages (from-to): 449-470
Number of pages: 22
Journal: Concurrency: Practice and Experience
Volume: 5
Issue number: 6
Publication status: Published - 1 Sep 1993
Externally published: Yes


ASJC Scopus subject areas

  • Engineering (all)

Cite this

Competitive neural networks on message-passing parallel computers. / Ceccarelli, Michele; Petrosino, Alfredo; Vaccaro, Roberto.

In: Concurrency: Practice and Experience, Vol. 5, No. 6, 01.09.1993, p. 449-470.


