Competitive neural networks on message-passing parallel computers

Michele Ceccarelli, Alfredo Petrosino, Roberto Vaccaro

Research output: Contribution to journal › Article

6 Citations (Scopus)

Abstract

The paper reports two techniques for parallelizing, on a MIMD multicomputer, a class of learning algorithms (competitive learning) for artificial neural networks widely used in pattern recognition and understanding. The first technique, following the divide et impera strategy, achieves O(n/P + log P) time for n neurons and P processors interconnected as a tree. A modification of the algorithm allows the application of a systolic technique with the processors interconnected as a ring; this technique has the advantage that the communication time does not depend on the number of processors. The two techniques are also compared on the basis of predicted and measured performance on a transputer-based MIMD machine. As the number of processors grows, the advantage of the systolic approach increases. By contrast, the divide et impera approach is more advantageous in the retrieval phase.
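As a rough illustration of the divide et impera idea described above, the following sketch simulates one competitive-learning (winner-take-all) step with the n neurons partitioned across P "processors": each partition finds its local winner in O(n/P), the P candidates are combined in a reduction (O(log P) on a processor tree), and the global winner's weights are updated. All names (`local_winner`, `competitive_step`, the learning rate `lr`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_winner(weights, x):
    """Each processor finds the closest neuron in its local partition."""
    d = np.linalg.norm(weights - x, axis=1)
    i = int(np.argmin(d))
    return i, d[i]

def competitive_step(partitions, x, lr=0.1):
    """One winner-take-all step over P partitions of the neuron set."""
    # Phase 1: each of the P "processors" computes its local winner, O(n/P).
    local = [local_winner(w, x) for w in partitions]
    # Phase 2: reduce the P candidates to a global winner; on a processor
    # tree this reduction costs O(log P) communication steps.
    p_star, (i_star, _) = min(enumerate(local), key=lambda t: t[1][1])
    # The owning processor moves the winning neuron toward the input.
    partitions[p_star][i_star] += lr * (x - partitions[p_star][i_star])
    return p_star, i_star
```

In a real message-passing implementation each partition lives on a separate processor and phase 2 is a tree reduction of (index, distance) pairs rather than a local `min`.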

Original language: English
Pages (from-to): 449-470
Number of pages: 22
Journal: Concurrency: Practice and Experience
Volume: 5
Issue number: 6
Publication status: Published - 1 Sep 1993
Externally published: Yes


ASJC Scopus subject areas

  • Engineering (all)
