Parallel implementation of backpropagation neural networks on a heterogeneous array of transputers
- Author
Foo, Shou King; Saratchandran, P.; and Sundararajan, N.
- Subjects
Neural networks -- Training, Transputers -- Research, Algorithms -- Research, Integer programming -- Usage
- Abstract
This paper analyzes a parallel implementation of the backpropagation training algorithm on a heterogeneous transputer network (i.e., transputers of different speed and memory) connected in a pipelined ring topology. Training-set parallelism is employed as the parallelizing paradigm for the backpropagation algorithm. Analysis shows that finding the optimal allocation of training patterns among the processors, so as to minimize the time for a training epoch, is a mixed integer programming problem. Using mixed integer programming, optimal pattern allocations for heterogeneous processor networks comprising a mixture of T805-20 (20 MHz) and T805-25 (25 MHz) transputers are determined theoretically for two benchmark problems. The epoch time corresponding to the optimal pattern allocations is then obtained experimentally for the benchmark problems on the T805-20/T805-25 heterogeneous networks. A Monte Carlo simulation study is carried out to statistically verify the optimality of the epoch time obtained from the mixed-integer-programming-based allocations: pattern allocations are generated randomly, and the corresponding epoch time is measured experimentally on the heterogeneous network. The mean and standard deviation of the epoch times from the random allocations are then compared with the optimal epoch time. The results show that the optimal epoch time is always lower than the mean epoch time by more than three standard deviations (3σ) for all sample sizes used in the study, thus validating the theoretical analysis.
- Published
1997
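
To make the allocation idea described in the abstract concrete, here is a minimal, hypothetical sketch of training-set parallelism on a heterogeneous processor set. It is not the paper's formulation: the epoch-time model (maximum per-processor compute time plus a fixed communication term), the four-processor configuration, and all timing constants are assumptions for illustration, and exhaustive search stands in for the paper's mixed integer program, which is feasible only because the example is tiny.

```python
# Illustrative sketch of training-set parallelism pattern allocation on a
# heterogeneous processor network. The cost model and timings are assumed
# values, not the paper's mixed integer programming formulation.

import itertools
import random
import statistics

# Assumed per-pattern processing times (arbitrary units): two slower
# T805-20-like processors and two faster T805-25-like processors.
PER_PATTERN_TIME = [1.25, 1.25, 1.0, 1.0]
COMM_OVERHEAD = 2.0      # assumed fixed per-epoch communication cost
N_PATTERNS = 40          # training patterns to distribute


def epoch_time(allocation):
    """Epoch time under training-set parallelism: processors work on their
    own patterns in parallel, so the epoch is bounded by the slowest one."""
    return max(n * t for n, t in zip(allocation, PER_PATTERN_TIME)) + COMM_OVERHEAD


def optimal_allocation():
    """Brute-force the integer allocation that minimizes epoch time
    (standing in for the paper's mixed integer program)."""
    best, best_t = None, float("inf")
    procs = len(PER_PATTERN_TIME)
    for split in itertools.product(range(N_PATTERNS + 1), repeat=procs - 1):
        rest = N_PATTERNS - sum(split)
        if rest < 0:
            continue
        alloc = list(split) + [rest]
        t = epoch_time(alloc)
        if t < best_t:
            best, best_t = alloc, t
    return best, best_t


def random_allocation():
    """Uniformly random integer allocation of all patterns."""
    alloc = [0] * len(PER_PATTERN_TIME)
    for _ in range(N_PATTERNS):
        alloc[random.randrange(len(alloc))] += 1
    return alloc


if __name__ == "__main__":
    opt_alloc, opt_t = optimal_allocation()
    print("optimal allocation:", opt_alloc, "epoch time:", opt_t)

    # Monte Carlo comparison against random allocations, in the spirit of
    # the 3-sigma check described in the abstract.
    times = [epoch_time(random_allocation()) for _ in range(1000)]
    mean, sigma = statistics.mean(times), statistics.stdev(times)
    print(f"random allocations: mean={mean:.2f}, sigma={sigma:.2f}")
    print("optimal below mean - 3*sigma:", opt_t < mean - 3 * sigma)
```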