8 results for "Engelbrecht, Andries P."
Search Results
2. An Analysis of Activation Function Saturation in Particle Swarm Optimization Trained Neural Networks.
- Author
- Dennis, Cody, Engelbrecht, Andries P., and Ombuki-Berman, Beatrice M.
- Subjects
- PARTICLE swarm optimization, HYPERBOLIC functions, ARTIFICIAL neural networks, TANGENT function
- Abstract
The activation functions used in an artificial neural network define how the nodes of the network respond to input, directly influence the shape of the error surface, and play a role in the difficulty of the neural network training problem. The choice of activation function is therefore a significant question that must be addressed when applying a neural network to a problem. One issue that must be considered when selecting an activation function is activation function saturation. Saturation occurs when a bounded activation function primarily outputs values close to its boundaries. Excessive saturation damages the network's ability to encode information and may prevent successful training. Common functions such as the logistic and hyperbolic tangent functions have been shown to exhibit saturation when the neural network is trained using particle swarm optimization. This study proposes a new measure of activation function saturation, evaluates the saturation behavior of eight common activation functions, and evaluates six mechanisms for controlling activation function saturation in particle swarm optimization-based neural network training. Activation functions that result in low levels of saturation are identified. For each activation function, recommendations are made regarding which saturation control mechanism is most effective at reducing saturation. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
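The saturation phenomenon described in the abstract above can be made concrete with a short sketch. The snippet below quantifies saturation as the fraction of hidden-unit outputs lying within a small margin of the activation function's bounds; this is a simplified, assumed measure for illustration only, not necessarily the measure proposed in the paper.

```python
# Illustrative sketch: quantify saturation of a bounded activation function
# as the fraction of hidden-unit outputs that lie within a small margin of
# the function's bounds. This is an assumed, simplified measure, not
# necessarily the one proposed in the paper.
import numpy as np

def saturation_fraction(activations, lower=-1.0, upper=1.0, margin=0.05):
    """Fraction of activation values within `margin` of either bound."""
    a = np.asarray(activations)
    near_lower = a <= lower + margin * (upper - lower)
    near_upper = a >= upper - margin * (upper - lower)
    return np.mean(near_lower | near_upper)

# Example: hidden-layer outputs of a tanh network (bounds -1 and 1).
rng = np.random.default_rng(0)
net_input = rng.normal(scale=5.0, size=10_000)   # large weights -> heavy saturation
hidden_output = np.tanh(net_input)
print(f"saturated outputs: {saturation_fraction(hidden_output):.2%}")
```

The margin (5% of the output range here) is an arbitrary illustrative choice; a stricter margin counts only outputs essentially pinned to the bounds.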
3. Time Series Forecasting Using Neural Networks: Are Recurrent Connections Necessary?
- Author
- Abdulkarim, Salihu A. and Engelbrecht, Andries P.
- Subjects
- RECURRENT neural networks, PARTICLE swarm optimization, ARTIFICIAL neural networks
- Abstract
Artificial neural networks (NNs) are widely used in modeling and forecasting time series. Since most practical time series are non-stationary, NN forecasters are often implemented with recurrent/delayed connections to handle the temporal component of the time-varying sequence. These recurrent/delayed connections increase the number of weights that must be optimized during training of the NN. Particle swarm optimization (PSO) is now an established method for training NNs, and was shown in several studies to outperform the classical backpropagation training algorithm. The original PSO was, however, designed for static environments. In dealing with non-stationary data, modified versions of PSO designed for optimization in dynamic environments are used. These dynamic PSOs have been used successfully to train NNs on classification problems in non-stationary environments. This paper formulates the training of a NN forecaster as a dynamic optimization problem in order to investigate whether recurrent/delayed connections are necessary in a NN time series forecaster when a dynamic PSO is used as the training algorithm. Experiments were carried out on eight forecasting problems. For each problem, a feedforward NN (FNN) is trained with a dynamic PSO algorithm and its performance is compared to that obtained from four different types of recurrent NNs (RNNs), each trained using gradient descent, a standard PSO for static environments, and the dynamic PSO algorithm. The RNNs employed were an Elman NN, a Jordan NN, a multirecurrent NN, and a time delay NN. The performance of these forecasting models was evaluated under three different dynamic environmental scenarios. The results show that the FNNs trained with the dynamic PSO significantly outperformed all the RNNs trained using any of the other algorithms considered. These findings highlight that recurrent/delayed connections are not necessary in NNs used for time series forecasting (for the time series considered in this study) as long as a dynamic PSO algorithm is used as the training method. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
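As a rough illustration of the abstract above, the sketch below shows how a one-step-ahead forecasting task can be cast as a purely feedforward (windowed) problem whose flat weight vector a dynamic PSO could repeatedly re-optimise as new data arrive. The window size, network shape, and helper names are assumptions made for the example; the dynamic PSO itself is not implemented here, only the fitness it would minimise.

```python
# Illustrative sketch: a feedforward (windowed) forecaster whose flat weight
# vector is the search space a (dynamic) PSO particle would occupy.
# Window size, hidden-layer size, and the synthetic series are assumptions.
import numpy as np

def make_windows(series, window=5):
    """Turn a 1-D series into (input window, next value) training pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y

def fnn_forecast(weights, X, hidden=8, window=5):
    """Single-hidden-layer feedforward forecaster driven by a flat weight vector."""
    W1 = weights[:window * hidden].reshape(window, hidden)
    b1 = weights[window * hidden:window * hidden + hidden]
    W2 = weights[-hidden - 1:-1]
    b2 = weights[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

series = np.sin(np.linspace(0, 20, 300)) + 0.1 * np.random.default_rng(1).normal(size=300)
X, y = make_windows(series)
dim = 5 * 8 + 8 + 8 + 1                                  # total number of weights
weights = np.random.default_rng(2).normal(scale=0.1, size=dim)
mse = np.mean((fnn_forecast(weights, X) - y) ** 2)       # fitness a dynamic PSO would minimise
print(f"MSE of random weights: {mse:.4f}")
```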
4. Gramophone Noise Detection and Reconstruction Using Time Delay Artificial Neural Networks.
- Author
- Stallmann, Christoph F. and Engelbrecht, Andries P.
- Subjects
- PHONOGRAPH, ACOUSTIC emission, ARTIFICIAL neural networks, SOUND waves, ALGORITHMS
- Abstract
Gramophone records were the main recording medium for more than seven decades and regained widespread popularity over the past several years. Being an analog storage medium, gramophone records are subject to distortions caused by scratches, dust particles, degradation, and other forms of improper handling. The resulting noise often leads to an unpleasant listening experience and requires a filtering process to remove the unwanted disruptions and improve the audio quality. This paper proposes a novel approach that employs various feedforward time delay artificial neural networks to detect and reconstruct noise in musical sound waves. A set of 800 songs from eight different genres was used to validate the performance of the neural networks. The performance was analyzed in terms of outlier detection and interpolation accuracy, computational time, and the tradeoff between accuracy and time. The empirical results of both the detection and reconstruction neural networks were compared to a number of other algorithms, including various statistical measurements, duplication approaches, trigonometric processes, polynomials, and time series models. It was found that the neural networks' outlier detection accuracy was slightly lower than that of some of the other noise identification algorithms, but they achieved a more efficient tradeoff by detecting most of the noise in real time. The reconstruction process favored neural networks, with an increase in interpolation accuracy compared to other widely used time series models. It was also found that certain genres, such as classical, country, and jazz music, were interpolated more accurately. Volatile signals, such as electronic, metal, and pop music, were more challenging to reconstruct and were substantially better interpolated using neural networks than the other examined algorithms. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
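The detection-and-reconstruction idea in the abstract above amounts to predicting a corrupted sample from the samples that precede it. The sketch below uses a simple linear predictor fitted by least squares as a stand-in for the trained time delay neural network; the window size, the synthetic tone, and the linear model are assumptions made purely for illustration.

```python
# Illustrative sketch: reconstruct a corrupted audio sample from its preceding
# window of samples. A least-squares linear predictor stands in here for the
# trained time delay neural network described in the paper.
import numpy as np

t = np.arange(2000)
clean = np.sin(2 * np.pi * 440 * t / 44_100)            # a 440 Hz tone at 44.1 kHz

window = 16
X = np.array([clean[i:i + window] for i in range(len(clean) - window)])
y = clean[window:]

# Fit the predictor on clean audio (stand-in for TDNN training).
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

# Simulate a click: one corrupted sample, reconstructed from its past window.
pos = 1000
corrupted = clean.copy()
corrupted[pos] += 0.8                                    # the "scratch"
predicted = corrupted[pos - window:pos] @ coeffs
print(f"true {clean[pos]:+.4f}  corrupted {corrupted[pos]:+.4f}  reconstructed {predicted:+.4f}")
```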
5. Particle Swarm Optimization Approaches to Coevolve Strategies for the Iterated Prisoner's Dilemma.
- Author
- Franken, Nelis and Engelbrecht, Andries P.
- Subjects
- PRISONER'S dilemma game, SWARM intelligence, ARTIFICIAL neural networks, ALGORITHMS, CHOICE (Psychology)
- Abstract
This paper presents and investigates the application of coevolutionary training techniques based on particle swarm optimization (PSO) to evolve playing strategies for the nonzero-sum problem of the iterated prisoner's dilemma (IPD). Three different coevolutionary PSO techniques are used, differing in the way that IPD strategies are represented: a neural network (NN) approach in which the NN is used to predict the next action, a binary PSO approach in which the particle represents a complete playing strategy, and finally, a novel approach that exploits the symmetrical structure of man-made strategies. The last technique uses a PSO algorithm as a function approximator to evolve a function that characterizes the dynamics of the IPD. These different PSO approaches are compared experimentally with one another and with popular man-made strategies. The performance of these approaches is evaluated in both clean and noisy environments. Results indicate that NNs cooperate well, but may develop weak strategies that can cause catastrophic collapses. The binary PSO technique does not have the same deficiency, instead resulting in an overall state of equilibrium in which some strategies are allowed to exploit the population, but never dominate. The symmetry approach is not as successful as the binary PSO approach in maintaining cooperation in both noisy and noiseless environments, exhibiting selfish behavior against the benchmark strategies and depriving them of almost any payoff. Overall, the PSO techniques are successful at generating a variety of strategies for use in the IPD, duplicating and improving on observations from existing evolutionary IPD populations. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
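To make the binary-representation idea in the abstract above concrete, the sketch below shows one possible way a fixed-length bit vector can encode a complete IPD playing strategy (a first move plus a response to each joint outcome of the previous round) and be evaluated in play. The encoding and the payoff matrix are standard textbook choices assumed for illustration, not necessarily those used in the paper.

```python
# Illustrative sketch: a binary vector as a complete IPD strategy.
# Bit 0: first move; bits 1-4: response to (my last, opponent last) in
# {CC, CD, DC, DD}. 1 = cooperate, 0 = defect. The encoding and payoffs are
# assumptions for illustration.
import numpy as np

TIT_FOR_TAT = np.array([1, 1, 0, 1, 0])   # cooperate first, then copy the opponent

def play_ipd(strategy_a, strategy_b, rounds=10):
    """Play two encoded strategies against each other; return total payoffs."""
    payoff = {(1, 1): (3, 3), (1, 0): (0, 5), (0, 1): (5, 0), (0, 0): (1, 1)}
    score_a = score_b = 0
    move_a, move_b = strategy_a[0], strategy_b[0]
    for _ in range(rounds):
        pa, pb = payoff[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        idx_a = 1 + 2 * (1 - move_a) + (1 - move_b)   # map last round to table index
        idx_b = 1 + 2 * (1 - move_b) + (1 - move_a)
        move_a, move_b = strategy_a[idx_a], strategy_b[idx_b]
    return score_a, score_b

always_defect = np.array([0, 0, 0, 0, 0])
print(play_ipd(TIT_FOR_TAT, always_defect))   # tit-for-tat loses only the first round
```

Because every bit pattern is a valid lookup table, a binary PSO can search the space of complete strategies directly, which is what allows the coevolving population described in the abstract.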
6. A Cooperative Approach to Particle Swarm Optimization.
- Author
- Van den Bergh, Frans and Engelbrecht, Andries P.
- Subjects
- MATHEMATICAL optimization, COMPUTER algorithms, EVOLUTIONARY computation, ARTIFICIAL neural networks, COMPUTER science
- Abstract
The particle swarm optimizer (PSO) is a stochastic, population-based optimization technique that can be applied to a wide range of problems, including neural network training. This paper presents a variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer, or CPSO, employing cooperative behavior to significantly improve the performance of the original algorithm. This is achieved by using multiple swarms to optimize different components of the solution vector cooperatively. Application of the new PSO algorithm on several benchmark optimization problems shows a marked improvement in performance over the traditional PSO. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
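The cooperative split described in the abstract above can be sketched as follows: each sub-swarm owns one slice of the solution vector, and a candidate slice is evaluated inside a context vector assembled from the other sub-swarms' best slices. The benchmark function, swarm sizes, and the simplified velocity update below are assumptions; this is a minimal sketch of the cooperative decomposition, not the full CPSO algorithm from the paper.

```python
# Illustrative sketch of the cooperative decomposition: each sub-swarm
# optimises one slice of the solution vector, evaluated in the context of the
# other sub-swarms' current best slices. Parameters and the simplified
# velocity update are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return np.sum(x ** 2)                                # benchmark to minimise

dim, parts, swarm_size, iters = 6, 3, 10, 200
slices = np.array_split(np.arange(dim), parts)           # indices owned by each sub-swarm
pos = [rng.uniform(-5, 5, (swarm_size, len(s))) for s in slices]
vel = [np.zeros_like(p) for p in pos]
best = [p[0].copy() for p in pos]                         # best slice found per sub-swarm
context = np.concatenate(best)

for _ in range(iters):
    for k, idx in enumerate(slices):
        for i in range(swarm_size):
            trial = context.copy()
            trial[idx] = pos[k][i]                        # evaluate slice in shared context
            if sphere(trial) < sphere(context):
                best[k] = pos[k][i].copy()
                context[idx] = best[k]
            # simplified velocity/position update toward the sub-swarm's best slice
            vel[k][i] = 0.7 * vel[k][i] + 1.4 * rng.random() * (best[k] - pos[k][i])
            pos[k][i] += vel[k][i]

print(f"best fitness after {iters} iterations: {sphere(context):.6f}")
```

The key design choice is credit assignment: a slice is judged only by how well it performs inside the shared context vector, which is what lets several small swarms improve a single solution cooperatively.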
7. Learning to Play Games Using a PSO-Based Competitive Learning Approach.
- Author
- Messerschmidt, Leon and Engelbrecht, Andries P.
- Subjects
- MATHEMATICAL optimization, EVOLUTIONARY computation, ARTIFICIAL neural networks, ARTIFICIAL intelligence, COMPUTER science, COMPUTER programming
- Abstract
A new competitive approach is developed for learning agents to play two-agent games. This approach uses particle swarm optimizers (PSO) to train neural networks to predict the desirability of states in the leaf nodes of a game tree. The new approach is applied to the TicTacToe game and compared with the performance of an evolutionary approach. A performance criterion is defined to quantify performance against that of players making random moves. The results show that the new PSO-based approach performs well compared with the evolutionary approach. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
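As a minimal sketch of the evaluation role the abstract above assigns to the neural network, the snippet below scores candidate TicTacToe states with a small network and plays the highest-scoring move. The weights are random here; in the approach described they would be evolved by the competitive PSO. The board encoding and network shape are assumptions.

```python
# Illustrative sketch: a small neural network scores candidate TicTacToe
# states; the agent plays the move whose resulting state scores highest.
# Random weights stand in for PSO-evolved ones; encoding is an assumption.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 5
W1 = rng.normal(size=(9, HIDDEN))
b1 = rng.normal(size=HIDDEN)
W2 = rng.normal(size=HIDDEN)

def desirability(board):
    """Score a board (+1 = own mark, -1 = opponent, 0 = empty)."""
    return float(np.tanh(np.tanh(board @ W1 + b1) @ W2))

def choose_move(board, player=1):
    """One-ply lookahead: pick the empty cell whose resulting state scores highest."""
    moves = [i for i in range(9) if board[i] == 0]
    scores = [desirability(np.where(np.arange(9) == m, player, board)) for m in moves]
    return moves[int(np.argmax(scores))]

board = np.array([1, -1, 0, 0, 1, 0, 0, 0, -1])
print("chosen cell:", choose_move(board))
```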
8. Sensitivity Analysis for Selective Learning by Feedforward Neural Networks.
- Author
- Engelbrecht, Andries P.
- Subjects
- ACTIVE learning, ARTIFICIAL neural networks
- Abstract
Presents information on a study that focused on selective learning for feedforward neural networks. Overview of active learning; Mathematical model of selective learning; Comparison of the complexity of the selective learning algorithm with normal fixed set learning.
- Published
- 2001
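One way to illustrate sensitivity-driven selective learning, in the spirit of the abstract above, is to rank training patterns by how strongly the network output responds to small input perturbations and to train only on the most sensitive subset. The finite-difference estimate, the tiny network, and the selection threshold below are all assumptions for illustration, not the exact model from the paper.

```python
# Illustrative sketch: rank training patterns by the magnitude of the network
# output's derivative with respect to the inputs (estimated by finite
# differences) and keep the most sensitive subset. Network and threshold are
# assumptions.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 6))
b1 = rng.normal(size=6)
W2 = rng.normal(size=6)

def output(X):
    return np.tanh(np.tanh(X @ W1 + b1) @ W2)

def sensitivity(X, eps=1e-4):
    """Finite-difference norm of d(output)/d(input) for each pattern."""
    grads = np.zeros_like(X)
    for j in range(X.shape[1]):
        step = np.zeros(X.shape[1])
        step[j] = eps
        grads[:, j] = (output(X + step) - output(X - step)) / (2 * eps)
    return np.linalg.norm(grads, axis=1)

X = rng.uniform(-1, 1, size=(200, 2))
sens = sensitivity(X)
selected = X[sens >= np.quantile(sens, 0.75)]   # keep the most informative quarter
print(f"selected {len(selected)} of {len(X)} patterns")
```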