396 results
Search Results
2. Common Asymptotic Behavior of Solutions and Almost Periodicity for Discontinuous, Delayed, and Impulsive Neural Networks.
- Author
-
Allegretto, Walter, Papini, Duccio, and Forti, Mauro
- Subjects
CYCLES ,DISCONTINUOUS functions ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,PERIODIC functions - Abstract
The paper considers a general neural network model with impulses at a given sequence of instants, discontinuous neuron activations, delays, and time-varying data and inputs. It is shown that when the neuron interconnections satisfy an M-matrix condition, or a dominance condition, then the state solutions and the output solutions display a common asymptotic behavior as time t → +∞. It is also shown, via a new technique based on prolonging the solutions of the delayed neural network to -∞, that it is possible to select a unique special solution that is globally exponentially stable and can be considered as the unique global attractor for the network. Finally, this paper shows that for almost periodic data and inputs the selected solution is almost periodic; moreover, it is robust with respect to a large class of perturbations of the data. Analogous results also hold for periodic data and inputs. A by-product of the analysis is that a sequence of almost periodic impulses is able to induce in the generic case (nonstationary) almost periodic solutions in an otherwise globally convergent nonimpulsive neural network. To the authors' knowledge the results in this paper are the only available results on global exponential stability of the unique periodic or almost periodic solution for a general neural network model combining three main features, i.e., impulses, discontinuous neuron activations and delays. The results in this paper are compared with several results in the literature dealing with periodicity or almost periodicity of some subclasses of the neural network model here considered and some hints for future work are given. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
3. Regularized Negative Correlation Learning for Neural Network Ensembles.
- Author
-
Huanhuan Chen and Xin Yao
- Subjects
ARTIFICIAL neural networks ,SET theory ,ALGORITHMS ,ARTIFICIAL intelligence ,COMPUTER science - Abstract
Negative correlation learning (NCL) is a neural network ensemble learning algorithm that introduces a correlation penalty term into the cost function of each individual network so that each neural network minimizes its mean square error (MSE) together with the correlation of the ensemble. This paper analyzes NCL and reveals that the training of NCL (when λ = 1) corresponds to training the entire ensemble as a single learning machine that only minimizes the MSE without regularization. This analysis explains why NCL is prone to overfitting the noise in the training set. This paper also demonstrates that tuning the correlation parameter λ in NCL by cross-validation cannot overcome the overfitting problem. The paper analyzes this problem and proposes the regularized negative correlation learning (RNCL) algorithm, which incorporates an additional regularization term for the whole ensemble. RNCL decomposes the ensemble's training objectives, including MSE and regularization, into a set of sub-objectives, each implemented by an individual neural network. In this paper, we also provide a Bayesian interpretation for RNCL and an automatic algorithm to optimize regularization parameters based on Bayesian inference. The RNCL formulation is applicable to any nonlinear estimator minimizing the MSE. Experiments on synthetic as well as real-world data sets demonstrate that RNCL achieves better performance than NCL, especially when the noise level in the data set is nontrivial. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
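As a side note to this entry, the λ = 1 collapse described in the abstract can be checked numerically. The sketch below uses a standard textbook form of the NCL cost with made-up member outputs; it is not the authors' code:

```python
import numpy as np

# Toy sketch of the NCL cost (illustrative values, not the paper's code).
# For member outputs f_i and target y, each member's cost is
#   e_i = (f_i - y)^2 + lambda * (f_i - fbar) * sum_{j != i} (f_j - fbar),
# and the cross term equals -(f_i - fbar)^2.

def ncl_total_cost(f, y, lam):
    """Sum of per-member NCL costs for a single example."""
    fbar = f.mean()
    mse_terms = (f - y) ** 2
    penalty_terms = -lam * (f - fbar) ** 2  # correlation penalty
    return np.sum(mse_terms + penalty_terms)

f = np.array([0.2, 0.8, 0.5])  # three member outputs (made up)
y = 0.4                        # target
M = len(f)

# At lambda = 1 the total collapses to M * (ensemble error)^2, i.e.,
# the ensemble is trained as one unregularized machine.
total = ncl_total_cost(f, y, lam=1.0)
ensemble_err = (f.mean() - y) ** 2
print(np.isclose(total, M * ensemble_err))  # True
```

At λ = 1 the penalty exactly cancels the diversity term (by the variance decomposition of the squared error), which is the mechanism behind the overfitting behavior the abstract describes.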
4. The Q-Norm Complexity Measure and the Minimum Gradient Method: A Novel Approach to the Machine Learning Structural Risk Minimization Problem.
- Author
-
Vieira, D. A. G., Takahashi, Ricardo H. C., Palade, Vasile, Vasconcelos, J. A., and Caminhas, W. M.
- Subjects
MATHEMATICAL optimization ,MACHINE learning ,MACHINE theory ,ARTIFICIAL intelligence ,SELF-organizing systems ,ARTIFICIAL neural networks ,COMPUTATIONAL intelligence - Abstract
This paper presents a novel approach for dealing with the structural risk minimization (SRM) applied to a general setting of the machine learning problem. The formulation is based on the fundamental concept that supervised learning is a bi-objective optimization problem in which two conflicting objectives should be minimized. The objectives are related to the empirical training error and the machine complexity. In this paper, one general Q-norm method to compute the machine complexity is presented, and, as a particular practical case, the minimum gradient method (MGM) is derived relying on the definition of the fat-shattering dimension. A practical mechanism for parallel layer perceptron (PLP) network training, involving only quasi-convex functions, is generated using the aforementioned definitions. Experimental results on 15 different benchmarks are presented, which show the potential of the proposed ideas. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
5. Stability and Hopf Bifurcation of a General Delayed Recurrent Neural Network.
- Author
-
Wenwu Yu, Jinde Cao, and Guanrong Chen
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,EVOLUTIONARY computation ,EQUILIBRIUM ,STABILITY (Mechanics) ,HOPF algebras ,ALGEBRAIC topology - Abstract
In this paper, the stability and bifurcation of a general recurrent neural network with multiple time delays are considered, where all the variables of the network can be regarded as bifurcation parameters. It is found that Hopf bifurcation occurs when these parameters pass through some critical values at which the conditions for local asymptotic stability of the equilibrium are not satisfied. By analyzing the characteristic equation and using the frequency domain method, the existence of Hopf bifurcation is proved. The stability of bifurcating periodic solutions is determined by the harmonic balance approach, the Nyquist criterion, and the graphical Hopf bifurcation theorem. Moreover, a critical condition is derived under which the stability is not guaranteed; thus a necessary and sufficient condition for ensuring local asymptotic stability is well understood, and from it the essential dynamics of the delayed neural network are revealed. Finally, numerical results are given to verify the theoretical analysis, and some interesting phenomena are observed and reported. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
6. Existence and Global Exponential Stability of Almost Periodic Solution for Cellular Neural Networks With Variable Coefficients and Time-Varying Delays.
- Author
-
Haijun Jiang, Long Zhang, and Zhidong Teng
- Subjects
LYAPUNOV functions ,ARTIFICIAL neural networks ,DIFFERENTIAL equations ,ARTIFICIAL intelligence ,CALCULUS ,EXPONENTIAL functions - Abstract
In this paper, we study cellular neural networks with almost periodic variable coefficients and time-varying delays. By using the existence theorem of almost periodic solutions for general functional differential equations, introducing several real parameters, and applying the Lyapunov functional method and the Young inequality technique, we obtain some sufficient conditions ensuring the existence, uniqueness, and global exponential stability of the almost periodic solution. The results obtained in this paper are new and useful, and extend and improve existing ones in the previous literature. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
7. Sensitivity to Noise in Bidirectional Associative Memory (BAM).
- Author
-
Du, Shengzhi, Zengqiang Chen, Zhuzhi Yuan, and Xinghui Zhang
- Subjects
MEMORY ,SENSORY perception ,ALGORITHMS ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,COMPUTER science - Abstract
The original Hebbian encoding scheme of bidirectional associative memory (BAM) provides poor pattern capacity and recall performance. Based on Rosenblatt's perceptron learning algorithm, the pattern capacity of BAM has been enlarged and perfect recall of all training pattern pairs guaranteed. However, these methods put their emphasis on pattern capacity rather than error correction capability, which is another critical point of BAM. This paper analyzes the sensitivity to noise in BAM and obtains an interesting idea for improving the noise immunity of BAM. Some researchers have found that the noise sensitivity of BAM relates to the minimum absolute value of net inputs (MAV). However, in this paper, the analysis of failure association shows that it is related not only to MAV but also to the variance of the weights associated with synapse connections. In fact, it is a positive monotone increasing function of the quotient of MAV divided by the variance of weights. This idea provides a useful principle for improving the error correction capability of BAM. Some revised encoding schemes, such as small variance learning for BAM (SVBAM), evolutionary pseudorelaxation learning for BAM (EPRLAB), and evolutionary bidirectional learning (EBL), are introduced to illustrate the performance of this principle. All these methods perform better than their original versions in noise immunity. Moreover, they have no negative effect on the pattern capacity of BAM. The convergence of these methods is also discussed in this paper. If solutions exist, EPRLAB and EBL always converge to a global optimal solution in the sense of both pattern capacity and noise immunity. However, the convergence of SVBAM may be affected by a preset function. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
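As an aside to this entry, the two quantities the abstract ties to noise immunity, the minimum absolute net input (MAV) and the variance of the weights, can be computed for a toy Hebbian BAM. The patterns below are made up for illustration; this is the plain Hebbian encoding, not the paper's revised schemes:

```python
import numpy as np

# Toy Hebbian BAM encoding (illustrative patterns, not the paper's code).
X = np.array([[1, -1, 1, -1],
              [1, 1, -1, -1]])          # bipolar patterns, layer X
Y = np.array([[1, 1, -1],
              [-1, 1, 1]])              # associated patterns, layer Y
W = X.T @ Y                             # Hebbian weight matrix

net = X @ W                             # net inputs to layer Y on recall
mav = np.min(np.abs(net))               # minimum absolute net input (MAV)
weight_var = W.var()                    # variance of the weights

print(mav, round(weight_var, 2))        # 4 2.0
```

For these patterns recall is perfect (the sign of every net input matches Y), and the abstract's heuristic says noise immunity should grow with the ratio of `mav` to the spread of `W`.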
8. Probabilistic Sequential Independent Components Analysis.
- Author
-
Welling, Max, Zemel, Richard S., and Hinton, Geoffrey E.
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,ALGORITHMS ,LEARNING ,GRAPHICAL modeling (Statistics) ,STOCHASTIC processes - Abstract
Under-complete models, which derive lower dimensional representations of input data, are valuable in domains in which the number of input dimensions is very large, such as data consisting of a temporal sequence of images. This paper presents the under-complete product of experts (UPoE), where each expert models a one-dimensional projection of the data. Maximum-likelihood learning rules for this model constitute a tractable and exact algorithm for learning under-complete independent components. The learning rules for this model coincide with approximate learning rules proposed earlier for under-complete independent component analysis (UICA) models. This paper also derives an efficient sequential learning algorithm from this model and discusses its relationship to sequential independent component analysis (ICA), projection pursuit density estimation, and feature induction algorithms for additive random field models. This paper demonstrates the efficacy of these novel algorithms on high-dimensional continuous datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
9. Design of Recurrent Neural Networks for Solving Constrained Least Absolute Deviation Problems.
- Author
-
Xiaolin Hu, Changyin Sun, and Bo Zhang
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,LEAST absolute deviations (Statistics) ,LEAST squares ,MATHEMATICAL optimization ,CHEBYSHEV approximation - Abstract
Recurrent neural networks for solving constrained least absolute deviation (LAD) problems, or L1-norm optimization problems, have attracted much interest in recent years. So far, however, most neural networks can only deal with some special linear constraints efficiently. In this paper, two neural networks are proposed for solving LAD problems with various linear constraints, including equality, two-sided inequality, and bound constraints. When tailored to solve special cases of LAD problems in which not all types of constraints are present, the two networks can yield simpler architectures than most existing ones in the literature. In particular, for solving problems with both equality and one-sided inequality constraints, another network is devised. All of the networks proposed in this paper are rigorously shown to be capable of solving the corresponding problems. The different networks designed for solving the same types of problems possess the same structural complexity, due to the fact that these architectures share the same computing blocks and differ only in the connections between some blocks. This provides some flexibility for circuit realization. Numerical simulations are carried out to illustrate the theoretical results and compare the convergence rates of the networks. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
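As an aside to this entry, the basic robustness property that motivates LAD (L1-norm) estimation over least squares can be seen in a one-parameter toy problem; the data and search grid below are made up for illustration:

```python
import numpy as np

data = np.array([1.0, 1.2, 0.9, 1.1, 10.0])  # location samples with one outlier
grid = np.linspace(0.0, 12.0, 12001)          # candidate estimates, step 0.001

# Minimize the L1 (LAD) and L2 (least squares) costs by brute force over the grid.
lad = grid[np.argmin([np.abs(data - c).sum() for c in grid])]
lsq = grid[np.argmin([((data - c) ** 2).sum() for c in grid])]

print(round(lad, 3), round(lsq, 3))  # 1.1 2.84
```

The LAD estimate recovers the median (1.1) and ignores the outlier, while the least squares estimate is dragged toward the mean (2.84); the recurrent networks in this entry solve such L1 problems under linear constraints.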
10. Dynamic Analysis of a General Class of Winner-Take-All Competitive Neural Networks.
- Author
-
Yuguang Fang, Cohen, Michael A., and Kincaid, Thomas G.
- Subjects
ARTIFICIAL neural networks ,NEURAL circuitry ,COGNITIVE neuroscience ,SEMICONDUCTORS ,METAL oxide semiconductor field-effect transistors ,ARTIFICIAL intelligence - Abstract
This paper studies a general class of dynamical neural networks with lateral inhibition, exhibiting winner-take-all (WTA) behavior. These networks are motivated by a metal-oxide-semiconductor field effect transistor (MOSFET) implementation of neural networks, in which mutual competition plays a very important role. We show that WTA behavior exists for a fairly general class of competitive neural networks. Sufficient conditions for the network to have a WTA equilibrium are obtained, and rigorous convergence analysis is carried out. The conditions for the network to exhibit WTA behavior obtained in this paper provide design guidelines for the network implementation and fabrication. We also demonstrate that whenever the network enters the WTA region, it stays in that region and settles down exponentially fast to the WTA point. This speeds up decision making: as soon as the network enters the region, the winner can be declared. Finally, we show that this WTA neural network has a self-resetting property, and a resetting principle is proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
11. Multistability and New Attraction Basins of Almost-Periodic Solutions of Delayed Neural Networks.
- Author
-
Lili Wang, Wenlian Lu, and Tianping Chen
- Subjects
STABILITY (Mechanics) ,ARTIFICIAL neural networks ,SET theory ,COMPUTER simulation ,ARTIFICIAL intelligence - Abstract
In this paper, we investigate multistability of almost-periodic solutions of recurrently connected neural networks with delays (simply called delayed neural networks). We reveal that under some conditions, the space R^n can be divided into 2^n subsets, and in each subset the delayed n-neuron neural network has a locally stable almost-periodic solution. Furthermore, we investigate the attraction basins of these almost-periodic solutions and reveal that the attraction basin of an almost-periodic trajectory is larger than the subset where that trajectory is located. In addition, several numerical simulations are presented to corroborate the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
12. Large Memory Capacity in Chaotic Artificial Neural Networks: A View of the Anti-Integrable Limit.
- Author
-
Wei Lin and Guanrong Chen
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,EVOLUTIONARY computation ,SELF-organizing maps ,PERCEPTRONS - Abstract
In the literature, it was reported that the chaotic artificial neural network model with sinusoidal activation functions possesses a large memory capacity as well as a remarkable ability of retrieving the stored patterns, better than the conventional chaotic model with only monotonic activation functions such as sigmoidal functions. This paper, from the viewpoint of the anti-integrable limit, elucidates the mechanism inducing the superiority of the model with periodic activation functions that includes sinusoidal functions. Particularly, by virtue of the anti-integrable limit technique, this paper shows that any finite-dimensional neural network model with periodic activation functions and properly selected parameters has much more abundant chaotic dynamics that truly determine the model's memory capacity and pattern-retrieval ability. To some extent, this paper mathematically and numerically demonstrates that an appropriate choice of the activation functions and control scheme can lead to a large memory capacity and better pattern-retrieval ability of the artificial neural network models. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
13. Stability Analysis of Discrete-Time Recurrent Neural Networks With Stochastic Delay.
- Author
-
Yu Zhao, Huijun Gao, Lam, James, and Ke Chen
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,EVOLUTIONARY computation ,STOCHASTIC processes ,PROBABILITY theory - Abstract
This paper is concerned with the stability analysis of discrete-time recurrent neural networks (RNNs) with time delays treated as random variables drawn from some probability distribution. By introducing the variation probability of the time delay, a common delayed discrete-time RNN system is transformed into one with stochastic parameters. Improved conditions for the mean square stability of these systems are obtained by employing new Lyapunov functions, and novel techniques are used to achieve delay dependence. The merit of the proposed conditions lies in their reduced conservatism, which is made possible by considering not only the range of the time delays but also their variation probability distribution. A numerical example is provided to show the advantages of the proposed conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
14. State Estimation for Coupled Uncertain Stochastic Networks With Missing Measurements and Time-Varying Delays: The Discrete-Time Case.
- Author
-
Jinling Liang, Zidong Wang, and Xiaohui Liu
- Subjects
COMPUTER networks ,ELECTRONIC data processing ,DISCRETE-time systems ,DIGITAL control systems ,LYAPUNOV functions ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence - Abstract
This paper is concerned with the problem of state estimation for a class of discrete-time coupled uncertain stochastic complex networks with missing measurements and time-varying delay. The parameter uncertainties are assumed to be norm-bounded and enter into both the network state and the network output. The stochastic Brownian motions affect not only the coupling term of the network but also the overall network dynamics. The nonlinear terms that satisfy the usual Lipschitz conditions exist in both the state and measurement equations. Through available output measurements described by a binary switching sequence that obeys a conditional probability distribution, we aim to design a state estimator to estimate the network states such that, for all admissible parameter uncertainties and time-varying delays, the dynamics of the estimation error is guaranteed to be globally exponentially stable in the mean square. By employing the Lyapunov functional method combined with the stochastic analysis approach, several delay-dependent criteria are established that ensure the existence of the desired estimator gains, and then the explicit expression of such estimator gains is characterized in terms of the solution to certain linear matrix inequalities (LMIs). Two numerical examples are exploited to illustrate the effectiveness of the proposed estimator design schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
15. Almost Sure Exponential Stability of Recurrent Neural Networks With Markovian Switching.
- Author
-
Yi Shen and Jun Wang
- Subjects
MARKOV processes ,STOCHASTIC processes ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,SWITCHING circuits ,DIGITAL electronics - Abstract
This paper presents new stability results for recurrent neural networks with Markovian switching. First, algebraic criteria for the almost sure exponential stability of recurrent neural networks with Markovian switching and without time delays are derived. The results show that the almost sure exponential stability of such a neural network does not require the stability of the neural network at every individual parametric configuration. Next, both delay-dependent and delay-independent criteria for the almost sure exponential stability of recurrent neural networks with time-varying delays and Markovian-switching parameters are derived by means of a generalized stochastic Halanay inequality. The results herein include existing ones for recurrent neural networks without Markovian switching as special cases. Finally, simulation results in three numerical examples are discussed to illustrate the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
16. Spurious Valleys in the Error Surface of Recurrent Networks--Analysis and Avoidance.
- Author
-
Horn, Jason, De Jesús, Orlando, and Hagan, Martin T.
- Subjects
BACK propagation ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,MACHINE learning ,HYBRID systems ,STOCHASTIC processes - Abstract
This paper gives a detailed analysis of the error surfaces of certain recurrent networks and explains some difficulties encountered in training recurrent networks. We show that these error surfaces contain many spurious valleys, and we analyze the mechanisms that cause the valleys to appear. We demonstrate that the principal mechanism can be understood through the analysis of the roots of random polynomials. This paper also provides suggestions for improvements in batch training procedures that can help avoid the difficulties caused by spurious valleys, thereby improving training speed and reliability. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
17. A New Recurrent Neural Network for Solving Convex Quadratic Programming Problems With an Application to the k-Winners-Take-All Problem.
- Author
-
Xiaolin Hu and Bo Zhang
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,QUADRATIC programming ,NONLINEAR programming ,NUMERICAL analysis ,LINEAR programming ,VECTOR analysis - Abstract
In this paper, a new recurrent neural network is proposed for solving convex quadratic programming (QP) problems. Compared with existing neural networks, the proposed one features global convergence property under weak conditions, low structural complexity, and no calculation of matrix inverse. It serves as a competitive alternative in the neural network family for solving linear or quadratic programming problems. In addition, it is found that by some variable substitution, the proposed network turns out to be an existing model for solving minimax problems. In this sense, it can be also viewed as a special case of the minimax neural network. Based on this scheme, a k-winners-take-all (k-WTA) network with O(n) complexity is designed, which is characterized by simple structure, global convergence, and capability to deal with some ill cases. Numerical simulations are provided to validate the theoretical results obtained. More importantly, the network design method proposed in this paper has great potential to inspire other competitive inventions along the same line. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
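As an aside to this entry, the input-output behavior a k-WTA network computes, independently of the QP formulation used to realize it, is simply the selection of the k largest inputs; a minimal sketch with made-up values:

```python
import numpy as np

def k_wta(u, k):
    """Return a binary vector marking the k largest entries of u.

    This is the target mapping of a k-WTA network, not the recurrent
    dynamics proposed in the paper.
    """
    out = np.zeros_like(u)
    out[np.argsort(u)[-k:]] = 1.0  # indices of the k largest inputs win
    return out

u = np.array([0.3, 0.9, 0.1, 0.7])
print(k_wta(u, 2))  # [0. 1. 0. 1.]
```

The paper's contribution is a recurrent network whose equilibrium realizes this mapping with O(n) complexity and global convergence, rather than the direct sort used in the sketch.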
18. Learning Without Human Expertise: A Case Study of the Double Dummy Bridge Problem.
- Author
-
Mossakowski, Krzysztof and Mańdziuk, Jacek
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,COMPUTER simulation ,MACHINE learning ,COMPUTATIONAL learning theory ,CONTRACT bridge techniques ,DUMMY play (Contract bridge) - Abstract
Artificial neural networks, trained only on sample deals, without presentation of any human knowledge or even the rules of the game, are used to estimate the number of tricks to be taken by one pair of bridge players in the so-called double dummy bridge problem (DDBP). Four representations of a deal in the input layer were tested, leading to significant differences in the achieved results. In order to test the networks' ability to extract knowledge from sample deals, experiments with additional inputs representing estimators of hand strength used by humans were also performed. The superior network trained solely on sample deals outperformed all other architectures, including those using explicit human knowledge of the game of bridge. Considering the suit contracts, this network, in a sample of 100,000 testing deals, output a perfect answer in 53.11% of the cases and was mistaken by more than one trick in only 3.52% of them. The respective figures for notrump contracts were 37.80% and 16.36%. The above results were compared with the ones obtained by 24 professional human bridge players, members of the Polish Bridge Union, on test sets of sizes between 27 and 864 deals per player (depending on the player's time availability). In the case of suit contracts, the perfect answer was obtained in 53.06% of the testing deals for the ten upper-classified players and in 48.66% of them for the remaining 14 participants of the experiment. For the notrump contracts, the respective figures were 73.68% and 60.78%. Besides checking the ability of neural networks to solve the DDBP, the other goal of this research was to analyze connection weights in trained networks in a quest for weight patterns that are explainable by experienced human bridge players. Quite surprisingly, several such patterns were discovered (e.g., preference for groups of honors, special attention paid to Aces, favoring cards from a trump suit, gradual importance of cards in one suit, from the two to the Ace, etc.). Both the numerical figures and the weight patterns are stable and repeatable in a sample of neural architectures (differing only in randomly chosen initial weights). In summary, the research described in this paper provides a detailed comparison between various data representations of the DDBP solved by neural networks. On a more general note, this approach can be extended to a certain class of binary classification problems. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
19. Robust Synchronization of an Array of Coupled Stochastic Discrete-Time Delayed Neural Networks.
- Author
-
Jinling Liang, Zidong Wang, Yurong Liu, and Xiaohui Liu
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,SELF-organizing maps ,KRONECKER products ,MATRICES (Mathematics) ,ROBUST control - Abstract
This paper is concerned with the robust synchronization problem for an array of coupled stochastic discrete-time neural networks with time-varying delay. Each individual neural network is subject to parameter uncertainty, stochastic disturbance, and time-varying delay, where the norm-bounded parameter uncertainties exist in both the state and weight matrices, the stochastic disturbance is in the form of a scalar Wiener process, and the time delay enters into the activation function. For the array of coupled neural networks, constant coupling and delayed coupling are considered simultaneously. We aim to establish easy-to-verify conditions under which the addressed neural networks are synchronized. By using the Kronecker product as an effective tool, a linear matrix inequality (LMI) approach is developed to derive several sufficient criteria ensuring that the coupled delayed neural networks are globally, robustly, exponentially synchronized in the mean square. The LMI-based conditions obtained are dependent not only on the lower bound but also on the upper bound of the time-varying delay, and can be solved efficiently via the Matlab LMI Toolbox. Two numerical examples are given to demonstrate the usefulness of the proposed synchronization scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
20. IMORL: Incremental Multiple-Object Recognition and Localization.
- Author
-
Haibo He and Sheng Chen
- Subjects
STOCHASTIC processes ,STREAMING technology ,ARTIFICIAL intelligence ,ARTIFICIAL neural networks ,DECISION making ,PROBLEM solving ,MATHEMATICAL models ,SEQUENTIAL analysis - Abstract
This paper proposes an incremental multiple-object recognition and localization (IMORL) method. The objective of IMORL is to adaptively learn multiple interesting objects in an image. Unlike the conventional multiple-object learning algorithms, the proposed method can automatically and adaptively learn from continuous video streams over the entire learning life. This kind of incremental learning capability enables the proposed approach to accumulate experience and use such knowledge to benefit future learning and the decision making process. Furthermore, IMORL can effectively handle variations in the number of instances in each data chunk over the learning life. Another important aspect analyzed in this paper is the concept drifting issue. In multiple-object learning scenarios, it is a common phenomenon that new interesting objects may be introduced during the learning life. To handle this situation, IMORL uses an adaptive learning principle to autonomously adjust to such new information. The proposed approach is independent of the base learning models, such as decision tree, neural networks, support vector machines, and others, which provide the flexibility of using this method as a general learning methodology in multiple-object learning scenarios. In this paper, we use a neural network with a multilayer perceptron (MLP) structure as the base learning model and test the performance of this method in various video stream data sets. Simulation results show the effectiveness of this method. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
21. A New Solution Path Algorithm in Support Vector Regression.
- Author
-
Gang Wang, Dit-Yan Yeung, and Lochovsky, Frederick H.
- Subjects
REGRESSION analysis ,COMPUTER algorithms ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,EVOLUTIONARY computation ,COMPUTATIONAL mathematics - Abstract
In previous work, regularization path algorithms were proposed as a novel approach to the model selection problem, exploring the path of possibly all solutions with respect to some regularization hyperparameter in an efficient way. This approach was later extended to a support vector regression (SVR) model called ε-SVR. However, the method requires that the error parameter ε be set a priori, which is only possible if the desired accuracy of the approximation can be specified in advance. In this paper, we analyze the solution space for ε-SVR and propose a new solution path algorithm, called the ε-path algorithm, which traces the solution path with respect to the hyperparameter ε rather than λ. Although both solution path algorithms possess the desirable piecewise linearity property, our ε-path algorithm overcomes some limitations of the original λ-path algorithm and has more advantages, making it more appealing for practical use. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
22. Robust State Estimation for Uncertain Neural Networks With Time-Varying Delay.
- Author
-
He Huang, Gang Feng, and Cao, Jinde
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,NEURAL circuitry ,MATRICES (Mathematics) ,EVOLUTIONARY computation ,MACHINE theory ,SELF-organizing systems ,COMPUTATIONAL intelligence - Abstract
The robust state estimation problem for a class of uncertain neural networks with time-varying delay is studied in this paper. The parameter uncertainties are assumed to be norm bounded. Based on a new bounding technique, a sufficient condition is presented to guarantee the existence of the desired state estimator for the uncertain delayed neural networks. The criterion is dependent on the size of the time-varying delay and on the size of its time derivative. It is shown that the design of the robust state estimator for such neural networks can be achieved by solving a linear matrix inequality (LMI), which can be easily facilitated by using some standard numerical packages. Finally, two simulation examples are given to demonstrate the effectiveness of the developed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
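The LMI formulation in entry 22 would in practice be handed to a semidefinite-programming solver. As a hand-rolled illustration of the underlying feasibility check, the classical Lyapunov-type condition AᵀP + PA < 0 can be tested for fixed 2x2 matrices with Sylvester's criterion (the matrices below are hypothetical, chosen only to make the check pass or fail):

```python
def is_negative_definite_2x2(q):
    """Sylvester's criterion applied to a symmetric 2x2 matrix Q:
    Q < 0 iff -q11 > 0 and det(Q) > 0."""
    (a, b), (c, d) = q
    return -a > 0 and a * d - b * c > 0

def lyapunov_lmi_holds(a_mat, p_mat):
    """Check the LMI A^T P + P A < 0 for given 2x2 matrices A and P."""
    n = 2
    at_p = [[sum(a_mat[k][i] * p_mat[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]
    p_a = [[sum(p_mat[i][k] * a_mat[k][j] for k in range(n))
            for j in range(n)] for i in range(n)]
    q = [[at_p[i][j] + p_a[i][j] for j in range(n)] for i in range(n)]
    return is_negative_definite_2x2(q)

A = [[-2.0, 1.0], [0.0, -3.0]]   # a Hurwitz matrix (hypothetical data)
P = [[1.0, 0.0], [0.0, 1.0]]     # candidate Lyapunov matrix
print(lyapunov_lmi_holds(A, P))
```

Real delay-dependent criteria like the paper's involve larger block matrices and free weighting terms; the point here is only that LMI feasibility reduces to a definiteness test.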
23. Neurodynamic Programming and Zero-Sum Games for Constrained Control Systems.
- Author
-
Abu-Khalaf, Murad, Lewis, Frank L., and Jie Huang
- Subjects
BENCHMARKING (Management) ,NEURAL circuitry ,ARTIFICIAL neural networks ,AUTOMATIC control systems ,NONLINEAR systems ,ARTIFICIAL intelligence ,STOCHASTIC convergence - Abstract
Abstract-In this paper, neural networks are used along with two-player policy iterations to solve for the feedback strategies of a continuous-time zero-sum game that appears in the L2-gain optimal control, or suboptimal H∞ control, of nonlinear systems affine in input with the control policy having saturation constraints. The result is a closed-form representation, on a prescribed compact set chosen a priori, of the feedback strategies and the value function that solves the associated Hamilton-Jacobi-Isaacs (HJI) equation. The closed-loop stability, L2-gain disturbance attenuation of the neural network saturated control feedback strategy, and uniform convergence results are proven. Finally, this approach is applied to the rotational/translational actuator (RTAC) nonlinear benchmark problem under actuator saturation, offering guaranteed stability and disturbance attenuation. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
24. Fault-Tolerant Indirect Adaptive Neurocontrol for a Static Synchronous Series Compensator in a Power Network With Missing Sensor Measurements.
- Author
-
Wei Qiao, Harley, Ronald G., and Venayagamoorthy, Ganesh Kumar
- Subjects
NONLINEAR systems ,SENSOR networks ,DETECTORS ,SYSTEMS theory ,COMPUTER systems ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,COMPUTER software - Abstract
Abstract-Identification and control of nonlinear systems depend on the availability and quality of sensor measurements. Measurements can be corrupted or interrupted due to sensor failure, broken or bad connections, bad communication, or malfunction of some hardware or software (referred to as missing sensor measurements in this paper). This paper proposes a novel fault-tolerant indirect adaptive neurocontroller (FTIANC) for controlling a static synchronous series compensator (SSSC), which is connected to a power network. The FTIANC consists of a sensor evaluation and (missing sensor) restoration scheme (SERS), a radial basis function neuroidentifier (RBFNI), and a radial basis function neurocontroller (RBFNC). The SERS provides a set of fault-tolerant measurements to the RBFNI and RBFNC. The resulting FTIANC is able to provide fault-tolerant effective control to the SSSC when some crucial time-varying sensor measurements are not available. Simulation studies are carried out on a single machine infinite bus (SMIB) as well as on the IEEE 10-machine 39-bus power system, for the SSSC equipped with conventional PI controllers (CONVC) and the FTIANC without any missing sensors, as well as for the FTIANC with multiple missing sensors. Results show that the transient performances of the proposed FTIANC with and without missing sensors are both superior to the CONVC used by the SSSC (without any missing sensors) over a wide range of system operating conditions. The proposed fault-tolerant control is readily applicable to other plant models in power systems. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
25. Just-in-Time Adaptive Classifiers--Part I: Detecting Nonstationary Changes.
- Author
-
Alippi, Cesare and Roveri, Manuel
- Subjects
ARTIFICIAL intelligence ,ARTIFICIAL neural networks ,COMPUTATIONAL intelligence ,DEVELOPMENTAL biology ,CLASSIFICATION ,COMPUTER science - Abstract
Abstract-The stationarity requirement for the process generating the data is a common assumption in classifier design. When such a hypothesis does not hold, e.g., in applications affected by aging effects, drifts, deviations, and faults, classifiers must react just in time, i.e., exactly when needed, to track the process evolution. The first step in designing effective just-in-time classifiers requires detection of the temporal instant associated with the process change, and the second requires an update of the knowledge base used by the classification system to track the process evolution. This paper addresses the change detection aspect, leaving the design of just-in-time adaptive classification systems to a companion paper. Two completely automatic tests for detecting nonstationarity phenomena are suggested, which neither require a priori information nor assumptions about the process generating the data. In particular, an effective computational intelligence-inspired test is provided to deal with multidimensional situations, a scenario where traditional change detection methods are generally not applicable or scarcely effective. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
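Entry 25's tests are computational-intelligence-inspired and assumption-free; by contrast, the classical baseline for the change-detection step it describes is a one-sided CUSUM detector, sketched below (this is not the paper's method; the drift/threshold parameters and the data stream are illustrative):

```python
def cusum_detect(stream, mean0, drift=0.5, threshold=5.0):
    """One-sided CUSUM: flag the first index at which the cumulative
    deviation above mean0 (minus a drift allowance) exceeds threshold."""
    s = 0.0
    for i, x in enumerate(stream):
        s = max(0.0, s + (x - mean0) - drift)
        if s > threshold:
            return i
    return None

data = [0.1, -0.2, 0.0, 0.3, 5.0, 5.2, 4.8, 5.1]  # mean shift at index 4
print(cusum_detect(data, mean0=0.0))
```

CUSUM needs a nominal mean and tuned parameters, which is exactly the a priori knowledge the paper's automatic tests aim to avoid.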
26. Evaluation of the Traffic Parameters in a Metropolitan Area by Fusing Visual Perceptions and CNN Processing of Webcam Images.
- Author
-
Faro, Alberto, Giordano, Daniela, and Spampinato, Concetto
- Subjects
ARTIFICIAL neural networks ,ELECTRONIC data processing ,EMBEDDED computer systems ,COMPUTER architecture ,COMMUNICATIONS industries ,INFORMATION resources management ,INTEGRATED circuits ,ARTIFICIAL intelligence ,SELF-organizing maps - Abstract
This paper proposes a traffic monitoring architecture based on a high-speed communication network whose nodes are equipped with fuzzy processors and cellular neural network (CNN) embedded systems. It implements a real-time mobility information system where visual human perceptions sent by people working on the territory and video-sequences of traffic taken from webcams are jointly processed to evaluate the fundamental traffic parameters for every street of a metropolitan area. This paper presents the whole methodology for data collection and analysis and compares the accuracy and the processing time of the proposed soft computing techniques with other existing algorithms. Moreover, this paper discusses when and why it is recommended to fuse the visual perceptions of the traffic with the automated measurements taken from the webcams to compute the maximum traveling time that is likely needed to reach any destination in the traffic network. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
27. Absolute Exponential Stability of Recurrent Neural Networks With Generalized Activation Function.
- Author
-
Jun Xu, Yong-Yan Cao, Youxian Sun, and Jinshan Tang
- Subjects
ARTIFICIAL neural networks ,NEURAL circuitry ,ARTIFICIAL intelligence ,SELF-organizing systems ,EVOLUTIONARY computation ,MATRICES (Mathematics) ,EQUILIBRIUM - Abstract
In this paper, a recurrent neural network (RNN) model with a generalized activation function class is proposed. In this model, every component of the neuron's activation function belongs to a convex hull which is bounded by two odd symmetric piecewise linear functions that are convex or concave over the real space. All of these convex hulls compose the generalized activation function class. This novel activation function class not only gives a more flexible and more specific description of the activation functions than other function classes but also generalizes some traditional activation function classes. The absolute exponential stability (AEST) of the RNN with the generalized activation function class is studied in three steps. The first step is to demonstrate that the global exponential stability (GES) of the equilibrium point of the original RNN with a generalized activation function is equivalent to that of the RNN under all vertex functions of the convex hull. The second step transforms the RNN under every vertex activation function into neural networks under an array of saturated linear activation functions. Because the GES of the equilibrium points of the three systems is equivalent, the stability analysis then focuses on the GES of the equilibrium point of the RNN system under an array of saturated linear activation functions. The last step is to study both the existence of the equilibrium point and the GES of the RNN under saturated linear activation functions using the theory of M-matrices. In the end, a two-neuron RNN with a generalized activation function is constructed to show the effectiveness of our results. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
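The M-matrix theory invoked in entry 27 (and in entry 2 of this list) is mechanically checkable: a Z-matrix (nonpositive off-diagonal entries) is a nonsingular M-matrix iff all its leading principal minors are positive. A brute-force sketch for small matrices, with hypothetical example data:

```python
def det(m):
    """Determinant by Laplace expansion (fine for the small matrices here)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def is_nonsingular_m_matrix(m):
    """Z-matrix test plus positivity of all leading principal minors."""
    n = len(m)
    if any(m[i][j] > 0 for i in range(n) for j in range(n) if i != j):
        return False  # not a Z-matrix
    return all(det([row[:k] for row in m[:k]]) > 0 for k in range(1, n + 1))

M = [[2.0, -1.0], [-1.0, 2.0]]
print(is_nonsingular_m_matrix(M))
```

In stability criteria of this kind, the tested matrix is typically built from the interconnection weights and Lipschitz constants of the activations; the check itself is as simple as above.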
28. The Greatest Allowed Relative Error in Weights and Threshold of Strict Separating Systems.
- Author
-
Freixas, Josep and Molinero, Xavier
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,EVOLUTIONARY computation ,PERTURBATION theory ,APPROXIMATION theory ,COMPUTATIONAL mathematics - Abstract
An important consideration when applying neural networks is the sensitivity to the weights and threshold of strict separating systems representing a linearly separable function. Perturbations may affect the weights and threshold, so it is important to estimate the maximal percentage error in weights and threshold that may be allowed without altering the linearly separable function. In this paper, we provide the greatest allowed bound that can be associated with every strict separating system representing a linearly separable function. The proposed bound improves on the tolerance obtained by Hu. Furthermore, it is the greatest bound for any strict separating system. This is the reason why we call it the greatest tolerance. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
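A strict separating system in entry 28's sense is a weight vector and threshold realizing a linearly separable Boolean function. The paper's closed-form tolerance bound is not reproduced here, but the objects involved can be sketched by brute force: verify that (w, t) realizes a given truth table, and compute the minimum margin over all binary inputs, a rough indicator of how much perturbation is safe (the AND example and its numbers are illustrative):

```python
from itertools import product

def separates(weights, threshold, truth_table):
    """Check that (w . x > t) reproduces the given Boolean function
    on all binary inputs, i.e., (w, t) is a separating system for it."""
    n = len(weights)
    for x in product((0, 1), repeat=n):
        value = sum(w * xi for w, xi in zip(weights, x))
        if (value > threshold) != truth_table[x]:
            return False
    return True

def min_margin(weights, threshold):
    """Smallest |w . x - t| over all binary inputs: a crude, brute-force
    indicator of perturbation tolerance (not the paper's exact bound)."""
    n = len(weights)
    return min(abs(sum(w * xi for w, xi in zip(weights, x)) - threshold)
               for x in product((0, 1), repeat=n))

# AND of two inputs as a strict separating system (hypothetical numbers).
table = {(0, 0): False, (0, 1): False, (1, 0): False, (1, 1): True}
print(separates([1.0, 1.0], 1.5, table), min_margin([1.0, 1.0], 1.5))
```

For nonseparable functions such as XOR, `separates` fails for every (w, t), which is exactly why no tolerance bound applies there.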
29. Preliminary Study on Wilcoxon Learning Machines.
- Author
-
Jer-Guang Hsieh, Yih-Lon Lin, and Jyh-Horng Jeng
- Subjects
MACHINE learning ,OUTLIERS (Statistics) ,ARTIFICIAL intelligence ,DATA editing ,REGRESSION analysis ,MULTIVARIATE analysis ,MATHEMATICAL statistics ,STATISTICAL sampling ,ARTIFICIAL neural networks - Abstract
As is well known in statistics, linear regressors obtained by applying the rank-based Wilcoxon approach to linear regression problems are usually robust against (or insensitive to) outliers. This motivates us to introduce in this paper the Wilcoxon approach to the area of machine learning. Specifically, we investigate four new learning machines, namely the Wilcoxon neural network (WNN), Wilcoxon generalized radial basis function network (WGRBFN), Wilcoxon fuzzy neural network (WFNN), and kernel-based Wilcoxon regressor (KWR). These provide alternative learning machines when faced with general nonlinear learning problems. Simple weight-updating rules based on gradient descent are derived. Some numerical examples are provided to compare the robustness against outliers of the various learning machines. Simulation results show that the Wilcoxon learning machines proposed in this paper have good robustness against outliers. We firmly believe that the Wilcoxon approach will provide a promising methodology for many machine learning problems. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
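The robustness claimed in entry 29 comes from replacing squared error with a rank-based criterion. A standard instance is Jaeckel's dispersion with Wilcoxon scores a(i) = √12·(i/(n+1) − 1/2): residuals are weighted by a bounded function of their rank, so a gross outlier contributes only linearly. A sketch (residual values are made up; the paper's machines minimize criteria of this flavor, not this exact snippet):

```python
import math

def wilcoxon_dispersion(residuals):
    """Jaeckel's rank-based dispersion: residuals weighted by the
    Wilcoxon scores a(i) = sqrt(12) * (i/(n+1) - 1/2) of their ranks."""
    n = len(residuals)
    order = sorted(range(n), key=lambda i: residuals[i])
    d = 0.0
    for rank, idx in enumerate(order, start=1):
        score = math.sqrt(12.0) * (rank / (n + 1) - 0.5)
        d += score * residuals[idx]
    return d

clean = [-1.0, -0.5, 0.0, 0.5, 1.0]
dirty = [-1.0, -0.5, 0.0, 0.5, 100.0]   # one gross outlier

# The rank-based criterion grows far more slowly under the outlier than a
# squared-error criterion would: the score is bounded, the residual is not.
print(wilcoxon_dispersion(clean), wilcoxon_dispersion(dirty))
```

Compare: the sum of squared residuals for `dirty` exceeds 10000, while the rank dispersion stays near 117, which is the insensitivity to outliers the abstract refers to.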
30. Convergence of Nonautonomous Cohen-Grossberg-Type Neural Networks With Variable Delays.
- Author
-
Zhaohui Yuan, Lihong Huang, Dewen Hu, and Bingwen Liu
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,COMPUTATIONAL intelligence ,STOCHASTIC convergence ,ELECTRONIC data processing ,EQUILIBRIUM - Abstract
This paper is concerned with the global convergence of the solutions of a nonautonomous system with variable delays, arising from the description of the states of neurons of delayed Cohen-Grossberg type in a time-varying situation. By exploring intrinsic features shared by the nonautonomous system and its asymptotic equation, several novel sufficient conditions are established to ensure that all solutions of the networks converge to a periodic function or a constant vector for delayed Cohen-Grossberg-type neural network (NN) models in a time-varying situation. The results can be applied directly to a group of NN models including Hopfield NNs, bidirectional associative memory NNs, and cellular NNs. Our results are not only presented in terms of system parameters and easily verified, but are also less restrictive than previously known criteria. Numerical simulations are also presented to demonstrate the theoretical analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
31. Multiperiodicity and Attractivity of Delayed Recurrent Neural Networks With Unsaturating Piecewise Linear Transfer Functions.
- Author
-
Lei Zhang, Zhang Yi, and Jiali Yu
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,COMPUTATIONAL intelligence ,DIFFERENTIABLE dynamical systems ,PIECEWISE linear topology ,DIFFERENTIAL equations ,ELECTRONIC data processing - Abstract
This paper studies multiperiodicity and attractivity for a class of recurrent neural networks (RNNs) with unsaturating piecewise linear transfer functions and variable delays. Using local inhibition, conditions for boundedness and global attractivity are established. These conditions allow coexistence of stable and unstable trajectories. Moreover, multiperiodicity of the network is investigated by using local invariant sets. It is shown that, under some interesting conditions, there exists one periodic trajectory in each invariant set which exponentially attracts all trajectories in that region. Simulations are carried out to illustrate the theory. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
32. Hybrid Neurogenetic Approach for Stock Forecasting.
- Author
-
Yung-Keun Kwon and Byung-Ro Moon
- Subjects
ARTIFICIAL neural networks ,GENETIC algorithms ,STOCKS (Finance) ,ARTIFICIAL intelligence ,REGRESSION analysis ,DISCRIMINANT analysis - Abstract
In this paper, we propose a hybrid neurogenetic system for stock trading. A recurrent neural network (NN) having one hidden layer is used for the prediction model. The input features are generated from a number of technical indicators being used by financial experts. The genetic algorithm (GA) optimizes the NN's weights under a 2-D encoding and crossover. We devised a context-based ensemble method of NNs which dynamically changes on the basis of the test day's context. To reduce the time in processing mass data, we parallelized the GA on a Linux cluster system using message passing interface. We tested the proposed method with 36 companies in NYSE and NASDAQ for 13 years from 1992 to 2004. The neurogenetic hybrid showed notable improvement on the average over the buy-and-hold strategy and the context-based ensemble further improved the results. We also observed that some companies were more predictable than others, which implies that the proposed neurogenetic hybrid can be used for financial portfolio construction. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
33. A Method of Face Recognition Based on Fuzzy c-Means Clustering and Associated Sub-NNs.
- Author
-
Jianming Lu, Xue Yuan, and Yahagi, Takashi
- Subjects
HUMAN facial recognition software ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,FUZZY mathematics ,COMPUTER science - Abstract
The face is a complex multidimensional visual model, and developing a computational model for face recognition is difficult. In this paper, we present a method for face recognition based on parallel neural networks. Neural networks (NNs) have been widely used in various fields. However, computing efficiency decreases rapidly as the scale of an NN increases. In this paper, a new method of face recognition based on fuzzy clustering and parallel NNs is proposed. The face patterns are divided among several small-scale neural networks based on fuzzy clustering, and the networks are combined to obtain the recognition result. In particular, the proposed method achieved a 98.75% recognition accuracy for 240 patterns of 20 registrants and a 99.58% rejection rate for 240 patterns of 20 nonregistrants. Experimental results show that the performance of our new face-recognition method is better than those of the backpropagation NN (BPNN) system, the hard c-means (HCM) and parallel NNs system, and the pattern-matching system. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
34. Backpropagation Algorithms for a Broad Class of Dynamic Networks.
- Author
-
De Jesús, Orlando and Hagan, Martin T.
- Subjects
BACK propagation ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,ALGORITHMS ,COMPUTER science - Abstract
This paper introduces a general framework for describing dynamic neural networks—the layered digital dynamic network (LDDN). This framework allows the development of two general algorithms for computing the gradients and Jacobians for these dynamic networks: backpropagation-through-time (BPTT) and real-time recurrent learning (RTRL). The structure of the LDDN framework enables an efficient implementation of both algorithms for arbitrary dynamic networks. This paper demonstrates that the BPTT algorithm is more efficient for gradient calculations, but the RTRL algorithm is more efficient for Jacobian calculations. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
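The BPTT versus RTRL comparison in entry 34 concerns how gradients of a recurrent system are accumulated. For a scalar recurrence h_t = w·h_{t-1} + x_t with loss L = h_T, BPTT runs the adjoint backward through time; the result can be checked against a finite difference. A toy sketch, far simpler than the paper's LDDN framework (the inputs and weight are made up):

```python
def forward(w, xs):
    """Run the recurrence h_t = w * h_{t-1} + x_t from h_0 = 0."""
    h = 0.0
    hs = [h]
    for x in xs:
        h = w * h + x
        hs.append(h)
    return hs  # hs[t] is the state after t inputs

def bptt_grad(w, xs):
    """Backpropagation through time for loss L = h_T: accumulate
    dL/dw = sum_t (dL/dh_t) * h_{t-1}, propagating the adjoint backward."""
    hs = forward(w, xs)
    grad, adjoint = 0.0, 1.0          # adjoint = dL/dh_t, starting at t = T
    for t in range(len(xs), 0, -1):
        grad += adjoint * hs[t - 1]   # local contribution dh_t/dw = h_{t-1}
        adjoint *= w                  # step the adjoint one step back in time
    return grad

w, xs, eps = 0.9, [1.0, -0.5, 2.0], 1e-6
numeric = (forward(w + eps, xs)[-1] - forward(w - eps, xs)[-1]) / (2 * eps)
print(abs(bptt_grad(w, xs) - numeric) < 1e-4)
```

RTRL would instead carry dh_t/dw forward alongside h_t; both give the same gradient, with the cost trade-offs the paper quantifies.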
35. Associative Learning in Hierarchical Self-Organizing Learning Arrays.
- Author
-
Starzyk, Janusz A., Zhen Zhu, and Yue Li
- Subjects
PAIRED associate learning ,PATTERN perception ,FORM perception ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,MACHINE learning - Abstract
In this paper, we introduce feedback-based associative learning in self-organized learning arrays (SOLAR). SOLAR structures are hierarchically organized networks of sparsely connected neurons that define their own functions and select their interconnections locally. This paper provides a description of neuron self-organization and signal processing. Feedforward processing is used to make necessary correlations and learn the input patterns. Discovered associations between neuron inputs are used to generate feedback signals. These feedback signals, when propagated to the primary inputs, can establish the expected input values. This can be used for heteroassociative (HA) and autoassociative (AA) learning and pattern recognition. Example applications in HA learning are given. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
36. Global Exponential Stability and Global Convergence in Finite Time of Delayed Neural Networks With Infinite Gain.
- Author
-
Forti, Mauro, Nistri, Paolo, and Papini, Duccio
- Subjects
NEURAL circuitry ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,STOCHASTIC convergence ,DISTRIBUTION (Probability theory) ,EQUILIBRIUM - Abstract
This paper introduces a general class of neural networks with arbitrary constant delays in the neuron interconnections, and neuron activations belonging to the set of discontinuous monotone increasing and (possibly) unbounded functions. The discontinuities in the activations are an ideal model of the situation where the gain of the neuron amplifiers is very high and tends to infinity, while the delay accounts for the finite switching speed of the neuron amplifiers, or the finite signal propagation speed. It is known that the delay in combination with high-gain nonlinearities is a particularly harmful source of potential instability. The goal of this paper is to single out a subclass of the considered discontinuous neural networks for which stability is instead insensitive to the presence of a delay. More precisely, conditions are given under which there is a unique equilibrium point of the neural network, which is globally exponentially stable for the states, with a known convergence rate. The conditions are easily testable and independent of the delay. Moreover, global convergence in finite time of the state and output is investigated. In doing so, new interesting dynamical phenomena are highlighted with respect to the case without delay, which make the study of convergence in finite time significantly more difficult. The obtained results extend previous work on global stability of delayed neural networks with Lipschitz continuous neuron activations, and neural networks with discontinuous neuron activations but without delays. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
37. Image Shadow Removal Using Pulse Coupled Neural Network.
- Author
-
Xiaodong Gu, Daoheng Yu, and Liming Zhang
- Subjects
ARTIFICIAL neural networks ,COMPUTER simulation ,ARTIFICIAL intelligence ,SHADES & shadows ,SIMULATION methods & models ,NOISE - Abstract
This paper introduces an approach for image shadow removal using a pulse coupled neural network (PCNN), based on the phenomenon of synchronous pulse bursts in animal visual cortexes. Two shadow-removing criteria are proposed. These two criteria decide how to choose the optimal parameter (the linking strength β). The computer simulation results of shadow removal based on the PCNN show that if these two criteria are satisfied, shadows are removed completely and the shadow-removed images are almost the same as the original nonshadowed images. The shadow removal results are independent of changes in the intensities of shadows within some range and of variations in the locations of shadows. When the first criterion is satisfied, even if the second criterion is not, for natural grey images with abundant grey levels shadows can also be removed, and the PCNN shadow-removed images retain the shapes of the objects in the original images. These two criteria can also be used for color images by dividing a color image into three channels (R, G, B). For shadows varying drastically, such as noisy points in images, these two criteria still hold but are difficult to satisfy. Therefore, this approach can efficiently remove shadows that do not include random noise. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
38. A Generalized Growing and Pruning RBF (GGAP-RBF) Neural Network for Function Approximation.
- Author
-
Huang, Guang-Bin, Saratchandran, P., and Sundararajan, Narasimhan
- Subjects
RADIAL basis functions ,ALGORITHMS ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,COMPUTER science ,APPROXIMATION theory - Abstract
This paper presents a new sequential learning algorithm for radial basis function (RBF) networks referred to as the generalized growing and pruning algorithm for RBF (GGAP-RBF). The paper first introduces the concept of significance for the hidden neurons and then uses it in the learning algorithm to realize parsimonious networks. The growing and pruning strategy of GGAP-RBF is based on linking the required learning accuracy with the significance of the nearest or intentionally added new neuron. Significance of a neuron is a measure of the average information content of that neuron. The GGAP-RBF algorithm can be used for any arbitrary sampling density of the training samples and is derived from a rigorous statistical point of view. Simulation results for benchmark problems in the function approximation area show that GGAP-RBF outperforms several other sequential learning algorithms in terms of learning speed, network size, and generalization performance regardless of the sampling density function of the training data. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
39. Identification and Control of Dynamical Systems Using the Self-Organizing Map.
- Author
-
Barreto, Guilherme A. and Araújo, Aluizio F.R.
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,AUTOMATIC control systems ,CONTROL theory (Engineering) ,SYSTEM analysis ,INFORMATION theory - Abstract
In this paper, we introduce a general modeling technique, called vector-quantized temporal associative memory (VQTAM), which uses Kohonen's self-organizing map (SOM) as an alternative to multilayer perceptron (MLP) and radial basis function (RBF) neural models for dynamical system identification and control. We demonstrate that the estimation errors decrease as the SOM training proceeds, allowing the VQTAM scheme to be understood as a self-supervised gradient-based error reduction method. The performance of the proposed approach is evaluated on a variety of complex tasks, namely: i) time series prediction; ii) identification of SISO/MIMO systems; and iii) nonlinear predictive control. For all tasks, the simulation results produced by the SOM are as accurate as those produced by the MLP network, and better than those produced by the RBF network. The SOM has also been shown to be less sensitive to weight initialization than MLP networks. We conclude the paper by discussing the main properties of the VQTAM and their relationships to other well-established methods for dynamical system identification. We also suggest directions for further work. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
40. A Transient-Chaotic Autoassociative Network (TCAN) Based on Lee Oscillators.
- Author
-
Lee, Raymond S.T.
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,DIGITAL computer simulation ,ELECTRONIC data processing ,SIMULATION methods & models ,COMPUTER software - Abstract
In the past few decades, neural networks have been extensively adopted in various applications ranging from simple synaptic memory coding to sophisticated pattern recognition problems such as scene analysis. Moreover, current studies in neuroscience and physiology have reported that in a typical scene segmentation problem our major senses of perception (e.g., vision, olfaction, etc.) are highly involved in temporal (or what we call "transient") nonlinear neural dynamics and oscillations. This paper is an extension of the author's previous work on the dynamic neural model (EGDLM) of memory processing and on composite neural oscillators for scene segmentation. Moreover, it is inspired by the work of Aihara et al. and Wang on chaotic neural oscillators in pattern association. In this paper, the author proposes a new transient chaotic neural oscillator, namely the "Lee oscillator," to provide temporal neural coding and an information processing scheme. To illustrate its capability for memory association, a chaotic autoassociative network, namely the Transient-Chaotic Autoassociative Network (TCAN), was constructed based on the Lee oscillator. Different from classical autoassociators such as the celebrated Hopfield network, which provides "time-independent" pattern association, the TCAN provides a remarkable progressive memory association scheme [what we call "progressive memory recalling" (PMR)] during the transient chaotic memory association. This is exactly consistent with the latest research in psychiatry and perception psychology on dynamic memory recalling schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
41. Robust and Adaptive Backstepping Control for Nonlinear Systems Using RBF Neural Networks.
- Author
-
Yahui Li, Sheng Qiang, Xianyi Zhuang, and Kaynak, Okyay
- Subjects
ARTIFICIAL neural networks ,SYSTEMS theory ,NONLINEAR systems ,ADAPTIVE control systems ,ARTIFICIAL intelligence ,SIMULATION methods & models - Abstract
In this paper, two different backstepping neural network (NN) control approaches are presented for a class of affine nonlinear systems in the strict-feedback form with unknown nonlinearities. By a special design scheme, the controller singularity problem is avoided perfectly in both approaches. Furthermore, the closed loop signals are guaranteed to be semiglobally uniformly ultimately bounded and the outputs of the system are proved to converge to a small neighborhood of the desired trajectory. The control performances of the closed-loop systems can be shaped as desired by suitably choosing the design parameters. Simulation results obtained demonstrate the effectiveness of the approaches proposed. The differences observed between the inputs of the two controllers are analyzed briefly. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
42. Combining Expert Neural Networks Using Reinforcement Feedback for Learning Primitive Grasping Behavior.
- Author
-
Moussa, Medhat A.
- Subjects
ARCHITECTURE ,MODULES (Algebra) ,DYNAMICS ,ARTIFICIAL neural networks ,ROBOTS ,DATABASES ,ARTIFICIAL intelligence - Abstract
This paper presents an architecture for combining a mixture of experts. The architecture has two unique features: 1) it assumes no prior knowledge of the size or structure of the mixture and allows the number of experts to dynamically expand during training, and 2) reinforcement feedback is used to guide the combining/expansion operation. The architecture is particularly suitable for applications where there is a need to approximate a many-to-many mapping. An example of such a problem is the task of training a robot to grasp arbitrarily shaped objects. This task requires the approximation of a many-to-many mapping, since various configurations can be used to grasp an object, and several objects can share the same grasping configuration. Experiments in a simulated environment using a 28-object database showed how the algorithm dynamically combined and expanded a mixture of neural networks to achieve the learning task. The paper also presents a comparison with two other nonlearning approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
43. A Neural Network for a Class of Convex Quadratic Minimax Problems With Constraints.
- Author
-
Xing-Bao Gao, Li-Zhi Liao, and Weimin Xue
- Subjects
STOCHASTIC convergence ,MATHEMATICAL functions ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,METHOD of steepest descent (Numerical analysis) ,ASYMPTOTIC expansions - Abstract
In this paper, we propose a neural network for solving a class of convex quadratic minimax problems with constraints. Four sufficient conditions are provided to ensure the asymptotic stability of the proposed network. Furthermore, the exponential stability of the proposed network is also proved under certain conditions. The results obtained here can be further extended to the globally projected dynamical system. In addition, some new stability conditions for that system are also obtained. Since our stability conditions can be easily checked in practice, these results become more attractive in real applications. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
44. Generalized Regression Neural Networks in Time-Varying Environment.
- Author
-
Rutkowski, Leszek
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,REGRESSION analysis ,MATHEMATICAL statistics ,ORTHOGONAL functions ,KERNEL functions - Abstract
The current state of knowledge regarding nonstationary processes is significantly poorer than in the case of stationary signals. In many applications, signals are treated as stationary only because this makes them easier to analyze; in fact, they are nonstationary. Nonstationary processes are undoubtedly more difficult to analyze, and their diversity makes the application of universal tools impossible. In this paper we propose a new class of generalized regression neural networks working in a nonstationary environment. The generalized regression neural networks (GRNNs) studied in this paper are able to follow changes of the best model, i.e., time-varying regression functions. The novelty is summarized as follows: 1) We present adaptive GRNNs tracking time-varying regression functions. 2) We prove convergence of the GRNN based on general learning theorems presented in Section IV. 3) We design in detail special GRNNs based on the Parzen and orthogonal series kernels. In each case we give precise conditions ensuring convergence of the GRNN to the best model described by the regression function. 4) We investigate the speed of convergence of the GRNN and compare the performance of specific structures based on the Parzen kernel and the orthogonal series kernel. 5) We study various nonstationarities (multiplicative, additive, "scale change," "movable argument") and design in each case the GRNN based on the Parzen kernel and the orthogonal series kernel. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
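The Parzen-kernel GRNN in the entry above estimates a regression function as a kernel-weighted average of training targets. A minimal static sketch (bandwidth, data, and function names are illustrative; the paper's networks additionally adapt to time-varying regression functions):

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, bandwidth=0.5):
    """Regression estimate using a Gaussian Parzen kernel: the prediction
    is a kernel-weighted average of the training targets."""
    d2 = (x_train - x_query) ** 2
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))   # kernel weight per sample
    return np.sum(w * y_train) / np.sum(w)

# Toy data from the noise-free regression function y = 2x
x = np.linspace(0.0, 1.0, 11)
y = 2.0 * x
print(grnn_predict(x, y, 0.5))   # close to 1.0
```

In the nonstationary setting of the paper, the bandwidth and the stored samples would additionally vary with time so that the estimate tracks the changing regression function.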
45. On Encoding and Enumerating Threshold Functions.
- Author
-
Žunić, Joviša
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,EVOLUTIONARY computation ,MATHEMATICAL functions ,SELF-organizing maps ,COMPUTER software - Abstract
In this paper, we deal with encoding and enumerating threshold functions defined on n-dimensional binary inputs. The paper specifies situations in which the unique characterization of functions from a given class is preserved by the use of an appropriate set of discrete moments. Moreover, such a characterization (coding) is sometimes optimal with respect to the number of bits required per coded function. By estimating the number of possible values of the discrete moments used, several upper bounds (for different classes of threshold functions) are derived, some of which are better than those previously known. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
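A threshold function on binary inputs is one computable as [w·x ≥ t] for some real weights w and threshold t. The sketch below brute-forces this definition for n = 2 to recover the classical count of 14 threshold functions among the 16 Boolean functions of two variables; it illustrates the objects being enumerated, not the paper's discrete-moment encoding:

```python
from itertools import product

def is_threshold(truth, grid=range(-4, 5)):
    """Brute-force check whether a Boolean function on {0,1}^2, given as a
    4-tuple of outputs, equals [w1*x1 + w2*x2 >= t] for some integer
    weights and threshold from the grid (sufficient for n = 2)."""
    inputs = list(product([0, 1], repeat=2))
    for w1, w2, t in product(grid, grid, grid):
        if all((w1 * x1 + w2 * x2 >= t) == bool(fx)
               for (x1, x2), fx in zip(inputs, truth)):
            return True
    return False

# Enumerate all 2^4 = 16 Boolean functions of two binary inputs
count = sum(is_threshold(tuple(int(b) for b in format(k, "04b")))
            for k in range(16))
print(count)  # 14 (only XOR and XNOR are not threshold functions)
```

For larger n such enumeration is infeasible, which is exactly why compact characterizations such as the discrete moments studied in the paper matter.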
46. Robust Adaptive Neural Network Control for a Class of Uncertain MIMO Nonlinear Systems With Input Nonlinearities.
- Author
-
Mou Chen, Shuzhi Sam Ge, and Voon Ee How, Bernard
- Subjects
NONLINEAR systems ,MIMO systems ,ARTIFICIAL neural networks ,SYSTEMS theory ,ARTIFICIAL intelligence ,LYAPUNOV functions - Abstract
In this paper, robust adaptive neural network (NN) control is investigated for a general class of uncertain multiple-input-multiple-output (MIMO) nonlinear systems with unknown control coefficient matrices and input nonlinearities. For nonsymmetric input nonlinearities of saturation and deadzone, variable structure control (VSC) in combination with backstepping and Lyapunov synthesis is proposed for adaptive NN control design with guaranteed stability. In the proposed adaptive NN control, the usual assumption on the nonsingularity of the NN approximation for the unknown control coefficient matrices and the boundedness assumption between the NN approximation error and the control input are eliminated. Command filters are presented to implement physical constraints on the virtual control laws, thereby avoiding the tedious analytic computation of the time derivatives of the virtual control laws. It is proved that the proposed robust backstepping control guarantees semiglobal uniform ultimate boundedness of all signals in the closed-loop system. Finally, simulation results are presented to illustrate the effectiveness of the proposed adaptive NN control. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
47. Learning Nonsparse Kernels by Self-Organizing Maps for Structured Data.
- Author
-
Aiolli, Fabio, Martino, Giovanni Da San, Hagenbuchner, Markus, and Sperduti, Alessandro
- Subjects
KERNEL functions ,SELF-organizing maps ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,COMPUTER science - Abstract
The development of neural network (NN) models able to encode structured input, and the more recent definition of kernels for structures, make it possible to directly apply machine learning approaches to generic structured data. However, the effectiveness of a kernel can depend on its sparsity with respect to a specific data set. In fact, the accuracy of a kernel method typically degrades as the kernel sparsity increases. The sparsity problem is particularly common in structured domains involving discrete variables that may take on many different values. In this paper, we explore this issue on two well-known kernels for trees, and propose to address it by resorting to self-organizing maps (SOMs) for structures. Specifically, we show that a suitable combination of the two approaches, obtained by defining a new class of kernels based on the activation map of a SOM for structures, can be effective in avoiding the sparsity problem and results in a system that can be significantly more accurate for categorization tasks on structured data. The effectiveness of the proposed approach is demonstrated experimentally on two relatively large corpora of XML formatted data and a data set of user sessions extracted from website logs. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
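The core idea in the entry above, a kernel defined as the inner product of SOM activation maps, can be sketched for plain vectors (the paper uses SOMs for tree-structured data; the codebook, decay constant, and function names below are illustrative, and the map is assumed pre-trained):

```python
import numpy as np

def som_activation(x, codebook, tau=1.0):
    """Soft activation map of a (pre-trained) SOM: one nonzero value per
    map unit, decaying with distance to the unit's codebook vector."""
    d2 = np.sum((codebook - x) ** 2, axis=1)
    return np.exp(-d2 / tau)

def activation_kernel(x1, x2, codebook):
    """Kernel as the inner product of two activation maps: positive
    semidefinite by construction, and dense because every unit responds."""
    return float(som_activation(x1, codebook) @ som_activation(x2, codebook))

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 3))     # toy 16-unit map in 3-D input space
x1, x2 = rng.normal(size=3), rng.normal(size=3)
print(activation_kernel(x1, x2, codebook) > 0.0)   # always a nonzero similarity
```

Because every map unit contributes a strictly positive activation, two inputs never have zero similarity, which is how this construction sidesteps the sparsity problem described in the abstract.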
48. Fault Detection and Diagnosis Based on Modeling and Estimation Methods.
- Author
-
Huang, Sunan and Kok Kiong Tan
- Subjects
NONLINEAR systems ,SYSTEMS theory ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,FAULT-tolerant computing ,INTEGRATED circuit fault tolerance ,RADIAL basis functions - Abstract
This paper investigates the problem of fault detection and diagnosis in a class of nonlinear systems with modeling uncertainties. A nonlinear observer is first designed for monitoring faults. A radial basis function (RBF) neural network is used in this observer to approximate the unknown nonlinear dynamics. When a fault occurs, another RBF network is triggered to capture the nonlinear characteristics of the fault function. The fault model obtained by the second neural network (NN) can be used to identify the failure mode by comparing it with any known failure modes. Finally, a simulation example is presented to illustrate the effectiveness of the proposed scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
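The observer in the entry above relies on an RBF network approximating unknown nonlinear dynamics. A minimal batch-fit sketch (the paper adapts the weights online inside the observer; the dynamics, centers, and width here are illustrative assumptions):

```python
import numpy as np

def rbf_features(x, centers, width=0.3):
    """Gaussian radial basis activations: one column per center, used to
    approximate an unknown scalar nonlinearity f(x)."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Illustrative unknown nonlinearity the network must approximate
x = np.linspace(-1.0, 1.0, 50)
f = np.sin(np.pi * x)

centers = np.linspace(-1.0, 1.0, 9)
Phi = rbf_features(x, centers)
# Batch least-squares weights (shown in place of the paper's online adaptation)
w, *_ = np.linalg.lstsq(Phi, f, rcond=None)
err = np.max(np.abs(Phi @ w - f))
print(err)   # small approximation residual
```

The residual `err` plays the role of the NN approximation error; in the fault-diagnosis scheme it is the departure of the observed dynamics from such a fitted model that signals a fault.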
49. Spatio--Temporal Memories for Machine Learning: A Long-Term Memory Organization.
- Author
-
Starzyk, Janusz A. and He, Haibo
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,MACHINE learning ,MACHINE theory ,MARKOV processes ,COMPUTER architecture ,COMPUTER storage devices - Abstract
Design of artificial neural structures capable of reliable and flexible long-term spatio-temporal memory is of paramount importance in machine intelligence. To this end, we propose a novel, biologically inspired, long-term memory (LTM) architecture. We intend to use it as a building block of a neuron-level architecture that is able to mimic natural intelligence through learning, anticipation, and goal-driven behavior. A mutual input enhancement and blocking structure is proposed, and its operation is discussed in detail. The paper focuses on hierarchical memory organization and on storage, recognition, and recall mechanisms. Simulation results of the proposed memory show its effectiveness, adaptability, and robustness. The accuracy of the proposed method is compared with that of other methods, including the Levenshtein distance method and a Markov chain. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
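The Levenshtein distance used as a comparison baseline in the entry above is the standard dynamic-programming edit distance between two sequences; a compact two-row implementation:

```python
def levenshtein(a, b):
    """Edit distance: minimum number of insertions, deletions, and
    substitutions turning sequence a into sequence b."""
    prev = list(range(len(b) + 1))          # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]                          # cost of deleting the first i symbols
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (ca != cb)))  # substitution (free if equal)
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))   # 3
```

In a sequence-recall benchmark like the paper's, the distance between the recalled sequence and the stored one provides a scalar accuracy measure.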
50. Neural Network Control of Multifingered Robot Hands Using Visual Feedback.
- Author
-
Yu Zhao and Chien Chern Cheah
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,MACHINE learning ,SUPPORT vector machines ,ROBOT control systems ,ROBOT kinematics - Abstract
It is interesting to observe that humans are able to manipulate an object easily and skillfully without exact knowledge of the object, the contact points, or the kinematics of their fingers. However, research so far on multifingered robot control has assumed that the kinematics and contact points of the fingers are known exactly. In many applications of multifingered robot hands, the kinematics and contact points of the fingers are uncertain and the structures of the Jacobian matrices are unknown. In this paper, we propose an adaptive neural network (NN) Jacobian controller for multifingered robot hands with uncertainties in kinematics, Jacobian matrices, and dynamics. It is shown that, using NNs, uniform ultimate boundedness of the position error can be achieved in the presence of these uncertainties. Simulation results are presented to illustrate the performance of the proposed controller. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
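A common baseline for the kind of task in the entry above is Jacobian-transpose control, which the paper generalizes to uncertain kinematics via NNs. A sketch for a single planar two-link finger with known kinematics (link lengths, gain, and target are illustrative assumptions):

```python
import numpy as np

def forward(q):
    """Fingertip position of a planar two-link arm with unit link lengths."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def jacobian(q):
    """Analytic Jacobian of forward(q). The paper's point is that this
    matrix is uncertain in practice; here it is assumed known."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

# Jacobian-transpose law driving the fingertip toward a visual target
q = np.array([0.3, 0.4])
target = np.array([1.2, 1.0])
gain = 0.01
for _ in range(5000):
    q = q + gain * jacobian(q).T @ (target - forward(q))
err = np.linalg.norm(target - forward(q))
print(err)   # position error shrinks toward zero
```

Replacing the analytic `jacobian(q)` with a learned approximation, updated online from visual feedback, is the step where the paper's adaptive NN machinery comes in.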