1,846 results
Search Results
2. A neural network-based model for paper currency recognition and verification.
- Author
-
Frosini, Angelo and Gori, Marco
- Subjects
- *BANK notes, *ARTIFICIAL neural networks, *IDENTIFICATION - Abstract
Describes the neural-based recognition and verification techniques used in a banknote machine implemented for accepting paper currency of different countries. Basis of the perception mechanism; Sensors for banknote perception; Connectionist models of banknote verification; Results showing the effectiveness of neural technologies.
- Published
- 1996
- Full Text
- View/download PDF
3. Paper Index for the IEEE TRANSACTIONS ON NEURAL NETWORKS Special Issue on Hardware Implementations.
- Subjects
- *ARTIFICIAL neural networks, *INDEXES, *PERIODICALS - Abstract
Lists titles of various papers, published in the journal "IEEE Transactions on Neural Networks" as of September 01, 2003, which exclusively deal with neural network systems and related work.
- Published
- 2003
- Full Text
- View/download PDF
4. Call for papers IEEE Transactions on Neural Networks Special Issue: Online Learning in Kernel Methods.
- Subjects
- *ARTIFICIAL neural networks, *ONLINE education, *KERNEL functions, *ADAPTIVE filters, *SUPPORT vector machines, *REGRESSION analysis, *PRINCIPAL components analysis - Published
- 2011
- Full Text
- View/download PDF
5. Call for papers IEEE Transactions on Neural Networks Special Issue: Online Learning in Kernel Methods.
- Subjects
- *PUBLICATIONS, *ARTIFICIAL neural networks, *ONLINE education, *KERNEL functions, *PRINCIPAL components analysis, *ADAPTIVE computing systems - Published
- 2011
- Full Text
- View/download PDF
6. Call for papers IEEE Transactions on Neural Networks Special Issue: Online Learning in Kernel Methods.
- Subjects
- *PUBLICATIONS, *ARTIFICIAL neural networks, *KERNEL functions, *ALGORITHMS, *MANUSCRIPTS, *ADAPTIVE computing systems - Published
- 2011
- Full Text
- View/download PDF
7. Call for papers IEEE Transactions on Neural Networks Special Issue: Online Learning in Kernel Methods.
- Subjects
- *ARTIFICIAL neural networks, *ONLINE education, *KERNEL functions, *ADAPTIVE filters, *PRINCIPAL components analysis, *LITERATURE reviews, *APPROXIMATION theory - Published
- 2011
- Full Text
- View/download PDF
8. Call for papers IEEE Transactions on Neural Networks Special Issue: Online Learning in Kernel Methods.
- Subjects
- *ARTIFICIAL neural networks, *SCIENCE periodicals, *PERIODICAL publishing, *ONLINE education, *KERNEL functions, *REGRESSION analysis, *PRINCIPAL components analysis - Published
- 2010
- Full Text
- View/download PDF
9. IEEE Transactions on Neural Networks information for authors.
- Subjects
AUTHOR-publisher relations ,SCIENCE periodicals ,PUBLISHING ,ARTIFICIAL neural networks ,ALGORITHMS - Published
- 2011
- Full Text
- View/download PDF
10. IEEE Transactions on Neural Networks information for authors.
- Subjects
COMPUTER science periodicals ,PERIODICAL publishing ,PUBLISHING ,ARTIFICIAL neural networks ,COMPUTER software ,MANUSCRIPTS ,MACHINE learning - Published
- 2011
- Full Text
- View/download PDF
11. IEEE Transactions on Neural Networks information for authors.
- Subjects
AUTHOR-publisher relations ,PERIODICAL publishing ,PUBLISHING ,COPYRIGHT of periodicals ,MANUSCRIPTS ,ARTIFICIAL neural networks ,RATES - Published
- 2011
- Full Text
- View/download PDF
12. IEEE Transactions on Neural Networks information for authors.
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence periodicals ,PERIODICAL publishing ,PUBLISHING ,COMPUTER software ,MANUSCRIPTS - Published
- 2011
- Full Text
- View/download PDF
13. IEEE Transactions on Neural Networks information for authors.
- Subjects
ARTIFICIAL neural networks ,PUBLISHING ,COMPUTER software ,COMPUTER input-output equipment ,ARTIFICIAL intelligence ,SELF-organizing systems ,ALGORITHMS - Published
- 2011
- Full Text
- View/download PDF
14. IEEE Transactions on Neural Networks information for authors.
- Subjects
ARTIFICIAL neural networks ,AUTHORS ,ARTIFICIAL intelligence periodicals ,PERIODICAL publishing ,PUBLISHING ,LITERATURE reviews - Published
- 2011
- Full Text
- View/download PDF
15. IEEE Transactions on Neural Networks information for authors.
- Subjects
PUBLICATIONS ,AUTHORS ,ARTIFICIAL neural networks ,MANUSCRIPTS ,COPYRIGHT ,SURVEYS - Published
- 2011
- Full Text
- View/download PDF
16. IEEE Transactions on Neural Networks information for authors.
- Subjects
PUBLICATIONS ,ARTIFICIAL neural networks ,AUTHORS ,MANUSCRIPTS ,PUBLISHING - Published
- 2011
- Full Text
- View/download PDF
17. IEEE Transactions on Neural Networks information for authors.
- Subjects
PERIODICALS ,ARTIFICIAL neural networks ,AUTHORS ,SCIENCE periodical publishing ,MANUSCRIPTS ,MACHINE learning ,COMMUNICATION & technology - Published
- 2011
- Full Text
- View/download PDF
18. IEEE Transactions on Neural Networks information for authors.
- Subjects
ARTIFICIAL neural networks ,INFORMATION theory ,AUTHORS ,MACHINE learning ,SCIENCE periodicals ,PERIODICAL publishing - Published
- 2010
- Full Text
- View/download PDF
19. Subgradient-Based Neural Networks for Nonsmooth Nonconvex Optimization Problems.
- Author
-
Wei Bian and Xiaoping Xue
- Subjects
ARTIFICIAL neural networks ,NONCONVEX programming ,STOCHASTIC convergence ,MATHEMATICAL optimization ,ALGORITHMS ,SIGNAL processing - Abstract
This paper presents a subgradient-based neural network to solve a nonsmooth nonconvex optimization problem with a nonsmooth nonconvex objective function, a class of affine equality constraints, and a class of nonsmooth convex inequality constraints. The proposed neural network is modeled with a differential inclusion. Under a suitable assumption on the constraint set and a proper assumption on the objective function, it is proved that for a sufficiently large penalty parameter, there exists a unique global solution to the neural network and the trajectory of the network can reach the feasible region in finite time and stay there thereafter. It is proved that the trajectory of the neural network converges to the set which consists of the equilibrium points of the neural network, and coincides with the set which consists of the critical points of the objective function in the feasible region. A condition is given to ensure the convergence to the equilibrium point set in finite time. Moreover, under suitable assumptions, the coincidence between the solution to the differential inclusion and the "slow solution" of it is also proved. Furthermore, three typical examples are given to demonstrate the effectiveness of the theoretical results obtained in this paper and the good performance of the proposed neural network. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
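The penalty mechanism described in the abstract above, adding a term that pushes trajectories back into the feasible region, has a simple discrete-time analogue. The sketch below applies a subgradient method to an exact-penalty function on a small convex test problem; it is an illustration only, since the paper's model is a continuous-time differential inclusion, and `penalty_subgradient` and the test problem are my own assumptions, not the paper's method:

```python
import numpy as np

def penalty_subgradient(f_sub, g, g_sub, x0, sigma=2.0, iters=2000):
    """Subgradient descent on the exact penalty f(x) + sigma * max(0, g(x)).
    f_sub and g_sub return subgradients of f and g; the step size 1/t is
    diminishing, the standard choice for subgradient methods."""
    x = np.asarray(x0, dtype=float)
    for t in range(1, iters + 1):
        d = f_sub(x)
        if g(x) > 0:            # infeasible: the penalty subgradient kicks in
            d = d + sigma * g_sub(x)
        x = x - d / t
    return x
```

For example, minimizing f(x) = |x1| + |x2| subject to x1 + x2 >= 1 (i.e., g(x) = 1 - x1 - x2 <= 0) drives the iterates to the constraint boundary, where the objective value is 1.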
20. Ranked Centroid Projection: A Data Visualization Approach With Self-Organizing Maps.
- Author
-
Yen, Gary G. and Zheng Wu
- Subjects
ARTIFICIAL neural networks ,SELF-organizing maps ,TEXT mining ,VISUAL programming languages (Computer science) ,CONTENT mining ,SELF-organizing systems ,VECTOR analysis ,DATA mining ,ENCODING - Abstract
The self-organizing map (SOM) is an efficient tool for visualizing high-dimensional data. In this paper, the clustering and visualization capabilities of the SOM, especially in the analysis of textual data, i.e., document collections, are reviewed and further developed. A novel clustering and visualization approach based on the SOM is proposed for the task of text mining. The proposed approach first transforms the document space into a multidimensional vector space by means of document encoding. Afterwards, a growing hierarchical SOM (GHSOM) is trained and used as a baseline structure to automatically produce maps with various levels of detail. Following the GHSOM training, the new projection method, namely the ranked centroid projection (RCP), is applied to project the input vectors to a hierarchy of 2-D output maps. The RCP is used as a data analysis tool as well as a direct interface to the data. In a set of simulations, the proposed approach is applied to an illustrative data set and two real-world scientific document collections to demonstrate its applicability. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
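The GHSOM/RCP pipeline in the abstract above starts from standard SOM training. As background, a minimal plain SOM can be sketched as follows; this is not the paper's growing hierarchical variant, and `train_som` and its parameter values are illustrative assumptions:

```python
import numpy as np

def train_som(data, grid=(5, 5), iters=500, lr0=0.5, sigma0=2.0, seed=0):
    """Plain rectangular-grid SOM trained by the usual online rule:
    pick a sample, find the best-matching unit (BMU), then pull the BMU
    and its grid neighbors toward the sample with decaying rate/radius."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # Grid coordinates of each node, used by the Gaussian neighborhood.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), (h, w))
        lr = lr0 * np.exp(-t / iters)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)    # shrinking neighborhood radius
        nb = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
        weights += lr * nb[..., None] * (x - weights)
    return weights
```

After training, each input is mapped to its BMU, which is the 2-D projection the paper's RCP method refines into a ranked projection.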
21. Improving Procedures for Evaluation of Connectionist Context-Free Language Predictors.
- Author
-
Jacobson, Henrik and Ziemke, Tom
- Subjects
ARTIFICIAL neural networks ,NEURAL circuitry ,ARTIFICIAL intelligence ,SYSTEMS engineering ,EVOLUTIONARY computation - Abstract
Shows how seemingly minor differences in training and evaluation procedures used in some studies of recurrent neural networks as context free language predictors can lead to significant differences in apparent network performance. Experimental methods; Results of the study; Conclusion.
- Published
- 2003
22. A New Formulation for Feedforward Neural Networks.
- Author
-
Razavi, Saman and Tolson, Bryan A.
- Subjects
FEEDFORWARD control systems ,ARTIFICIAL neural networks ,APPROXIMATION theory ,MACHINE learning ,RANDOM variables ,RESPONSE surfaces (Statistics) - Abstract
The feedforward neural network is one of the most commonly used function approximation techniques and has been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models with multiple difficulties associated with training and generalization. This paper initially looks into the internal behavior of neural networks and develops a detailed interpretation of the neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. Then, this paper develops a new formulation for neural networks with respect to the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, two training methods are employed in this paper, involving a derivative-based algorithm (a variation of backpropagation) and a derivative-free optimization algorithm. Moreover, a new measure of regularization on the basis of the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure are demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently compared to common neural networks, and the proposed regularization measure is an effective indicator of how a network would perform in terms of generalization. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
23. Common Asymptotic Behavior of Solutions and Almost Periodicity for Discontinuous, Delayed, and Impulsive Neural Networks.
- Author
-
Allegretto, Walter, Papini, Duccio, and Forti, Mauro
- Subjects
CYCLES ,DISCONTINUOUS functions ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,PERIODIC functions - Abstract
The paper considers a general neural network model with impulses at a given sequence of instants, discontinuous neuron activations, delays, and time-varying data and inputs. It is shown that when the neuron interconnections satisfy an M-matrix condition, or a dominance condition, then the state solutions and the output solutions display a common asymptotic behavior as time t → +∞. It is also shown, via a new technique based on prolonging the solutions of the delayed neural network to -∞, that it is possible to select a unique special solution that is globally exponentially stable and can be considered as the unique global attractor for the network. Finally, this paper shows that for almost periodic data and inputs the selected solution is almost periodic; moreover, it is robust with respect to a large class of perturbations of the data. Analogous results also hold for periodic data and inputs. A by-product of the analysis is that a sequence of almost periodic impulses is able to induce in the generic case (nonstationary) almost periodic solutions in an otherwise globally convergent nonimpulsive neural network. To the authors' knowledge the results in this paper are the only available results on global exponential stability of the unique periodic or almost periodic solution for a general neural network model combining three main features, i.e., impulses, discontinuous neuron activations and delays. The results in this paper are compared with several results in the literature dealing with periodicity or almost periodicity of some subclasses of the neural network model here considered and some hints for future work are given. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
24. Regularized Negative Correlation Learning for Neural Network Ensembles.
- Author
-
Huanhuan Chen and Xin Yao
- Subjects
ARTIFICIAL neural networks ,SET theory ,ALGORITHMS ,ARTIFICIAL intelligence ,COMPUTER science - Abstract
Negative correlation learning (NCL) is a neural network ensemble learning algorithm that introduces a correlation penalty term to the cost function of each individual network so that each neural network minimizes its mean square error (MSE) together with the correlation of the ensemble. This paper analyzes NCL and reveals that the training of NCL (when λ = 1) corresponds to training the entire ensemble as a single learning machine that only minimizes the MSE without regularization. This analysis explains the reason why NCL is prone to overfitting the noise in the training set. This paper also demonstrates that tuning the correlation parameter λ in NCL by cross validation cannot overcome the overfitting problem. The paper analyzes this problem and proposes the regularized negative correlation learning (RNCL) algorithm which incorporates an additional regularization term for the whole ensemble. RNCL decomposes the ensemble's training objectives, including MSE and regularization, into a set of sub-objectives, and each sub-objective is implemented by an individual neural network. In this paper, we also provide a Bayesian interpretation for RNCL and provide an automatic algorithm to optimize regularization parameters based on Bayesian inference. The RNCL formulation is applicable to any nonlinear estimator minimizing the MSE. The experiments on synthetic as well as real-world data sets demonstrate that RNCL achieves better performance than NCL, especially when the noise level is nontrivial in the data set. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
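The paper's central claim, that NCL with λ = 1 trains the ensemble as a single machine minimizing the unregularized MSE of the ensemble mean, can be checked numerically. In the sketch below, `ncl_cost` and `num_grad` are hypothetical helper names, not code from the paper:

```python
import numpy as np

def ncl_cost(f, y, lam):
    """Total NCL cost over an ensemble with member outputs f on target y:
    each member pays its squared error plus lam times the usual penalty
    p_i = -(f_i - fbar)^2, where fbar is the ensemble mean."""
    fbar = f.mean()
    return np.sum((f - y) ** 2 - lam * (f - fbar) ** 2)

def num_grad(cost, f, eps=1e-6):
    """Central-difference gradient of the cost w.r.t. the member outputs."""
    g = np.zeros_like(f)
    for k in range(f.size):
        fp, fm = f.copy(), f.copy()
        fp[k] += eps
        fm[k] -= eps
        g[k] = (cost(fp) - cost(fm)) / (2 * eps)
    return g
```

At λ = 1 the total cost collapses algebraically to M(f̄ − y)², so every member receives the identical gradient 2(f̄ − y) and no member-level regularization remains, which is the overfitting mechanism the paper identifies.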
25. The Q-Norm Complexity Measure and the Minimum Gradient Method: A Novel Approach to the Machine Learning Structural Risk Minimization Problem.
- Author
-
Vieira, D. A. G., Takahashi, Ricardo H. C., Palade, Vasile, Vasconcelos, J. A., and Caminhas, W. M.
- Subjects
MATHEMATICAL optimization ,MACHINE learning ,MACHINE theory ,ARTIFICIAL intelligence ,SELF-organizing systems ,ARTIFICIAL neural networks ,COMPUTATIONAL intelligence - Abstract
This paper presents a novel approach for dealing with the structural risk minimization (SRM) applied to a general setting of the machine learning problem. The formulation is based on the fundamental concept that supervised learning is a bi-objective optimization problem in which two conflicting objectives should be minimized. The objectives are related to the empirical training error and the machine complexity. In this paper, one general Q-norm method to compute the machine complexity is presented, and, as a particular practical case, the minimum gradient method (MGM) is derived relying on the definition of the fat-shattering dimension. A practical mechanism for parallel layer perceptron (PLP) network training, involving only quasi-convex functions, is generated using the aforementioned definitions. Experimental results on 15 different benchmarks are presented, which show the potential of the proposed ideas. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
26. Stability and Hopf Bifurcation of a General Delayed Recurrent Neural Network.
- Author
-
Wenwu Yu, Jinde Cao, and Guanrong Chen
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,EVOLUTIONARY computation ,EQUILIBRIUM ,STABILITY (Mechanics) ,HOPF algebras ,ALGEBRAIC topology - Abstract
In this paper, stability and bifurcation of a general recurrent neural network with multiple time delays are considered, where all the variables of the network can be regarded as bifurcation parameters. It is found that Hopf bifurcation occurs when these parameters pass through some critical values where the conditions for local asymptotical stability of the equilibrium are not satisfied. By analyzing the characteristic equation and using the frequency domain method, the existence of Hopf bifurcation is proved. The stability of bifurcating periodic solutions is determined by the harmonic balance approach, Nyquist criterion, and graphic Hopf bifurcation theorem. Moreover, a critical condition is derived under which the stability is not guaranteed, thus a necessary and sufficient condition for ensuring the local asymptotical stability is well understood, and from which the essential dynamics of the delayed neural network are revealed. Finally, numerical results are given to verify the theoretical analysis, and some interesting phenomena are observed and reported. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
27. Multilayer Perceptrons: Approximation Order and Necessary Number of Hidden Units.
- Author
-
Trenn, Stephan
- Subjects
NONLINEAR systems ,POLYNOMIALS ,SYSTEMS theory ,VOLTERRA equations ,VOLTERRA series ,ARTIFICIAL neural networks ,COMPUTATIONAL mathematics - Abstract
This paper considers the approximation of sufficiently smooth multivariable functions with a multilayer perceptron (MLP). For a given approximation order, explicit formulas for the necessary number of hidden units and its distributions to the hidden layers of the MLP are derived. These formulas depend only on the number of input variables and on the desired approximation order. The concept of approximation order encompasses Kolmogorov-Gabor polynomials or discrete Volterra series, which are widely used in static and dynamic models of nonlinear systems. The results are obtained by considering structural properties of the Taylor polynomials of the function in question and of the MLP function. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
28. Output Feedback Stabilization for Time-Delay Nonlinear Interconnected Systems Using Neural Networks.
- Author
-
Changchun Hua and Xinping Guan
- Subjects
INTEGRATED circuit interconnections ,TIME delay systems ,ELECTRONIC feedback ,ELECTRONIC controllers ,APPROXIMATION theory ,ARTIFICIAL neural networks - Abstract
In this paper, the dynamic output feedback control problem is investigated for a class of nonlinear interconnected systems with time delays. A decentralized observer independent of the time delays is first designed. Then, we employ the bounds information of uncertain interconnections to construct the decentralized output feedback controller via the backstepping design method. Based on Lyapunov stability theory, we show that the designed controller can render the closed-loop system asymptotically stable with the help of the changing supplying function idea. Furthermore, the corresponding decentralized control problem is considered under the case that the bounds of uncertain interconnections are not precisely known. By employing the neural network approximation theory, we construct the neural network output feedback controller with a corresponding adaptive law. The resulting closed-loop system is stable in the sense of semiglobal boundedness. The observers and controllers constructed in this paper are independent of the time delays. Finally, simulations are done to verify the effectiveness of the theoretic results obtained. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
29. Global Exponential Stability of Bidirectional Associative Memory Neural Networks With Time Delays.
- Author
-
Xin-Ge Liu, Martin, Ralph R., Min Wu, and Mei-Lan Tang
- Subjects
EXPONENTIAL functions ,ARTIFICIAL neural networks ,TIME delay systems ,LYAPUNOV functions ,STOCHASTIC convergence ,MATHEMATICAL models ,LIPSCHITZ spaces - Abstract
In this paper, we consider delayed bidirectional associative memory (BAM) neural networks (NNs) with Lipschitz continuous activation functions. By applying Young's inequality and Hölder's inequality techniques together with the properties of monotonic continuous functions, global exponential stability criteria are established for BAM NNs with time delays. This is done through the use of a new Lyapunov functional and an M-matrix. The results obtained in this paper extend and improve previous results. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
30. Neural-Network-Based Approximate Output Regulation of Discrete-Time Nonlinear Systems.
- Author
-
Weiyao Lan and Jie Huang
- Subjects
DISCRETE-time systems ,ARTIFICIAL neural networks ,NONLINEAR control theory ,NONLINEAR systems ,APPROXIMATION theory ,FEEDFORWARD control systems - Abstract
The existing approaches to the discrete-time nonlinear output regulation problem rely on the offline solution of a set of mixed nonlinear functional equations known as discrete regulator equations. For complex nonlinear systems, it is difficult to solve the discrete regulator equations even approximately. Moreover, for systems with uncertainty, these approaches cannot offer a reliable solution. By combining the approximation capability of feedforward neural networks (NNs) with an online parameter optimization mechanism, we develop an approach to solving the discrete nonlinear output regulation problem without solving the discrete regulator equations explicitly. The approach of this paper can be viewed as a discrete counterpart of our previous paper on approximately solving the continuous-time nonlinear output regulation problem. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
31. Discrete-Time Adaptive Backstepping Nonlinear Control via High-Order Neural Networks.
- Author
-
Alanis, Alma Y., Sanchez, Edgar N., and Loukianov, Alexander G.
- Subjects
DISCRETE-time systems ,NONLINEAR control theory ,NONLINEAR systems ,ARTIFICIAL neural networks ,KALMAN filtering ,ELECTRIC inductors - Abstract
This paper deals with adaptive tracking for discrete-time multiple-input-multiple-output (MIMO) nonlinear systems in presence of bounded disturbances. In this paper, a high-order neural network (HONN) structure is used to approximate a control law designed by the backstepping technique, applied to a block strict feedback form (BSFF). This paper also includes the respective stability analysis, on the basis of the Lyapunov approach, for the whole controlled system, including the extended Kalman filter (EKF)-based NN learning algorithm. Applicability of the scheme is illustrated via simulation for a discrete-time nonlinear model of an electric induction motor. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
32. Robust/Optimal Temperature Profile Control of a High-Speed Aerospace Vehicle Using Neural Networks.
- Author
-
Yadav, Vivek, Padhi, Radhakant, and Balakrishnan, S. N.
- Subjects
DYNAMIC programming ,ARTIFICIAL neural networks ,TEMPERATURE control ,AEROSPACE planes ,FEEDBACK control systems ,NONLINEAR systems - Abstract
An approximate dynamic programming (ADP)-based suboptimal neurocontroller to obtain desired temperature for a high-speed aerospace vehicle is synthesized in this paper. A 1-D distributed parameter model of a fin is developed from basic thermal physics principles. ‘Snapshot’ solutions of the dynamics are generated with a simple dynamic inversion-based feedback controller. Empirical basis functions are designed using the ‘proper orthogonal decomposition’ (POD) technique and the snapshot solutions. A low-order nonlinear lumped parameter system to characterize the infinite dimensional system is obtained by carrying out a Galerkin projection. An ADP-based neurocontroller with a dual heuristic programming (DHP) formulation is obtained with a single-network-adaptive-critic (SNAC) controller for this approximate nonlinear model. Actual control in the original domain is calculated with the same POD basis functions through a reverse mapping. Further contribution of this paper includes development of an online robust neurocontroller to account for unmodeled dynamics and parametric uncertainties inherent in such a complex dynamic system. A neural network (NN) weight update rule that guarantees boundedness of the weights and relaxes the need for persistence of excitation (PE) condition is presented. Simulation studies show that in a fairly extensive but compact domain, any desired temperature profile can be achieved starting from any initial temperature profile. Therefore, the ADP and NN-based controllers appear to have the potential to become controller synthesis tools for nonlinear distributed parameter systems. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
33. A New, Adaptive Backpropagation Algorithm Based on Lyapunov Stability Theory for Neural Networks.
- Author
-
Zhihong Man, Hong Ren Wu, Liu, Sophie, and Xinghuo Yu
- Subjects
BACK propagation ,ARTIFICIAL neural networks ,LYAPUNOV stability ,CONTROL theory (Engineering) ,MACHINE theory ,LYAPUNOV functions - Abstract
A new adaptive backpropagation (BP) algorithm based on Lyapunov stability theory for neural networks is developed in this paper. A candidate Lyapunov function V(k) of the tracking error between the output of a neural network and the desired reference signal is chosen first, and the weights of the neural network are then updated, from the output layer to the input layer, in the sense that ΔV(k) = V(k) − V(k − 1) < 0. The output tracking error can then asymptotically converge to zero according to Lyapunov stability theory. Unlike gradient-based BP training algorithms, the new Lyapunov adaptive BP algorithm in this paper is not used for searching the global minimum point along the cost-function surface in the weight space, but it is aimed at constructing an energy surface with a single global minimum point through the adaptive adjustment of the weights as time goes to infinity. Although a neural network may have bounded input disturbances, the effects of the disturbances can be eliminated, and asymptotic error convergence can be obtained. The new Lyapunov adaptive BP algorithm is then applied to the design of an adaptive filter in the simulation example to show the fast error convergence and strong robustness with respect to large bounded input disturbances. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
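For a single linear neuron, one update consistent with the ΔV(k) < 0 requirement in the abstract above is the normalized-LMS-type step below, which zeroes the post-update error on each sample so the candidate V(k) = e(k)² decreases. This is a minimal single-neuron illustration, not the paper's full layer-by-layer algorithm, and `lyapunov_step` is a hypothetical name:

```python
import numpy as np

def lyapunov_step(w, x, d, eps=1e-8):
    """One update of a single linear neuron chosen so that the Lyapunov
    candidate V(k) = e(k)^2 decreases: the step w += x * e / (x . x)
    drives the post-update error on the current sample to zero."""
    e = d - w @ x                      # tracking error before the update
    return w + x * e / (x @ x + eps), e
```

Applied sample by sample in an adaptive-filter setting, the per-sample errors shrink toward zero as the weights approach the underlying linear map.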
34. Existence and Global Exponential Stability of Almost Periodic Solution for Cellular Neural Networks With Variable Coefficients and Time-Varying Delays.
- Author
-
Haijun Jiang, Long Zhang, and Zhidong Teng
- Subjects
LYAPUNOV functions ,ARTIFICIAL neural networks ,DIFFERENTIAL equations ,ARTIFICIAL intelligence ,CALCULUS ,EXPONENTIAL functions - Abstract
In this paper, we study cellular neural networks with almost periodic variable coefficients and time-varying delays. By using the existence theorem of almost periodic solutions for general functional differential equations, introducing many real parameters and applying the Lyapunov functional method and the technique of Young's inequality, we obtain some sufficient conditions to ensure the existence, uniqueness, and global exponential stability of the almost periodic solution. The results obtained in this paper are new and useful, and extend and improve the existing ones in previous literature. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
35. Sensitivity to Noise in Bidirectional Associative Memory (BAM).
- Author
-
Du, Shengzhi, Zengqiang Chen, Zhuzhi Yuan, and Xinghui Zhang
- Subjects
MEMORY ,SENSORY perception ,ALGORITHMS ,ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,COMPUTER science - Abstract
The original Hebbian encoding scheme of bidirectional associative memory (BAM) provides a poor pattern capacity and recall performance. Based on Rosenblatt's perceptron learning algorithm, the pattern capacity of BAM is enlarged, and perfect recall of all training pattern pairs is guaranteed. However, these methods put their emphases on pattern capacity, rather than error correction capability, which is another critical point of BAM. This paper analyzes the sensitivity to noise in BAM and obtains an interesting idea to improve noise immunity of BAM. Some researchers have found that the noise sensitivity of BAM relates to the minimum absolute value of net inputs (MAV). However, in this paper, the analysis on failure association shows that it is related not only to MAV but also to the variance of weights associated with synapse connections. In fact, it is a positive monotone increasing function of the quotient of MAV divided by the variance of weights. This idea provides a useful principle for improving the error correction capability of BAM. Some revised encoding schemes, such as small variance learning for BAM (SVBAM), evolutionary pseudorelaxation learning for BAM (EPRLAB) and evolutionary bidirectional learning (EBL), have been introduced to illustrate the performance of this principle. All these methods perform better than their original versions in noise immunity. Moreover, these methods have no negative effect on the pattern capacity of BAM. The convergence of these methods is also discussed in this paper. If there exist solutions, EPRLAB and EBL always converge to a global optimal solution in the sense of both pattern capacity and noise immunity. However, the convergence of SVBAM may be affected by a preset function. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
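For context, the original Hebbian (outer-product) encoding that the abstract above criticizes can be sketched as follows. This is a toy illustration with my own function names; the paper's revised schemes (SVBAM, EPRLAB, EBL) are not shown:

```python
import numpy as np

def bam_train(pairs):
    """Hebbian outer-product encoding: W = sum_k x_k y_k^T over bipolar pairs."""
    return sum(np.outer(x, y) for x, y in pairs)

def _threshold(net, prev):
    # Bipolar threshold; on a zero net input keep the previous state
    # (the standard BAM tie rule).
    out = np.sign(net).astype(int)
    out[net == 0] = prev[net == 0]
    return out

def bam_recall(W, x0, steps=10):
    """Bidirectional recall: alternate x -> y -> x until a stable pair is reached."""
    x = x0.copy()
    net = x @ W
    y = np.sign(net).astype(int)
    y[net == 0] = 1  # arbitrary tie-break on the first forward pass
    for _ in range(steps):
        x_new = _threshold(W @ y, x)
        y_new = _threshold(x_new @ W, y)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break
        x, y = x_new, y_new
    return x, y
```

With a few orthogonal bipolar pairs recall is exact even under one flipped bit; with more or correlated pairs the outer-product rule degrades, which is the capacity and noise-immunity problem the paper addresses.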
36. Probabilistic Sequential Independent Components Analysis.
- Author
-
Welling, Max, Zemel, Richard S., and Hinton, Geoffrey E.
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,ALGORITHMS ,LEARNING ,GRAPHICAL modeling (Statistics) ,STOCHASTIC processes - Abstract
Under-complete models, which derive lower dimensional representations of input data, are valuable in domains in which the number of input dimensions is very large, such as data consisting of a temporal sequence of images. This paper presents the under-complete product of experts (UPoE), where each expert models a one-dimensional projection of the data. Maximum-likelihood learning rules for this model constitute a tractable and exact algorithm for learning under-complete independent components. The learning rules for this model coincide with approximate learning rules proposed earlier for under-complete independent component analysis (UICA) models. This paper also derives an efficient sequential learning algorithm from this model and discusses its relationship to sequential independent component analysis (ICA), projection pursuit density estimation, and feature induction algorithms for additive random field models. This paper demonstrates the efficacy of these novel algorithms on high-dimensional continuous datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
37. Exponential Synchronization of Complex Networks With Finite Distributed Delays Coupling.
- Author
-
Hu, Cheng, Yu, Juan, Jiang, Haijun, and Teng, Zhidong
- Subjects
COMPUTATIONAL complexity ,ARTIFICIAL neural networks ,COMPUTER simulation ,CONTROL theory (Engineering) ,SYNCHRONIZATION ,COMPARATIVE studies - Abstract
In this paper, the exponential synchronization for a class of complex networks with finite distributed delays coupling is studied via periodically intermittent control. Some novel and useful criteria are derived by utilizing a technique different from those used in related previous results. As a special case, some sufficient conditions ensuring the exponential synchronization for a class of coupled neural networks with distributed delays are obtained. Furthermore, a feasible region of the control parameters is derived for the realization of exponential synchronization. It is worth noting that the synchronized state in this paper is not an isolated node but a non-decoupled state, in which the inner coupling matrix and the degree of the nodes play a central role. Additionally, the traditional assumptions on control width, non-control width, and discrete delays are removed in our results. Finally, some numerical simulations are given to demonstrate the effectiveness of the proposed control method. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
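The idea of periodically intermittent control — feedback that is switched on only for a fraction of each control period — can be illustrated on a toy scalar drive/response pair. The Lipschitz node dynamics, gain, and duty cycle below are assumed values for illustration, not the paper's coupled-network setting or its derived feasible region.

```python
import math

def simulate(T=20.0, dt=0.01, period=1.0, duty=0.6, k=5.0):
    # Feedback u = -k (x - s) is applied only during the first `duty`
    # fraction of each period; off the rest of the time.
    s, x = 0.5, -1.0                 # drive (target) node and response node
    err = [abs(x - s)]
    for i in range(int(T / dt)):
        t = i * dt
        u = -k * (x - s) if (t % period) < duty * period else 0.0
        s, x = s + dt * math.tanh(s), x + dt * (math.tanh(x) + u)
        err.append(abs(x - s))
    return err

err = simulate()
```

With a 1-Lipschitz node nonlinearity, the error contracts at rate roughly k-1 while control is on and grows at most at rate 1 while it is off, so a duty cycle of 0.6 with k=5 gives net exponential decay — the same averaging intuition that underlies intermittent-control criteria.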
38. Delay-Independent Stability of Genetic Regulatory Networks.
- Author
-
Wu, Fang-Xiang
- Subjects
TIME delay systems ,STABILITY (Mechanics) ,ARTIFICIAL neural networks ,NONLINEAR differential equations ,RNA splicing ,GENETIC regulation ,EIGENVALUES - Abstract
Genetic regulatory networks can be described by nonlinear differential equations with time delays. In this paper, we study both locally and globally delay-independent stability of genetic regulatory networks, taking messenger ribonucleic acid alternative splicing into consideration. Based on nonnegative matrix theory, we first develop necessary and sufficient conditions for locally delay-independent stability of genetic regulatory networks with multiple time delays. Compared to the previous results, these conditions are easy to verify. Then we develop sufficient conditions for global delay-independent stability for genetic regulatory networks. Compared to the previous results, this sufficient condition is less conservative. To illustrate theorems developed in this paper, we analyze delay-independent stability of two genetic regulatory networks: a real-life repressilatory network with three genes and three proteins, and a synthetic gene regulatory network with five genes and seven proteins. The simulation results show that the theorems developed in this paper can effectively determine the delay-independent stability of genetic regulatory networks. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
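To give the flavor of a nonnegative-matrix stability test, the sketch below checks a standard M-matrix-style *sufficient* condition for a linear delayed comparison system x'(t) = -A x(t) + B x(t - tau) with diagonal A > 0: stability for every delay if the spectral radius of A^{-1}|B| is below one. This toy linear check is an assumption-labeled stand-in — the paper's conditions are for nonlinear genetic regulatory networks with multiple delays and include necessary-and-sufficient local results.

```python
import numpy as np

def delay_independent_stable(A_diag, B):
    # Sufficient check: rho(A^{-1} |B|) < 1 implies stability for all delays
    # (M-matrix style test for the linear comparison system; illustrative only).
    M = np.abs(B) / A_diag[:, None]
    return bool(max(abs(np.linalg.eigvals(M))) < 1)

A_diag = np.array([2.0, 3.0, 2.5])          # degradation rates (assumed)
B = np.array([[0.0, 0.5, 0.2],
              [0.3, 0.0, 0.4],
              [0.2, 0.3, 0.0]])             # delayed regulation strengths (assumed)
ok = delay_independent_stable(A_diag, B)
```

Scaling the regulation matrix up makes the test fail, matching the intuition that strong delayed feedback can destroy delay-independent stability.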
39. Textual and Visual Content-Based Anti-Phishing: A Bayesian Approach.
- Author
-
Zhang, Haijun, Liu, Gang, Chow, Tommy W. S., and Liu, Wenyin
- Subjects
BAYESIAN analysis ,PHISHING ,WEBSITES ,CLASSIFICATION ,ARTIFICIAL neural networks ,ALGORITHMS ,FEATURE extraction ,MULTISENSOR data fusion - Abstract
A novel framework using a Bayesian approach for content-based phishing web page detection is presented. Our model takes into account textual and visual contents to measure the similarity between the protected web page and suspicious web pages. A text classifier, an image classifier, and an algorithm fusing the results from classifiers are introduced. An outstanding feature of this paper is the exploration of a Bayesian model to estimate the matching threshold. This is required in the classifier for determining the class of the web page and identifying whether the web page is phishing or not. In the text classifier, the naive Bayes rule is used to calculate the probability that a web page is phishing. In the image classifier, the earth mover's distance is employed to measure the visual similarity, and our Bayesian model is designed to determine the threshold. In the data fusion algorithm, the Bayes theory is used to synthesize the classification results from textual and visual content. The effectiveness of our proposed approach was examined in a large-scale dataset collected from real phishing cases. Experimental results demonstrated that the text classifier and the image classifier we designed deliver promising results, the fusion algorithm outperforms either of the individual classifiers, and our model can be adapted to different phishing cases. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
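The textual half of the framework uses the naive Bayes rule to score a page as phishing. The toy documents and vocabulary below are invented for illustration; the visual half (earth mover's distance) and the Bayesian threshold estimation are not reproduced here.

```python
import math
from collections import Counter

def train_nb(docs, labels):
    # Per-class word counts and class priors (Laplace smoothing at test time)
    counts = {0: Counter(), 1: Counter()}
    priors = {0: 0, 1: 0}
    for doc, y in zip(docs, labels):
        counts[y].update(doc.split())
        priors[y] += 1
    vocab = set(counts[0]) | set(counts[1])
    return counts, priors, vocab

def posterior_phish(doc, counts, priors, vocab):
    # P(phish | doc) via the naive Bayes rule (word-independence assumption)
    total = priors[0] + priors[1]
    logp = {}
    for y in (0, 1):
        n = sum(counts[y].values())
        logp[y] = math.log(priors[y] / total)
        for w in doc.split():
            logp[y] += math.log((counts[y][w] + 1) / (n + len(vocab)))
    m = max(logp.values())
    z = sum(math.exp(v - m) for v in logp.values())
    return math.exp(logp[1] - m) / z

docs = ["verify your account password now",
        "account suspended verify password",
        "meeting notes attached see agenda",
        "agenda for the weekly meeting"]
labels = [1, 1, 0, 0]                        # 1 = phishing, 0 = legitimate
counts, priors, vocab = train_nb(docs, labels)
p = posterior_phish("please verify your password", counts, priors, vocab)
```

In the full framework this probability would be fused with the image classifier's score and compared against the Bayesian-estimated matching threshold.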
40. Parallel Reservoir Computing Using Optical Amplifiers.
- Author
-
Vandoorne, Kristof, Dambre, Joni, Verstraeten, David, Schrauwen, Benjamin, and Bienstman, Peter
- Subjects
OPTICAL amplifiers ,ARTIFICIAL neural networks ,ELECTRIC network topology ,PHOTONICS ,PHASE shift (Nuclear physics) ,SPEECH perception ,SEMICONDUCTORS ,INTEGRATED optics - Abstract
Reservoir computing (RC), a computational paradigm inspired by neural systems, has become increasingly popular in recent years for solving a variety of complex recognition and classification problems. Thus far, most implementations have been software-based, limiting their speed and power efficiency. Integrated photonics offers the potential for a fast, power-efficient, and massively parallel hardware implementation. We have previously proposed a network of coupled semiconductor optical amplifiers as an interesting test case for such a hardware implementation. In this paper, we investigate the important design parameters and the consequences of process variations through simulations. We use an isolated word recognition task with babble noise to evaluate the performance of the photonic reservoirs with respect to traditional software reservoir implementations, which are based on leaky hyperbolic tangent functions. Our results show that the use of coherent light in a well-tuned reservoir architecture offers significant performance benefits. The most important design parameters are the delay and the phase shift in the system's physical connections. With optimized values for these parameters, coherent semiconductor optical amplifier (SOA) reservoirs can achieve better results than traditional simulated reservoirs. We also show that process variations hardly degrade the performance, but amplifier noise can be detrimental. This effect must therefore be taken into account when designing SOA-based RC implementations. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
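The "traditional software reservoir" baseline the paper compares against — a leaky hyperbolic-tangent reservoir — is easy to sketch. Sizes, leak rate, and scaling below are assumed values; scaling the recurrent matrix by its spectral norm (rather than spectral radius) is a deliberately conservative choice that guarantees the fading-memory contraction demonstrated at the end.

```python
import numpy as np

rng = np.random.default_rng(2)
n_res, leak = 100, 0.3

W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.linalg.norm(W, 2)          # spectral norm 0.9 -> contraction 0.97/step

def run(u_seq, x0):
    # Leaky-tanh reservoir update: x <- (1-a) x + a tanh(W_in u + W x)
    x = x0.copy()
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W_in * u + W @ x)
    return x

u_seq = np.sin(0.2 * np.arange(200))
xa = run(u_seq, np.zeros(n_res))
xb = run(u_seq, rng.standard_normal(n_res))  # same input, different initial state
fading = float(np.linalg.norm(xa - xb))
```

Two trajectories driven by the same input forget their initial conditions (the echo-state property); in a trained system a linear readout on the final states performs the classification.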
41. Rapid Detection of Small Oscillation Faults via Deterministic Learning.
- Author
-
Wang, Cong and Chen, Tianrui
- Subjects
MACHINE learning ,UNCERTAINTY (Information theory) ,APPROXIMATION theory ,NONLINEAR theories ,FAULT tolerance (Engineering) ,OSCILLATIONS ,RADIAL basis functions ,SIMULATION methods & models - Abstract
Detection of small faults is one of the most important and challenging tasks in the area of fault diagnosis. In this paper, we present an approach for the rapid detection of small oscillation faults based on a recently proposed deterministic learning (DL) theory. The approach consists of two phases: the training phase and the test phase. In the training phase, the system dynamics underlying normal and fault oscillations are locally accurately approximated through DL. The obtained knowledge of system dynamics is stored in constant radial basis function (RBF) networks. In the test phase, rapid detection is implemented. Specifically, a bank of estimators is constructed using the constant RBF neural networks to represent the trained normal and fault modes. By comparing this set of estimators with the monitored system, a set of residuals is generated, and the average L1 norms of the residuals are taken as measures of the differences between the dynamics of the monitored system and the dynamics of the trained normal mode and oscillation faults. The occurrence of a test oscillation fault can be rapidly detected according to the smallest residual principle. A rigorous analysis of the performance of the detection scheme is also given. The novelty of the paper lies in the fact that the modeling uncertainty and nonlinear fault functions are accurately approximated, and this knowledge is then utilized to achieve rapid detection of small oscillation faults. Simulation studies are included to demonstrate the effectiveness of the approach. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
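The smallest-residual principle is simple to demonstrate. Below, fixed candidate dynamics stand in for the constant-RBF models that deterministic learning would produce; the dynamics, fault magnitude, and horizon are all assumed toy values.

```python
import numpy as np

# Bank of candidate dynamics (stand-ins for learned constant-RBF models);
# mode 1 is a small-amplitude fault of mode 0, mode 2 a frequency fault.
modes = [lambda x, t: -x + np.sin(t),
         lambda x, t: -x + 1.05 * np.sin(t),
         lambda x, t: -x + np.sin(1.2 * t)]

def residuals(true_mode, dt=0.01, T=30.0):
    # Each estimator predicts the next state; its average L1 prediction
    # error is the residual (smallest residual -> detected mode).
    x, res = 0.0, np.zeros(len(modes))
    for i in range(int(T / dt)):
        t = i * dt
        x_next = x + dt * modes[true_mode](x, t)
        for j, f in enumerate(modes):
            res[j] += abs(x_next - (x + dt * f(x, t)))
        x = x_next
    return res / int(T / dt)

res = residuals(true_mode=1)
detected = int(np.argmin(res))
```

Even though mode 1 differs from the normal mode 0 by only 5% in amplitude, its estimator produces the smallest residual, so the small fault is detected.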
42. Convergence Dynamics of Stochastic Cohen–Grossberg Neural Networks With Unbounded Distributed Delays.
- Author
-
Huang, Chuangxia and Cao, Jinde
- Subjects
STOCHASTIC convergence ,ARTIFICIAL neural networks ,STOCHASTIC processes ,DISTRIBUTED algorithms ,NONLINEAR theories ,INTEGRO-differential equations ,STABILITY (Mechanics) - Abstract
This paper addresses the issue of the convergence dynamics of stochastic Cohen–Grossberg neural networks (SCGNNs) with white noise, whose state variables are described by stochastic nonlinear integro-differential equations. With the help of Lyapunov functional, semi-martingale theory, and inequality techniques, some novel sufficient conditions on pth moment exponential stability and almost sure exponential stability for SCGNN are given. Furthermore, as byproducts of our main results, some sufficient conditions for checking stability of deterministic CGNNs with unbounded distributed delays have been established. Especially, even when the spectral radius of the coefficient matrix is greater than 1, in some cases our theory is also effective. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
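The kind of moment behavior the paper certifies can be visualized by simulating a stochastic differential equation with Euler–Maruyama. The scalar instance below (amplification a(x)=1, behaved function b(x)=x, activation tanh, multiplicative white noise) is an assumed stable toy, not the paper's general delayed Cohen–Grossberg system.

```python
import numpy as np

def simulate_paths(n_paths=2000, T=5.0, dt=0.001, sigma=0.3, seed=3):
    # Euler-Maruyama for dx = -[x - 0.5 tanh(x)] dt + sigma x dW
    # (a scalar Cohen-Grossberg-type SDE; parameters are illustrative)
    rng = np.random.default_rng(seed)
    x = np.ones(n_paths)
    for _ in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + dt * (-(x - 0.5 * np.tanh(x))) + sigma * x * dW
    return x

x_T = simulate_paths()
msq = float(np.mean(x_T ** 2))
```

Since the drift dominates the noise intensity here, the second moment decays exponentially from its initial value of 1 — the pth-moment exponential stability (p = 2) that the paper's conditions guarantee for the full networks.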
43. Uniformly Stable Backpropagation Algorithm to Train a Feedforward Neural Network.
- Author
-
de Jesus Rubio, José, Angelov, Plamen, and Pacheco, Jaime
- Subjects
BACK propagation ,ARTIFICIAL neural networks ,ALGORITHMS ,FEEDFORWARD control systems ,UNIFORM distribution (Probability theory) ,PREDICTION models ,DISCRETE-time systems ,NONLINEAR systems - Abstract
Neural networks (NNs) have numerous applications to online processes, but the problem of stability is rarely discussed. This is an extremely important issue because, if the stability of a solution is not guaranteed, the equipment that is being used can be damaged, which can also cause serious accidents. It is true that in some research papers this problem has been considered, but this concerns continuous-time NNs only. At the same time, there are many systems that are better described in the discrete time domain such as population of animals, the annual expenses in an industry, the interest earned by a bank, or the prediction of the distribution of loads stored every hour in a warehouse. Therefore, it is of paramount importance to consider the stability of the discrete-time NN. This paper makes several important contributions. 1) A theorem is stated and proven which guarantees uniform stability of a general discrete-time system. 2) It is proven that the backpropagation (BP) algorithm with a new time-varying rate is uniformly stable for online identification and the identification error converges to a small zone bounded by the uncertainty. 3) It is proven that the weights' error is bounded by the initial weights' error, i.e., overfitting is eliminated in the proposed algorithm. 4) The BP algorithm is applied to predict the distribution of loads that a transelevator receives from a trailer and places in the deposits in a warehouse every hour, so that the deposits in the warehouse are reserved in advance using the prediction results. 5) The BP algorithm is compared with the recursive least square (RLS) algorithm and with the Takagi–Sugeno type fuzzy inference system in the problem of predicting the distribution of loads in a warehouse, showing that the first two are stable and the third is unstable. 6) The BP algorithm is compared with the RLS algorithm and with the Kalman filter algorithm in a synthetic example. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
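The basic mechanism — backpropagation driven by a learning rate that varies over time — can be sketched on a one-hidden-layer network. The decaying rate schedule below is a generic assumption for illustration; the paper's specific stable time-varying rate and its uniform-stability bounds are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy regression task: fit y = sin(pi x) with a one-hidden-layer tanh network
X = rng.uniform(-1, 1, (200, 1))
Y = np.sin(np.pi * X)

W1 = rng.standard_normal((1, 10)) * 0.5
b1 = np.zeros(10)
W2 = rng.standard_normal((10, 1)) * 0.5
b2 = np.zeros(1)

def loss():
    H = np.tanh(X @ W1 + b1)
    return float(np.mean((H @ W2 + b2 - Y) ** 2))

loss0 = loss()
for k in range(2000):
    eta = 0.5 / (1.0 + 0.01 * k)            # time-varying (decaying) rate
    H = np.tanh(X @ W1 + b1)
    E = (H @ W2 + b2 - Y) / len(X)          # scaled output error
    gW2 = H.T @ E
    gH = (E @ W2.T) * (1 - H ** 2)          # backpropagated through tanh
    gW1 = X.T @ gH
    W2 -= eta * gW2; b2 -= eta * E.sum(axis=0)
    W1 -= eta * gW1; b1 -= eta * gH.sum(axis=0)
loss1 = loss()
```

Shrinking the rate over time is one classical way to keep online updates bounded; the paper's contribution is proving uniform stability and bounded weight error for its particular rate.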
44. Computing and Analyzing the Sensitivity of MLP Due to the Errors of the i.i.d. Inputs and Weights Based on CLT.
- Author
-
Yang, Sheng-Sung, Ho, Chia-Lu, and Siu, Sammy
- Subjects
PERCEPTRONS ,SENSITIVITY analysis ,ALGORITHMS ,CENTRAL limit theorem ,NEURONS ,MEASUREMENT errors ,ARTIFICIAL neural networks ,GAUSSIAN distribution - Abstract
In this paper, we propose an algorithm based on the central limit theorem to compute the sensitivity of the multilayer perceptron (MLP) due to the errors of the inputs and weights. For simplicity and practicality, all inputs and weights studied here are independently identically distributed (i.i.d.). The theoretical results derived from the proposed algorithm show that the sensitivity of the MLP is affected by the number of layers and the number of neurons adopted in each layer. To prove the reliability of the proposed algorithm, some experimental results of the sensitivity are also presented, and they match the theoretical ones. The good agreement between the theoretical results and the experimental results verifies the reliability and feasibility of the proposed algorithm. Furthermore, the proposed algorithm can also be applied to compute precisely the sensitivity of the MLP with any available activation functions and any types of i.i.d. inputs and weights. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
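An empirical counterpart to the paper's analytical computation is a Monte Carlo estimate: perturb the i.i.d. inputs and weights and measure the output deviation. The architecture sizes and perturbation level below are assumptions; this sketch estimates numerically what the CLT-based algorithm computes in closed form.

```python
import numpy as np

rng = np.random.default_rng(5)

def mlp_output(x, weights):
    h = x
    for W in weights:
        h = np.tanh(h @ W)
    return h

def sensitivity(n_layers, n_units=20, sigma=0.01, n_mc=2000):
    # Mean absolute output deviation under i.i.d. Gaussian perturbations
    # of both inputs and weights (Monte Carlo stand-in for the CLT result)
    weights = [rng.standard_normal((n_units, n_units)) / np.sqrt(n_units)
               for _ in range(n_layers)]
    x = rng.standard_normal(n_units)
    y0 = mlp_output(x, weights)
    devs = []
    for _ in range(n_mc):
        xp = x + rng.normal(0, sigma, x.shape)
        wp = [W + rng.normal(0, sigma, W.shape) for W in weights]
        devs.append(np.mean(np.abs(mlp_output(xp, wp) - y0)))
    return float(np.mean(devs))

s2, s4 = sensitivity(2), sensitivity(4)
```

Running this for different depths and widths reproduces the qualitative finding that sensitivity depends on the number of layers and the number of neurons per layer.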
45. New Approach for the Identification and Validation of a Nonlinear F/A-18 Model by Use of Neural Networks.
- Author
-
Boely, Nicolas and Botez, Ruxandra Mihaela
- Abstract
This paper presents a new approach for identifying and validating the F/A-18 aeroservoelastic model, based on flight flutter tests. The neural network (NN), trained with five different flight flutter cases, is validated using data from 11 other flight flutter tests (FFTs). A total of 16 FFT cases were obtained for all three flight regimes (subsonic, transonic, and supersonic) at Mach numbers ranging between 0.85 and 1.30 and at altitudes between 5000 and 25 000 ft. The results obtained highlight the efficiency of the multilayer perceptron NN in model identification. Optimizing the NN requires balancing two properties: hidden-layer size reduction and the performance of a four-layer NN. This paper shows that a four-layer NN with only 16 neurons is enough to create an accurate model. The fit coefficients were higher than 92% for both the identification and the validation test data, thus demonstrating the accuracy of the NN. [ABSTRACT FROM PUBLISHER]
- Published
- 2010
- Full Text
- View/download PDF
46. Equivalences Between Neural-Autoregressive Time Series Models and Fuzzy Systems.
- Author
-
Aznarte, José Luis and Benítez, José Manuel
- Subjects
AUTOREGRESSION (Statistics) ,FUZZY systems ,TIME series analysis ,SOFT computing ,SWITCHING theory ,ARTIFICIAL neural networks - Abstract
Soft computing (SC) emerged as an integrating framework for a number of techniques that could complement one another quite well (artificial neural networks, fuzzy systems, evolutionary algorithms, probabilistic reasoning). Since its inception, a distinctive goal has been to uncover the deep relationships among its constituent techniques. This paper considers two wide families of SC models. On the one hand, the regime-switching autoregressive paradigm is a recent development in statistical time series modeling, and it includes a set of models closely related to artificial neural networks. On the other hand, we consider fuzzy rule-based systems in the framework of time series analysis. This paper discloses original results establishing functional equivalences between models of these two classes, and hence opens the door to a productive line of research where results and techniques from one area can be applied in the other. As a consequence of the equivalences presented in this paper, we prove the asymptotic stationarity of a class of fuzzy rule-based systems. Simulations based on information criteria show the importance of the selection of the proper membership function. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
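One concrete instance of such a functional equivalence can be checked numerically: a first-order logistic smooth transition autoregressive (LSTAR) model coincides with a two-rule Takagi–Sugeno–Kang fuzzy system that uses sigmoidal memberships and linear consequents. The coefficient values below are arbitrary; the identity holds for any choice.

```python
import math

def lstar(y_lag, phi1, phi2, gamma, c):
    # LSTAR(1): smooth mixture of two AR(1) regimes via a logistic transition
    G = 1.0 / (1.0 + math.exp(-gamma * (y_lag - c)))
    return (1 - G) * (phi1[0] + phi1[1] * y_lag) + G * (phi2[0] + phi2[1] * y_lag)

def fuzzy_tsk(y_lag, phi1, phi2, gamma, c):
    # Two-rule TSK system: sigmoidal memberships, linear consequents,
    # weighted-average defuzzification
    mu2 = 1.0 / (1.0 + math.exp(-gamma * (y_lag - c)))
    mu1 = 1.0 - mu2
    r1 = phi1[0] + phi1[1] * y_lag
    r2 = phi2[0] + phi2[1] * y_lag
    return (mu1 * r1 + mu2 * r2) / (mu1 + mu2)

params = ((0.2, 0.7), (-0.1, 0.3), 2.0, 0.5)
pairs = [(lstar(y, *params), fuzzy_tsk(y, *params)) for y in (-1.0, 0.0, 0.5, 2.0)]
```

Because the two memberships sum to one, the defuzzified output is algebraically identical to the LSTAR prediction — exactly the kind of equivalence that lets time-series results (such as asymptotic stationarity) transfer to fuzzy systems.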
47. Design of Recurrent Neural Networks for Solving Constrained Least Absolute Deviation Problems.
- Author
-
Xiaolin Hu, Changyin Sun, and Bo Zhang
- Subjects
ARTIFICIAL neural networks ,ARTIFICIAL intelligence ,LEAST absolute deviations (Statistics) ,LEAST squares ,MATHEMATICAL optimization ,CHEBYSHEV approximation - Abstract
Recurrent neural networks for solving constrained least absolute deviation (LAD) problems or L1-norm optimization problems have attracted much interest in recent years. But so far most neural networks can only deal with some special linear constraints efficiently. In this paper, two neural networks are proposed for solving LAD problems with various linear constraints including equality, two-sided inequality, and bound constraints. When tailored to solve some special cases of LAD problems in which not all types of constraints are present, the two networks can yield simpler architectures than most existing ones in the literature. In particular, for solving problems with both equality and one-sided inequality constraints, an additional network is devised. All of the networks proposed in this paper are rigorously shown to be capable of solving the corresponding problems. The different networks designed for solving the same types of problems possess the same structural complexity, which is due to the fact that these architectures share the same computing blocks and only differ in connections between some blocks. By this means, some flexibility for circuits realization is provided. Numerical simulations are carried out to illustrate the theoretical results and compare the convergence rates of the networks. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
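What an LAD solver buys over least squares — robustness to outliers — is easy to show with an offline baseline. The iteratively reweighted least squares (IRLS) routine below is a conventional stand-in for illustration, not the recurrent-network solvers of the paper, and it handles only the unconstrained problem.

```python
import numpy as np

def lad_irls(A, b, iters=50, eps=1e-8):
    # IRLS for min ||Ax - b||_1: reweight rows by 1/sqrt(|residual|)
    # so the weighted L2 problem approximates the L1 objective
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.sqrt(np.abs(A @ x - b) + eps)
        x = np.linalg.lstsq(A * w[:, None], w * b, rcond=None)[0]
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
b[:5] += 20.0                                # gross outliers in 5 rows

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]  # least squares: pulled by outliers
x_lad = lad_irls(A, b)                       # LAD: essentially ignores them
```

The recurrent networks in the paper solve the same kind of problem in continuous time, with equality, two-sided inequality, and bound constraints added.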
48. Exponential Stabilization of Neural Networks with Various Activation Functions and Mixed Time-Varying Delays.
- Author
-
Phat, V. N. and Trinh, H.
- Subjects
ARTIFICIAL neural networks ,MATHEMATICAL functions ,DELAY differential equations ,FUNCTIONALS ,EXPONENTS ,MATRICES (Mathematics) - Abstract
This paper presents some results on the global exponential stabilization for neural networks with various activation functions and time-varying continuously distributed delays. Based on augmented time-varying Lyapunov–Krasovskii functionals, new delay-dependent conditions for the global exponential stabilization are obtained in terms of linear matrix inequalities. A numerical example is given to illustrate the feasibility of our results. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
49. Novel Maximum-Margin Training Algorithms for Supervised Neural Networks.
- Author
-
Ludwig, Oswaldo and Nunes, Urbano
- Subjects
ARTIFICIAL neural networks ,EVOLUTIONARY computation ,ALGORITHMS ,INFORMATION theory ,PATTERN recognition systems ,PATTERN perception ,SUPERVISED learning - Abstract
This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function, through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in the case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while overcoming the complexity involved in solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N), while usual SVM training methods have time complexity O(N³) and space complexity O(N²), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stop criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by MICI, MMGDX, and Levenberg-Marquardt (LM), respectively. The resulting neural network was named assembled neural network (ASNN). Benchmark data sets of real-world problems have been used in experiments that enable a comparison with other state-of-the-art classifiers. The results provide evidence of the effectiveness of our methods regarding accuracy, AUC, and balanced error rate. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
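The maximal-margin principle itself can be shown on a linear toy problem: minimizing a regularized hinge objective by gradient descent pushes the separating hyperplane toward the maximum geometric margin. This linear sketch, with assumed data and step size, illustrates the principle only, not the MMGDX algorithm (which backpropagates an MM objective through an MLP's hidden layer).

```python
import numpy as np

rng = np.random.default_rng(7)

# Linearly separable toy set: two Gaussian blobs with labels -1 / +1
X = np.vstack([rng.normal(-2, 0.4, (50, 2)), rng.normal(2, 0.4, (50, 2))])
y = np.array([-1.0] * 50 + [1.0] * 50)

w = rng.standard_normal(2)
b = 0.0

def geometric_margin():
    return float(np.min(y * (X @ w + b)) / np.linalg.norm(w))

for _ in range(3000):
    scores = y * (X @ w + b)
    mask = scores < 1.0                              # only margin violators
    gw = w - (y[mask, None] * X[mask]).sum(axis=0)   # grad of 0.5||w||^2 + hinge
    gb = -y[mask].sum()
    w -= 0.01 * gw
    b -= 0.01 * gb

m = geometric_margin()
```

After training, every point sits on the correct side with a positive geometric margin close to the maximum the data allows — the quantity MMGDX "stretches to its limit" in the nonlinear, hidden-layer setting.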
50. Dynamic Analysis of a General Class of Winner-Take-All Competitive Neural Networks.
- Author
-
Yuguang Fang, Cohen, Michael A., and Kincaid, Thomas G.
- Subjects
ARTIFICIAL neural networks ,NEURAL circuitry ,COGNITIVE neuroscience ,SEMICONDUCTORS ,METAL oxide semiconductor field-effect transistors ,ARTIFICIAL intelligence - Abstract
This paper studies a general class of dynamical neural networks with lateral inhibition, exhibiting winner-take-all (WTA) behavior. These networks are motivated by a metal-oxide-semiconductor field effect transistor (MOSFET) implementation of neural networks, in which mutual competition plays a very important role. We show that for a fairly general class of competitive neural networks, WTA behavior exists. Sufficient conditions for the network to have a WTA equilibrium are obtained, and rigorous convergence analysis is carried out. The conditions for the network to have the WTA behavior obtained in this paper provide design guidelines for the network implementation and fabrication. We also demonstrate that whenever the network gets into the WTA region, it will stay in that region and settle down exponentially fast to the WTA point. This provides a speeding procedure for the decision making: as soon as it gets into the region, the winner can be declared. Finally, we show that this WTA neural network has a self-resetting property, and a resetting principle is proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
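The core lateral-inhibition mechanism is easy to demonstrate in discrete time: each unit is suppressed in proportion to the summed activity of its competitors until only the strongest survives. This iterative sketch, with an assumed inhibition strength, conveys the competition dynamics only; it is not the continuous-time MOSFET network analyzed in the paper.

```python
def winner_take_all(activities, inhibition=0.05, iters=500):
    # Discrete-time lateral inhibition: x_i <- max(0, x_i - eps * sum_{j != i} x_j)
    # Losers are driven to zero; the largest unit survives as the winner.
    x = list(activities)
    for _ in range(iters):
        total = sum(x)
        x = [max(0.0, xi - inhibition * (total - xi)) for xi in x]
    return x

x = winner_take_all([0.50, 0.80, 0.30, 0.65])
winner = max(range(len(x)), key=lambda i: x[i])
```

Once every competitor has been extinguished, the winner receives no inhibition and its activity stabilizes, mirroring the paper's result that trajectories entering the WTA region settle to the WTA point.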