150 results
Search Results
52. Multi-level Independent Component Analysis.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Kim, Woong Myung, Park, Chan Ho, and Lee, Hyon Soo
- Abstract
This paper presents a new method that uses a multi-level density estimation technique to generate the score function in independent component analysis (ICA). The score function is closely related to the density function in information-theoretic ICA. We address the mismatch of marginal densities by controlling the number of kernels, and we add a constraint that satisfies a sufficient condition guaranteeing asymptotic stability. Multi-level ICA uses kernel density estimation to derive the score function adaptively from the original signals. To speed up kernel density estimation, we apply the FFT after rewriting the density formula in convolution form. The proposed multi-level score function generation method reduces the estimation error, i.e., the density difference between the recovered and original signals. Compared with other existing algorithms for the blind source separation problem, we estimate a density function more similar to that of the original signals and obtain improved performance in SNR measurements. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
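The FFT speed-up described in the abstract above (rewriting the kernel density formula in convolution form) can be sketched as follows. This is a minimal illustration under assumed details (uniform grid, Gaussian kernel, nearest-bin histogramming), not the authors' implementation:

```python
import numpy as np

def fft_kde(samples, grid, bandwidth):
    """Gaussian kernel density estimate on a uniform grid via FFT.

    Binning the samples turns the KDE sum into a discrete convolution of
    bin counts with a sampled kernel, which the FFT evaluates in
    O(n log n) rather than O(n_samples * n_grid) for direct evaluation.
    """
    n = len(grid)
    step = grid[1] - grid[0]
    # Nearest-bin histogram of the samples onto the evaluation grid.
    counts, _ = np.histogram(
        samples, bins=n, range=(grid[0] - step / 2, grid[-1] + step / 2))
    # Gaussian kernel sampled on the same spacing, centred at index n // 2.
    t = (np.arange(n) - n // 2) * step
    kernel = np.exp(-0.5 * (t / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    # Linear convolution via FFT (zero-padded to avoid wrap-around).
    m = 2 * n
    conv = np.fft.irfft(np.fft.rfft(counts, m) * np.fft.rfft(kernel, m), m)
    # Undo the kernel's centring offset and normalise by the sample count.
    return conv[n // 2 : n // 2 + n] / len(samples)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 5000)
grid = np.linspace(-5.0, 5.0, 256)
dens = fft_kde(x, grid, bandwidth=0.3)
```

The resulting curve integrates to roughly one and peaks near the true mode; the grid size and bandwidth here are arbitrary illustrative choices.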
53. An Adaptive Support Vector Machine Learning Algorithm for Large Classification Problem.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Yu, Shu, Yang, Xiaowei, Hao, Zhifeng, and Liang, Yanchun
- Abstract
Based on incremental and decremental learning strategies, an adaptive support vector machine learning algorithm (ASVM) is presented for large classification problems. In the proposed algorithm, the incremental and decremental procedures are performed alternately, adaptively forming a small working set that covers most of the information in the training set and overcomes the loss of sparseness in the least squares support vector machine (LS-SVM). The classifier is constructed from this working set, whose size is in general much smaller than that of the training set. The proposed algorithm can therefore train on data sets quickly and test them effectively with little loss of accuracy. To examine the training speed and generalization performance of the proposed algorithm, we apply both ASVM and LS-SVM to seven UCI datasets and a benchmark problem. Experimental results show that the novel algorithm is much faster than LS-SVM and loses little accuracy in solving large classification problems. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
54. A Boosting SVM Chain Learning for Visual Information Retrieval.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Yuan, Zejian, Yang, Lei, Qu, Yanyun, Liu, Yuehu, and Jia, Xinchun
- Abstract
Training strategies for negative sample collection and robust learning algorithms for large-scale sample sets are critical issues in visual information retrieval. In this paper, an improved one-class support vector classifier (SVC) and its boosting chain learning algorithm are proposed. Unlike the standard one-class SVC, this algorithm takes negative sample information into account and integrates bootstrap training and boosting into its learning procedure. The performance of the SVC can be successively boosted by repeated importance sampling of the large negative set. Compared with traditional methods, it achieves a higher detection rate and a lower false positive rate, and is suitable for object detection and information retrieval. Experimental results show that the proposed boosting SVM chain learning method is efficient and effective. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
55. Binary Tree Support Vector Machine Based on Kernel Fisher Discriminant for Multi-classification.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Liu, Bo, Yang, Xiaowei, and Hao, Zhifeng
- Abstract
To improve the accuracy of conventional algorithms for multi-class classification, we propose a binary tree support vector machine based on the kernel Fisher discriminant. To examine the training accuracy and generalization performance of the proposed algorithm, One-against-All, One-against-One, and the proposed algorithm are applied to five UCI data sets. The experimental results show that the training and testing accuracy of the proposed algorithm is generally the best, and that no unclassifiable regions exist in the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
56. Multi-scale Support Vector Machine for Regression Estimation.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Yang, Zhen, Guo, Jun, Xu, Weiran, Nie, Xiangfei, Wang, Jian, and Lei, Jianjun
- Abstract
Recently, SVMs have been widely applied to regression estimation, but the existing algorithms leave the choice of kernel type and kernel parameters to the user. This is the main reason for degraded regression performance, especially on complicated data such as nonlinear and non-stationary data. By introducing the empirical mode decomposition (EMD) method, with which any complicated data set can be decomposed into a finite and often small number of intrinsic mode functions (IMFs) based on the local characteristic time scale of the data, this paper proposes an important extension of the SVM method: a multi-scale support vector machine based on EMD, in which several kernels of different scales are used simultaneously to approximate the target function at different scales. Experimental results demonstrate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
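One simple way to realize "several kernels of different scales used simultaneously", as the abstract above describes, is a non-negative weighted sum of RBF kernels with different widths, which is itself a valid kernel because sums of positive-definite kernels are positive definite. The widths and weights below are illustrative assumptions; the paper's EMD-driven scale selection is not reproduced:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def multiscale_kernel(X, Y, gammas, weights):
    """Non-negative weighted sum of RBF kernels at several scales."""
    K = np.zeros((len(X), len(Y)))
    for g, w in zip(gammas, weights):
        K += w * rbf_kernel(X, Y, g)  # each term handles one scale
    return K

X = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
# A broad kernel (gamma=0.5) plus a narrow one (gamma=50) with assumed weights.
K = multiscale_kernel(X, X, gammas=[0.5, 50.0], weights=[0.7, 0.3])
```

Such a matrix can be passed to any kernel machine that accepts a precomputed Gram matrix; it stays symmetric and positive semidefinite by construction.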
57. A Multiresolution Wavelet Kernel for Support Vector Regression.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Han, Feng-Qing, Wang, Da-Cheng, Li, Chuan-Dong, and Liao, Xiao-Feng
- Abstract
In this paper a multiresolution wavelet kernel function (MWKF) is proposed for support vector regression. Unlike traditional SVR, dimensionality reduction is performed before the dimension is increased. The nonlinear mapping Φ(x) from the input space S to the feature space has an explicit expression based on dimensionality reduction and wavelet multiresolution analysis, and the wavelet kernel function can be represented as an inner product. This method guarantees that the quadratic program of support vector regression has a feasible solution, and no parameter selection is needed for the kernel function. Numerical experiments demonstrate the effectiveness of the method. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
58. Mutual Conversion of Regression and Classification Based on Least Squares Support Vector Machines.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Jiang, Jing-Qing, Song, Chu-Yi, Wu, Chun-Guo, Liang, Yang-Chun, Yang, Xiao-Wei, and Hao, Zhi-Feng
- Abstract
Classification and regression are among the most interesting problems in pattern recognition. A regression problem can be transformed into a binary classification problem, and the least squares support vector machine can then be used to solve the classification problem; the optimal hyperplane is the regression function. In this paper, a one-step method is presented for the multi-category problem. The proposed method converts the classification problem into a function regression problem and solves the converted problem with least squares support vector machines. The novel method classifies the samples of all categories simultaneously by solving only a set of linear equations. Numerical experiments are performed and good performance is obtained. Simulation results show that regression and classification can be converted into each other on the basis of least squares support vector machines. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
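The claim above that the method works "only by solving a set of linear equations" refers to the standard LS-SVM property: with equality constraints instead of inequalities, training reduces to one KKT linear system. A minimal binary-classification sketch follows; the RBF kernel, the parameter values, and the toy data are assumptions, and the paper's multi-category conversion is not reproduced:

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Train an LS-SVM classifier by solving one linear system.

    The LS-SVM dual reduces to the KKT system
        [ 0   1^T         ] [ b     ]   [ 0 ]
        [ 1   K + I/gamma ] [ alpha ] = [ y ]
    so training is a single call to a linear solver.
    """
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))      # RBF Gram matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma       # regularized kernel block
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                  # bias b, coefficients alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.sign(np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b)

# Tiny separable example with labels in {-1, +1}.
X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
b, alpha = lssvm_train(X, y)
pred = lssvm_predict(X, alpha, b, X)
```

The first row of the system enforces the constraint that the alpha coefficients sum to zero, which the solver satisfies automatically.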
59. A Fast and Sparse Implementation of Multiclass Kernel Perceptron Algorithm.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, and Xu, Jianhua
- Abstract
The original multiclass kernel perceptron algorithm is time-consuming in its training and discriminating procedures. In this paper, the reduced kernel-based discriminant function of each class is defined only by the training samples from that class plus a bias term. Consequently, apart from the bias terms, the number of variables to be solved always equals the total number of training samples regardless of the number of classes, and the final discriminant functions are sparse. This strategy speeds up both the training and discriminating procedures effectively. Furthermore, an additional iterative procedure with a decreasing learning rate is designed to improve classification accuracy in the nonlinearly separable case. Experimental results on five benchmark datasets using ten-fold cross validation show that our modified training methods run two to five times as fast as the original algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
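A rough sketch of the class-local expansion idea from the abstract above: each class's discriminant uses only kernels on that class's own training samples plus a bias, so the number of coefficients stays equal to the number of training samples regardless of the class count. The update rule, kernel, and data below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * ((a - b) ** 2).sum())

def train_reduced_perceptron(X, y, n_classes, epochs=50, lr=1.0):
    """Multiclass kernel perceptron with class-local expansions: class c's
    discriminant uses only kernels on class-c samples plus a bias, giving
    one coefficient per sample independent of the number of classes."""
    n = len(X)
    K = np.array([[rbf(a, b) for b in X] for a in X])
    alpha = np.zeros(n)
    bias = np.zeros(n_classes)

    def predict_one(i):
        s = bias.copy()
        for c in range(n_classes):
            s[c] += K[i, y == c] @ alpha[y == c]
        return int(np.argmax(s))

    for _ in range(epochs):
        mistakes = 0
        for i in range(n):
            p = predict_one(i)
            if p != y[i]:
                alpha[i] += lr        # strengthen the true class's expansion
                bias[y[i]] += lr
                bias[p] -= lr
                mistakes += 1
        if mistakes == 0:             # converged on the training set
            break
    return K, alpha, bias

X = np.array([[0.0], [0.2], [2.0], [2.2], [4.0], [4.2]])
y = np.array([0, 0, 1, 1, 2, 2])
K, alpha, bias = train_reduced_perceptron(X, y, n_classes=3)
pred = np.array([int(np.argmax([bias[c] + K[i, y == c] @ alpha[y == c]
                                for c in range(3)])) for i in range(len(X))])
```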
60. Fuzzy Rule Extraction Using Robust Particle Swarm Optimization.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Mukhopadhyay, Sumitra, and Mandal, Ajit K.
- Abstract
Automatic fuzzy rule extraction usually assumes the realization of fuzzy if-then rules with a pre-assigned structure rather than an optimal one. In this paper, Particle Swarm Optimization (PSO) is used to evolve the structure and the parameters of the fuzzy rule base simultaneously. However, existing PSO-based adaptation employs randomness, which makes the rate of convergence dependent on the initial states, and the end result cannot be reproduced for a pre-assigned number of iterations. The algorithm has been modified by removing the randomness in parameter learning, making it very robust. The scheme provides flexibility in extracting the optimal set of fuzzy rules for a prescribed residual error in function approximation and prediction. Simulation studies and comprehensive analysis demonstrate that both efficient learning and structure development of the fuzzy system can be achieved by the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
61. Support Vector Machines Ensemble Based on Fuzzy Integral for Classification.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Yan, Genting, Ma, Guangfu, and Zhu, Liangkuan
- Abstract
Support vector machine (SVM) ensembles have recently been proposed to improve classification performance. However, currently used fusion strategies do not evaluate the importance of the output of each individual component SVM classifier when combining the component predictions into the final decision. An SVM ensemble method based on the fuzzy integral is presented in this paper to address this problem. The method aggregates the outputs of the separate component SVMs weighted by the importance of each component SVM, which is subjectively assigned in the spirit of fuzzy logic. Simulation results demonstrate that the proposed method outperforms both a single SVM and the traditional SVM aggregation technique of majority voting in terms of classification accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
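Fuzzy-integral aggregation of classifier outputs, as in the abstract above, is commonly done with the Sugeno integral over a λ-fuzzy measure built from per-classifier importances ("densities"). The abstract does not specify which fuzzy integral the authors use, so the following is a sketch under that common assumption:

```python
import numpy as np

def lambda_measure(densities):
    """Solve prod(1 + lam * g_i) = 1 + lam for the Sugeno lambda-measure.

    When the densities do not sum to 1, there is a unique root lam > -1,
    lam != 0, found here by simple bisection."""
    g = np.asarray(densities, dtype=float)
    s = g.sum()
    if abs(s - 1.0) < 1e-12:
        return 0.0                                  # additive measure
    f = lambda lam: np.prod(1.0 + lam * g) - 1.0 - lam
    lo, hi = ((-1.0 + 1e-9, -1e-9) if s > 1.0 else (1e-9, 1.0))
    if s < 1.0:
        while f(hi) < 0.0:                          # bracket the root
            hi *= 2.0
    for _ in range(200):                            # bisection
        mid = 0.5 * (lo + hi)
        if (f(lo) > 0) == (f(mid) > 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sugeno_integral(scores, densities):
    """Aggregate per-classifier scores for one class via the Sugeno integral."""
    lam = lambda_measure(densities)
    order = np.argsort(scores)[::-1]                # scores in descending order
    h, g = np.asarray(scores)[order], np.asarray(densities)[order]
    g_cum, best = 0.0, 0.0
    for h_i, g_i in zip(h, g):
        g_cum = g_i + g_cum + lam * g_i * g_cum     # measure of top-i subset
        best = max(best, min(h_i, g_cum))
    return best

# Three component SVMs' confidences for one class, with assumed importances.
score = sugeno_integral([0.9, 0.6, 0.2], densities=[0.4, 0.3, 0.2])
```

The class with the largest aggregated score would be chosen as the ensemble's decision.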
62. SVMV - A Novel Algorithm for the Visualization of SVM Classification Results.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Wang, Xiaohong, Wu, Sitao, Wang, Xiaoru, and Li, Qunzhan
- Abstract
In this paper, a novel algorithm called support vector machine visualization (SVMV) is proposed. The SVMV algorithm is based on the support vector machine (SVM) and the self-organizing map (SOM). High-dimensional data and binary classification results can be visualized in a low-dimensional space. Compared with traditional visualization algorithms such as SOM and Sammon's mapping, the SVMV algorithm delivers better visualization of classification results. Experimental results corroborate the effectiveness and usefulness of SVMV. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
63. Fuzzy Support Vector Machines Based on Spherical Regions.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Liu, Hong-Bing, Xiong, Sheng-Wu, and Niu, Xiao-Xiao
- Abstract
Fuzzy support vector machines (FSVMs) based on spherical regions are proposed in this paper. First, the center of the sphere is determined from all the training data. Second, the membership functions are defined using the distances between each data point and the center of the sphere. Third, with a suitable parameter λ, FSVMs are formed on the spherical regions. A one-against-one decision strategy is adopted so that the proposed FSVMs can be extended to multi-class problems. To verify the superiority of the proposed FSVMs, traditional two-class and multi-class problems from machine learning benchmark datasets are used to test their feasibility and performance. The experimental results indicate that the new approach not only achieves higher precision but also reduces the number of training data and the running time. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
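The first two steps above (a common center computed from all training data, memberships defined by distance to it) might look like the following. The specific linear decay formula is an assumption, since the abstract does not give the membership function:

```python
import numpy as np

def spherical_memberships(X, eps=1e-8):
    """Assign each training point a membership that decays linearly with
    its distance from the common center (the mean of all training data):
    1 at the center, approaching 0 at the sphere's surface."""
    center = X.mean(axis=0)
    d = np.linalg.norm(X - center, axis=1)
    r = d.max()                      # radius of the enclosing sphere
    return 1.0 - d / (r + eps)

X = np.array([[0.0, 0.0], [1.0, 0.0], [-1.0, 0.0], [0.0, 2.0]])
m = spherical_memberships(X)
```

In an FSVM these memberships would weight each point's slack penalty, so points near the sphere's surface (likely outliers) influence the decision boundary less.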
64. Building Support Vector Machine Alternative Using Algorithms of Computational Geometry.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Bundzel, Marek, Kasanický, Tomáš, and Frankovič, Baltazár
- Abstract
The task of pattern recognition is to divide a feature space into regions separating the training examples belonging to different classes. Support Vector Machines (SVM) identify the most borderline examples, called support vectors, and use them to determine discrimination hyperplanes (hyper-curves). In this paper a pattern recognition method is proposed as an alternative to the SVM algorithm. Support vectors are identified using selected methods of computational geometry in the original feature space, i.e., not in the transformed space determined in part by the kernel function of the SVM. The proposed algorithm nevertheless permits the use of kernel functions. The separation task is reduced to a search for an optimal separating hyperplane, or a Winner Takes All (WTA) principle is applied. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
65. Cooperative Clustering for Training SVMs.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Tian, Shengfeng, Mu, Shaomin, and Yin, Chuanhuan
- Abstract
Support vector machines are currently very popular approaches to supervised learning. Unfortunately, the computational load of the training and classification procedures increases drastically with the size of the training data set. In this paper, a method called cooperative clustering is proposed. With this procedure, a set of data points of pre-determined size near the border between two classes is determined and taken as the set of support vectors, and the support vector machine is trained on this set. This approach improves training and classification efficiency with little effect on generalization performance. It can also be used to reduce the number of support vectors in regression problems. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
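The border-point idea above could be approximated as below, keeping only the points of each class nearest to the other class as the candidate support vector set. This is a simplification: the cooperative clustering procedure itself is not described in the abstract and is not reproduced here:

```python
import numpy as np

def border_candidates(X_pos, X_neg, k):
    """Pick the k points of each class closest to the other class; these
    near-border points serve as the candidate support vector set."""
    # Pairwise distances between the two classes.
    d = np.linalg.norm(X_pos[:, None, :] - X_neg[None, :, :], axis=-1)
    pos_idx = np.argsort(d.min(axis=1))[:k]   # positives nearest any negative
    neg_idx = np.argsort(d.min(axis=0))[:k]   # negatives nearest any positive
    return X_pos[pos_idx], X_neg[neg_idx]

# Toy data: one interior point per class sits far from the border.
X_pos = np.array([[2.0, 2.0], [5.0, 5.0], [2.5, 2.0]])
X_neg = np.array([[0.0, 0.0], [-4.0, -4.0], [0.5, 0.0]])
P, N = border_candidates(X_pos, X_neg, k=2)
```

An SVM trained on only `P` and `N` would typically recover almost the same decision boundary as one trained on the full set, since interior points rarely become support vectors.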
66. A Smoothing Multiple Support Vector Machine Model.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Jin, Huihong, Meng, Zhiqing, and Ning, Xuanxi
- Abstract
In this paper, we study a smoothing multiple support vector machine (SVM) using an exact penalty function. First, we formulate the optimization problem of the multiple SVM as an unconstrained, nonsmooth optimization problem via the exact penalty function. Then, we propose a twice-differentiable function to smooth the exact penalty function approximately, obtaining an unconstrained, smooth optimization problem. Error analysis shows that an approximate solution of the multiple SVM can be obtained by solving this smooth, unconstrained penalty problem. Finally, we give a corporate culture model using the multiple SVM as a practical example. Numerical experiments illustrate that the precision of our smoothing multiple SVM is better than that of an artificial neural network. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
67. Genetic Granular Kernel Methods for Cyclooxygenase-2 Inhibitor Activity Comparison.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Jin, Bo, and Zhang, Yan-Qing
- Abstract
How to design powerful and flexible kernels to improve system performance is an important topic in kernel-based classification. In this paper, we present a new granular kernel method to improve the performance of support vector machines (SVMs). In the system, genetic algorithms (GAs) are used to generate feature granules and to optimize them together with the fusions and parameters of the granular kernels. The new granular kernel method is applied to cyclooxygenase-2 inhibitor activity comparison. Experimental results show that the new method achieves better prediction accuracy than SVMs with traditional RBF kernels. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
68. Passivity Analysis of Dynamic Neural Networks with Different Time-Scales.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Sandoval, Alejandro Cruz, and Yu, Wen
- Abstract
Dynamic neural networks with different time-scales capture both fast and slow phenomena. Some applications require that the equilibrium points of the designed network be stable. In this paper, a passivity-based approach is used to derive stability conditions for dynamic neural networks with different time-scales. Several stability properties, such as passivity, asymptotic stability, input-to-state stability, and bounded-input bounded-output stability, are guaranteed in certain senses. Numerical examples are given to demonstrate the effectiveness of the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
69. A Kernel Optimization Method Based on the Localized Kernel Fisher Criterion.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Chen, Bo, Liu, Hongwei, and Bao, Zheng
- Abstract
It is widely recognized that the performance of kernel-based methods is governed by how well the selected kernel matches the data. Ideally the data would be linearly separable in the kernel-induced feature space, in which case the Fisher linear discriminant criterion can serve as a kernel optimization rule. In many applications, however, the data may not be linearly separable even after the kernel transformation; a nonlinear classifier is then preferred, and the Fisher criterion is clearly no longer the best choice as a kernel optimization rule. Motivated by this issue, we present a novel kernel optimization method that maximizes the local class linear separability in kernel space via a localized kernel Fisher criterion, increasing the local margins between embedded classes and thereby improving the classification performance of a nonlinear classifier in the kernel-induced feature space. Extensive experiments are carried out to evaluate the efficiency of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
70. Flexible Neural Tree for Pattern Recognition.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Li, Hai-Jun, Wang, Zheng-Xuan, Wang, Li-Min, and Yuan, Sen-Miao
- Abstract
This paper presents a novel induction model named the Flexible Neural Tree (FNT) for pattern recognition. The FNT uses a decision tree for basic analysis and a neural network for subsequent quantitative analysis. The pure information gain I(Xi;ϑ), defined as the test selection measure with which the FNT constructs the decision tree, can handle continuous attributes directly. When the information embodied in a neural network node reveals new attribute relations, the FNT extracts symbolic rules from the neural network to improve the decision process. Experimental studies on a set of natural domains show that the FNT has clear advantages in generalization ability. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
71. SLIT: Designing Complexity Penalty for Classification and Regression Trees Using the SRM Principle.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Yang, Zhou, Zhu, Wenjie, and Ji, Liang
- Abstract
Statistical learning theory has formulated the Structural Risk Minimization (SRM) principle, based on the functional form of a risk bound on the generalization performance of a learning machine. This paper addresses the application of this formula, which is equivalent to a complexity penalty, to model selection for decision trees, where the machine capacity of decision trees is quantified using an empirical approach. Experimental results show that, for both classification and regression problems, this novel decision tree pruning strategy performs better than alternative methods. We name classification and regression trees pruned by this methodology Statistical Learning Intelligent Trees (SLIT). [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
72. A Goal Programming Based Approach for Hidden Targets in Layer-by-Layer Algorithm of Multilayer Perceptron Classifiers.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Li, Yanlai, Wang, Kuanquan, and Li, Tao
- Abstract
The layer-by-layer (LBL) algorithm is one of the well-known training algorithms for multilayer perceptrons; it converges fast with low computational complexity. Unfortunately, when LBL calculates the desired hidden targets, a set of linear equations must be solved. If the determinant of the coefficient matrix is zero, the solution is not unique, which results in the stalling problem. Furthermore, a truncation error is introduced by inverting the sigmoid function. Based on goal programming, this paper proposes a new method for calculating the hidden targets: a satisfactory solution is provided through a goal programming model. Moreover, the truncation error can be avoided efficiently by assigning higher priority to the limitation of the variable domain. The effectiveness of the proposed method is demonstrated by computer simulation on a mushroom classification problem. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
73. Neuron Selection for RBF Neural Network Classifier Based on Multiple Granularities Immune Network.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Zhong, Jiang, Ye, Chun Xiao, Feng, Yong, Zhou, Ying, and Wu, Zhong Fu
- Abstract
The central problem in training a radial basis function (RBF) neural network is the selection of hidden layer neurons, including the selection of their centers and widths. In this paper, we propose to select hidden layer neurons based on a multiple granularities immune network. First, a multiple granularities immune network (MGIN) algorithm is employed to reduce the data, obtain the candidate hidden neurons, and construct an initial RBF network containing all candidate neurons. Second, a procedure for removing redundant neurons yields a smaller network. Experimental results show that the resulting network tends to generalize well. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
74. A Quantitative Comparison of Different MLP Activation Functions in Classification.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, and Shenouda, Emad A. M. Andrews
- Abstract
Multilayer perceptrons (MLPs) have proven very successful in many applications, including classification. The activation function is the source of the MLP's power, and its careful selection has a huge impact on network performance. This paper gives a quantitative comparison of the four most commonly used activation functions, including the Gaussian RBF network, over ten different real datasets. Results show that the sigmoid activation function substantially outperforms the other activation functions. Also, by using only the needed number of hidden units in the MLP, we improved its convergence time to be competitive with the RBF networks most of the time. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
75. An Adaptive Network Topology for Classification.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Xiong, Qingyu, Huang, Jian, Xian, Xiaodong, and Xiao, Qian
- Abstract
Constructive learning algorithms have proved to be powerful methods for training feedforward neural networks. In this paper, we present an adaptive network topology with a constructive learning algorithm. It consists of SOM and RBF networks, serving as a basic network and a cluster network respectively. The SOM network performs unsupervised learning to locate its output cells at suitable positions in the input space, and the weight vectors of these output cells are transmitted to the hidden cells of the RBF network as the centers of the RBF activation functions. A one-to-one correspondence is thus established between the SOM output cells and the RBF hidden cells. The RBF network performs supervised training using the delta rule, and its output errors determine, according to a rule, where to insert a new SOM cell. This allows the RBF cells to grow as the SOM output cells increase, until a performance criterion is fulfilled or a desired network size is reached. Simulation results on the two-spirals benchmark show that the proposed adaptive network structure achieves good performance and generalization. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
76. Associative Memories Based on Discrete-Time Cellular Neural Networks with One-Dimensional Space-Invariant Templates.
- Author
-
Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Zeng, Zhigang, and Wang, Jun
- Abstract
In this paper, discrete-time cellular neural networks with one-dimensional space-invariant templates are designed as associative memories. The obtained results enable both heteroassociative and autoassociative memories to be synthesized by assuring the global asymptotic stability of the equilibrium point and by feeding data via external inputs rather than initial conditions. It is shown that the criteria herein ensure that the designed input matrix can be obtained using a one-dimensional space-invariant cloning template. Finally, a specific example is included to demonstrate the applicability of the methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
77. Alpha-Beta Associative Memories for Gray Level Patterns.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Yáñez-Márquez, Cornelio, Sánchez-Fernández, Luis P., and López-Yáñez, Itzamá
- Abstract
In this paper, we show how the binary Alpha-Beta associative memories, created and developed by Yáñez-Márquez and introduced in [1-3], can be used to operate on gray-level patterns (namely gray-level images), improving the results presented by Sossa et al. in [4]. To achieve our goal, given a fundamental set of gray-level patterns, we find the binary representation of each entry and then build a binary Alpha-Beta associative memory. A given gray-level pattern, or a distorted version of it, is then recalled by converting its entries to a binary representation, recalling it with the binary associative memory, and finally converting the binary output pattern back into a gray-level pattern. Experimental results show the efficiency of the new memories. It is worth pointing out that this solution is simpler and more elegant than the one presented in [4]. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
78. Impacts of Perturbations of Training Patterns on Two Fuzzy Associative Memories Based on T-Norms.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Xu, Wei-Hong, Chen, Guo-Ping, and Xie, Zhong-Ke
- Abstract
In the real world there is generally a perturbation between a collected training pattern and its corresponding actual pattern, and such perturbation may degrade the performance of a fuzzy neural network. This paper therefore proposes a type of robustness of fuzzy associative memories (FAMs) with respect to perturbations of the training patterns. It is then shown that, under the maximum-weight-matrix learning algorithm, a Max-T0 FAM has poor robustness of this type, whereas a Max-TL FAM is robust, where the two FAMs are based on the t-norm T0 and the Lukasiewicz t-norm, respectively. Finally, a simulation experiment validates the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
79. Simulated Annealing Based Learning Approach for the Design of Cascade Architectures of Fuzzy Neural Networks.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Han, Chang-Wook, and Park, Jung-Il
- Abstract
This paper is concerned with the optimization of cascade architectures of fuzzy neural networks. The structure of the network, which involves selecting a subset of input variables and distributing them across the individual logic processors (LPs), is optimized with genetic algorithms (GA). We discuss random signal-based learning employing simulated annealing (SARSL), a local search technique aimed at further refinement of the connections of the neurons (GA-SARSL). A standard data set is discussed with respect to the performance of the constructed networks and their interpretability. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
80. Design of Fuzzy Neural Networks Based on Genetic Fuzzy Granulation and Regression Polynomial Fuzzy Inference.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Oh, Sung-Kwun, Park, Byoung-Jun, and Pedrycz, Witold
- Abstract
In this paper, new architectures and comprehensive design methodologies for genetic algorithm (GA) based fuzzy relation-based fuzzy neural networks (FRFNN) are introduced, and a dynamic search-based GA is introduced to achieve rapid convergence to the optimum over a limited region or boundary condition. The proposed FRFNN is based on fuzzy neural networks (FNN) with an extended structure of the fuzzy rules formed within the networks. In the consequence part of the fuzzy rules, three different forms of regression polynomials, namely constant, linear, and modified quadratic, are considered. The structure and parameters of the FRFNN are optimized by the dynamic search-based GA. The proposed model is contrasted with the performance of conventional FNN models in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
81. Design of Fuzzy Polynomial Neural Networks with the Aid of Genetic Fuzzy Granulation and Its Application to Multi-variable Process System.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Oh, Sung-Kwun, Lee, In-Tae, and Choi, Jeoung-Nae
- Abstract
In this paper, we propose a new architecture of Fuzzy Polynomial Neural Networks (FPNN) by means of genetically optimized Fuzzy Polynomial Neurons (FPN) and discuss its comprehensive design methodology involving mechanisms of genetic optimization, especially Genetic Algorithms (GAs). The conventional FPNNs developed so far are based on mechanisms of self-organization and evolutionary optimization. The proposed FPNN gives rise to a structurally optimized network and comes with a substantial level of flexibility compared to conventional FPNNs. It is shown that the proposed genetic algorithm-based Fuzzy Polynomial Neural Network is more useful and effective than existing models for nonlinear processes. We experimented with the Medical Imaging System (MIS) dataset to evaluate the performance of the proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
82. A New Design Methodology of Fuzzy Set-Based Polynomial Neural Networks with Symbolic Gene Type Genetic Algorithms.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Roh, Seok-Beom, Oh, Sung-Kwun, and Ahn, Tae-Chon
- Abstract
In this paper, we propose a new design methodology for fuzzy-neural networks, namely Fuzzy Set-based Polynomial Neural Networks (FSPNN) with symbolic genetic algorithms. We have developed a design methodology (genetic optimization using symbolic genetic algorithms) to find the optimal structure for fuzzy-neural networks expanded from the Group Method of Data Handling (GMDH). The parameters of the FSPNN fixed with the aid of symbolic genetic optimization are the number of input variables, the order of the polynomial, the number of membership functions, and a collection of specific subsets of input variables; the symbolic genetic optimization searches the solution space for the optimal configuration. The genetically developed FPNN (gFPNN) results in a structurally optimized network and comes with a higher level of flexibility than conventional FPNNs. The GA-based design procedure applied at each layer of the FPNN leads to the selection of the most suitable nodes (FSPNs) available within the FPNN. Symbolic genetic algorithms can reduce the solution space more than conventional genetic algorithms with binary genotype chromosomes. The performance of the genetically optimized FSPNN (gFSPNN) is quantified through experimentation on a number of modeling benchmark datasets already used in fuzzy and neurofuzzy modeling. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
83. A Novel Elliptical Basis Function Neural Networks Optimized by Particle Swarm Optimization.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Du, Ji-Xiang, Zhai, Chuan-Min, Wang, Zeng-Fu, and Zhang, Guo-Jun
- Abstract
In this paper, a novel model of elliptical basis function neural networks (EBFNN) is proposed. First, a geometric analytic algorithm is applied to construct the hyper-ellipsoid units of the hidden layer of the EBFNN, i.e., an initial structure of the EBFNN, which is further pruned by the particle swarm optimization (PSO) algorithm. The experimental results demonstrate that the proposed hybrid optimization algorithm for the EBFNN model is feasible and efficient, and that the EBFNN is not only parsimonious but also has better generalization performance than the RBFNN. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
84. A Genetic Algorithm with Modified Tournament Selection and Efficient Deterministic Mutation for Evolving Neural Network.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Kim, Dong-Sun, Kim, Hyun-Sik, and Chung, Duck-Jin
- Abstract
In this paper, we present a genetic algorithm (GA) based on tournament selection (TS) and deterministic mutation (DM) to evolve neural network systems. We use population diversity to determine the mutation probability, sustaining the convergence capacity and preventing the local-optimum problem of the GA. In addition, tournament selection considers the individuals with the worst and best fitness values to speed up convergence. Experimental results on mathematical problems and a pattern recognition problem show that the proposed method enhances the convergence capacity by about 34.5% and reduces the computation by about 40% compared with the conventional method. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
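The diversity-driven mutation control summarized in the abstract above can be illustrated in general terms. The sketch below is not the authors' implementation; the function names and the `p_min`/`p_max` bounds are illustrative assumptions. It shows tournament selection plus a mutation probability that rises as the population converges (i.e., as fitness diversity falls), which is the usual way diversity is used to escape local optima.

```python
import random

def tournament_select(pop, fitness, k=3):
    """Pick the best of k randomly drawn individuals (minimization)."""
    contenders = random.sample(range(len(pop)), k)
    return pop[min(contenders, key=lambda i: fitness[i])]

def diversity(fitness):
    """Normalized spread of fitness values across the population."""
    lo, hi = min(fitness), max(fitness)
    return 0.0 if hi == lo else (hi - lo) / (abs(hi) + abs(lo) + 1e-12)

def mutation_prob(fitness, p_min=0.01, p_max=0.3):
    """Mutation probability grows as diversity shrinks (converged population)."""
    return p_max - (p_max - p_min) * diversity(fitness)
```

A fully converged population (zero diversity) gets the maximal mutation rate, which counteracts premature convergence.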
85. Growing Hierarchical Principal Components Analysis Self-Organizing Map.
- Author
-
Wang, Jun, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Zhang, Stones Lei, Yi, Zhang, and Lv, Jian Cheng
- Abstract
In this paper, we propose a new self-growing hierarchical principal components analysis self-organizing neural network model. This dynamically growing model expands the ability of the PCASOM model to represent the hierarchical structure of the input data. It overcomes the shortcoming of the PCASOM model, in which a fixed network architecture must be defined prior to training. Experimental results show that the proposed model has better performance on traditional clustering problems. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
86. Building Multi-layer Small World Neural Network.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Yang, Shuzhong, Luo, Siwei, and Li, Jianyu
- Abstract
Selecting a rational structure is a crucial problem for multi-layer neural networks in applications. In this paper a novel method is presented to solve this problem. The method goes beyond traditional approaches, which only determine the hidden structure, by also learning the topological connectivity so that the connectivity structure has small-world characteristics. The experiments show that a small-world neural network learned with our method reduces the learning error and learning time while improving generalization, compared to networks with regular connectivity. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
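The small-world connectivity mentioned in the abstract above is classically obtained by the Watts-Strogatz construction: start from a regular ring lattice and rewire each edge with a small probability. The sketch below is a generic illustration of that construction, not the paper's learning procedure; it only shows the kind of topology such networks target.

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each linked to k nearest neighbors,
    with every lattice edge rewired to a random node with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                v = (i + j) % n
                w = rng.randrange(n)
                while w == i or w in adj[i]:   # avoid self-loops and duplicates
                    w = rng.randrange(n)
                adj[i].discard(v); adj[v].discard(i)  # drop lattice edge
                adj[i].add(w); adj[w].add(i)          # add shortcut edge
    return adj
```

Rewiring preserves the total edge count while introducing the long-range shortcuts that give small-world graphs their short average path length.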
87. Heterogeneous Centroid Neural Networks.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Park, Dong-Chul, Nguyen, Duc-Hoai, Lee, Song-Jae, and Lee, Yunsik
- Abstract
The Tied Mixture Hidden Markov Model (TMHMM) is an important approach to reduce the number of free parameters in speech recognition. However, this model suffers from degradation in recognition accuracy due to its Gaussian Probability Density Function (GPDF) clustering error. This paper proposes a clustering algorithm called a Heterogeneous Centroid Neural Network (HCNN) for use in TMHMMs. The algorithm utilizes a Centroid Neural Network (CNN) to cluster acoustic feature vectors in the TMHMM. The HCNN uses a heterogeneous distance measure to allocate more code vectors in the heterogeneous areas where probability densities of different states overlap each other. When applied to an isolated Korean digit word recognition problem, the HCNN reduces the error rate by 9.39% over CNN clustering, and 14.63% over the traditional K-means clustering. Keywords: speech recognition, tied mixture, unsupervised clustering, Hidden Markov Model. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
88. Stochastic Time-Varying Competitive Neural Network Systems.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Shen, Yi, Liu, Meiqin, and Xu, Xiaodong
- Abstract
In this paper we reveal that environmental noise will suppress a potential population explosion in stochastic competitive neural network systems with variable delay. To reveal these facts, we stochastically perturb the competitive neural network system with variable delay $\dot{x}(t)=\textrm{diag}(x_1(t),\dots,x_n(t))[b+Ax(t-\delta(t))]$ into the Itô form $\textrm{d}x(t)=\textrm{diag}(x_1(t),\dots,x_n(t))[b+Ax(t-\delta(t))]\textrm{d}t+\sigma x(t)\textrm{d}w(t),$ and show that although the solution to the original delay system may explode to infinity in finite time, with probability one the solution of the associated stochastic delay system does not. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
89. Research on Multi-Degree-of-Freedom Neurons with Weighted Graphs.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Wang, Shoujue, Liu, Singsing, and Cao, Wenming
- Abstract
In this paper, we redefine the sample point set in the feature space from the point of view of weighted graphs and propose a new covering model — Multi-Degree-of-Freedom Neurons (MDFN). Based on this model, we describe a geometric learning algorithm with 3-degree-of-freedom neurons. It identifies the topological character of the sample point set in the feature space, which differs from the traditional "separation" method. Experimental results demonstrate the general superiority of this algorithm over the traditional PCA+NN algorithm in terms of efficiency and accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
90. A New Genetic Approach to Structure Learning of Bayesian Networks.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Lee, Jaehun, Chung, Wooyong, and Kim, Euntai
- Abstract
In this paper, a new approach to structure learning of Bayesian networks (BNs) based on a genetic algorithm is proposed. The proposed method explores a wider solution space than the previous method. In the previous method, the ordering among the nodes of the BN was fixed while their conditional dependencies, represented by the connectivity matrix, were learned; in the proposed method, the ordering as well as the conditional dependencies among the BN nodes are learned. To implement this method with the genetic algorithm, we represent an individual of the population as a pair of chromosomes: the first represents the ordering among the BN nodes and the second represents their conditional dependencies. New crossover and mutation operations, which are closed in the set of admissible individuals, are introduced. Finally, a computer simulation on real-world data demonstrates the performance of the method. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
91. A Gradient-Based ELM Algorithm in Regressing Multi-variable Functions.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, and Xu, You
- Abstract
A new off-line learning algorithm for single-hidden-layer feed-forward neural networks (SLFNs) called the Extreme Learning Machine (ELM) was introduced by Huang et al. [1, 2, 3, 4]. ELM differs from traditional BP methods in that it can achieve good generalization performance at an extremely fast learning speed. In ELM, the hidden neuron parameters (the input weights and hidden biases, or the RBF centers and impact factors) are pre-determined randomly, so a set of non-optimized parameters might prevent ELM from reaching the global minimum in some applications. This paper tries to find a set of optimized input weights using a gradient-based algorithm for training SLFNs whose activation function is differentiable. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
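The baseline ELM that this and the following entries build on is simple to state: draw the input weights and hidden biases at random, then solve for the output weights in closed form via the pseudo-inverse. A minimal sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def elm_train(X, T, n_hidden=20, seed=0):
    """Basic ELM: random hidden-layer parameters, analytic output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The gradient-based refinement proposed in the paper would additionally adjust `W` and `b` by backpropagating the approximation error, which requires the activation (here `tanh`) to be differentiable.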
92. Evolutionary Extreme Learning Machine - Based on Particle Swarm Optimization.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Xu, You, and Shu, Yang
- Abstract
A new off-line learning method for single-hidden-layer feed-forward neural networks (SLFNs) called the Extreme Learning Machine (ELM) was introduced by Huang et al. [1, 2, 3, 4]. ELM differs from traditional BP methods in that it can achieve good generalization performance at extremely fast learning speed. In ELM, the hidden neuron parameters (the input weights and hidden biases, or the RBF centers and impact factors) are pre-assigned randomly, so a set of non-optimized parameters may prevent ELM from reaching the global minimum in some applications. Adopting the idea in [5] that a single-hidden-layer feed-forward neural network can be trained with a hybrid approach that takes advantage of both ELM and an evolutionary algorithm, this paper introduces an evolutionary algorithm called particle swarm optimization (PSO), which, combined with the ideas of ELM, can train networks better suited to some prediction problems. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
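The PSO component used by this and several other entries follows a standard global-best scheme: each particle keeps a velocity updated from inertia, a pull toward its personal best, and a pull toward the swarm's best. The sketch below is a generic PSO minimizer under assumed hyperparameters (inertia 0.7, cognitive/social weights 1.5), not the paper's variant; in the hybrid ELM setting, the particle position would encode the hidden-layer parameters.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=100, seed=0):
    """Global-best PSO minimizing f over R^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))    # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()          # global best
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```

On a smooth low-dimensional objective such as the sphere function, this loop converges to near zero in a few dozen iterations.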
93. Robust Recursive Complex Extreme Learning Machine Algorithm for Finite Numerical Precision.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Lim, Junseok, Sung, Koeng Mo, and Song, Joonil
- Abstract
Recently, a new learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) named the complex extreme learning machine (C-ELM) has been proposed in [1]. In this paper, we propose a numerically robust recursive least-squares-type C-ELM algorithm. The proposed algorithm improves the performance of C-ELM, especially under finite numerical precision. Computer simulation results across various precision settings show that the proposed algorithm improves the numerical robustness of C-ELM. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
94. A New Learning Algorithm for Function Approximation Incorporating A Priori Information into Extreme Learning Machine.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Han, Fei, Lok, Tat-Ming, and Lyu, Michael R.
- Abstract
In this paper, a new algorithm for function approximation is proposed to obtain better generalization performance and a faster convergence rate. The new algorithm incorporates architectural constraints derived from a priori information about the function approximation problem into the Extreme Learning Machine. On one hand, according to Taylor's theorem, the activation functions of the hidden neurons in this algorithm are polynomial functions. On the other hand, the Extreme Learning Machine is adopted to analytically determine the output weights of the single-hidden-layer FNN. In theory, the new algorithm tends to provide the best generalization at extremely fast learning speed. Finally, several experimental results are given to verify the efficiency and effectiveness of the proposed learning algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
95. A Fuzzy Neural Networks with Structure Learning.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Lin, Haisheng, Gao, Xiao Zhi, Huang, Xianlin, and Song, Zhuoyue
- Abstract
This paper presents a novel clustering algorithm for the structure learning of fuzzy neural networks. The clustering algorithm uses a reward-and-penalty mechanism to adapt the fuzzy neural network prototypes for every training sample. It can partition the input data on-line, update the clusters pointwise, and self-organize the fuzzy neural structure. No prior knowledge of the input data distribution is needed for initialization. All rules are self-created and automatically grow with incoming data. Our learning algorithm shows that supervised clustering algorithms can be used for the structure learning of on-line self-organizing fuzzy neural networks. The control of an inverted pendulum is finally used to demonstrate the effectiveness of the learning algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
96. Q Learning Based on Self-organizing Fuzzy Radial Basis Function Network.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Wang, Xuesong, Cheng, Yuhu, and Sun, Wei
- Abstract
A fuzzy Q-learning method based on a self-organizing fuzzy radial basis function (FRBF) network is proposed in this paper to solve the 'curse of dimensionality' problem caused by state-space generalization. An FRBF network is used to represent continuous actions and the corresponding Q values. An interpolation technique is adopted to represent the appropriate utility value for the winning local action of every fuzzy rule. Neurons can be organized by the FRBF network itself. The structure and parameter learning methods, based on new neuron-adding and neuron-merging techniques and a gradient descent algorithm, are simple and effective, with high accuracy and a compact structure. Simulation results on the balancing control of an inverted pendulum illustrate the performance and applicability of the proposed fuzzy Q-learning scheme to real-world problems with continuous states and continuous actions. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
97. Q-Learning with FCMAC in Multi-agent Cooperation.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Hwang, Kao-Shing, Chen, Yu-Jen, and Lin, Tzung-Feng
- Abstract
In general, Q-learning needs well-defined quantized state and action spaces to obtain an optimal policy for accomplishing a given task. This makes it difficult to apply to real robot tasks, because poor quantization of continuous state and action spaces leads to poor performance of the learned behavior. In this paper, we propose a fuzzy-based CMAC method that calculates the contribution of each neighboring state to generate a continuous action value, making motion smooth and effective. A momentum term to speed up training has been designed and implemented in a multi-agent system for real robot applications. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
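Both of the preceding entries extend the same tabular Q-learning update, which is worth stating explicitly: Q(s,a) moves toward the observed reward plus the discounted best value of the next state. The sketch below is the textbook update, not either paper's fuzzy/CMAC generalization; those replace the table lookup with interpolated activations over neighboring states.

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q
```

With `alpha=0.1`, `gamma=0.9`, a reward of 1.0 and a next-state value of 1.0, a zero-initialized entry moves to 0.1 * (1.0 + 0.9) = 0.19.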
98. Improved Learning Algorithm Based on Generalized SOM for Dynamic Non-linear System.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Zhang, Kai, Guan, Gen-Zhi, Chen, Fang-Fang, Zhang, Lin, and Du, Zhi-Ye
- Abstract
This paper proposes an improved learning algorithm based on a generalized SOM for dynamic non-linear system identification. To improve the convergence speed and accuracy of the SOM algorithm, the improved self-organizing algorithm, first, applies multiple local models instead of a global model and, second, adjusts the weights of the computing output layers along with the weights of the competing neuron layer during training. We prove that the improved algorithm is convergent if the network has suitable initial weights and small positive real parameters. Simulation results using the improved generalized SOM show an improvement for non-linear systems compared to traditional neural network control systems. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
99. Robust Learning by Self-organization of Nonlinear Lines of Attractions.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Seow, Ming-Jung, and Asari, Vijayan K.
- Abstract
A mathematical model for learning a nonlinear line of attraction is presented in this paper. The model encapsulates attractive fixed points scattered in the state space, representing patterns with similar characteristics, as an attractive line. The dynamics of this nonlinear line attractor network are designed to operate between stable and unstable states. These criteria can be used to circumvent the plasticity-stability dilemma by using the unstable state as an indicator to create a new line for an unfamiliar pattern. This learning strategy utilizes the stability (convergence) and instability (divergence) criteria of the designed dynamics to induce self-organizing behavior. The self-organizing behavior of the nonlinear line attractor model helps create complex dynamics in an unsupervised manner. Experiments performed on the CMU face expression database show that the proposed model can perform pattern association and pattern classification tasks in few iterations and with high accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
100. Training RBF Neural Network with Hybrid Particle Swarm Optimization.
- Author
-
Wang, Jun, Yi, Zhang, Zurada, Jacek M., Lu, Bao-Liang, Yin, Hujun, Gao, Haichang, Feng, Boqin, Hou, Yun, and Zhu, Li
- Abstract
The particle swarm optimization (PSO) has been used to train neural networks. But the particles collapse so quickly that PSO exhibits a potentially dangerous stagnation characteristic, which can make it impossible to arrive at the global optimum. In this paper, a hybrid PSO with simulated annealing and a chaos search technique (HPSO) is adopted to solve this problem, and the HPSO is used to train radial basis function (RBF) neural networks. Experimental results on benchmark function optimization and dataset classification problems (Iris, Glass, Wine and New-thyroid) demonstrate the effectiveness and efficiency of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
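For context on what such hybrid optimizers tune, a standard RBF network has Gaussian hidden units parameterized by centers and widths, with linear output weights. The sketch below fixes the centers and width and solves the output weights by least squares; it is a generic baseline (names are illustrative), whereas the HPSO approach above would instead search over the centers, widths, and weights jointly.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF activation matrix: one column per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def rbf_fit(X, T, centers, width):
    """Least-squares output weights for fixed centers and width."""
    Phi = rbf_design(X, centers, width)
    return np.linalg.pinv(Phi) @ T
```

With evenly spaced centers this baseline already fits smooth 1-D targets closely; the value of PSO-style hybrids lies in placing the centers and widths when a good layout is not known in advance.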