27 results for "Ludermir, Teresa B."
Search Results
2. Implementing Any Nonlinear Quantum Neuron.
- Author
de Paula Neto, Fernando M., Ludermir, Teresa B., de Oliveira, Wilson R., and da Silva, Adenilton J.
- Subjects
QUANTUM operators, LINEAR operators, CIRCUIT complexity, NONLINEAR functions, ARTIFICIAL neural networks
- Abstract
The ability of artificial neural networks (ANNs) to adapt to input data and perform generalizations is intimately connected to the use of nonlinear activation and propagation functions. Quantum versions of ANNs have been proposed to take advantage of the possible supremacy of quantum over classical computing. To date, all proposals have faced the difficulty of implementing nonlinear activation functions, since quantum operators are linear. This brief presents an architecture to simulate the computation of an arbitrary nonlinear function as a quantum circuit. The computation is performed on the phase of an adequately designed quantum state, and quantum phase estimation recovers the result, to a fixed precision, in a circuit whose complexity is linear in the ANN input size. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
3. Editorial: A Successful Year and Looking Forward to 2017 and Beyond.
- Author
He, Haibo, Haas, Robert, Fu, Jun, Hammer, Barbara, Ho, Daniel W. C., Karray, Fakhri, Kudithipudi, Dhireesha, Lozano, Jose A., Ludermir, Teresa B., Mandziuk, Jacek, Melacci, Stefano, Paiva, Antonio, Qiao, Hong, Rakotomamonjy, Alain, Sun, Shiliang, Suykens, Johan A. K., and Wang, Meng
- Subjects
ARTIFICIAL neural networks, PERIODICALS, ELECTRONIC data processing, ARTIFICIAL intelligence
- Abstract
This issue marks the first anniversary issue since I was honored to serve as the Editor-in-Chief (EiC) of the IEEE Transactions on Neural Networks and Learning Systems (TNNLS). I am happy to report that we had a very successful year and here are a few highlights that I would like to share with the community.
The latest impact factor of TNNLS is 4.854 according to the Journal Citation Reports. This marks a record high impact factor for our journal and places TNNLS as the number one scholarly publication in Computer Science (Hardware & Architecture), number three in Computer Science (Theory & Methods), and number ten in Electrical and Electronic Engineering journals. [ABSTRACT FROM AUTHOR]
- Published
- 2017
4. Clustering and selection of neural networks using adaptive differential evolution.
- Author
de Lima, Tiago P. F., da Silva, Adenilton J., and Ludermir, Teresa B.
- Abstract
This paper explores the automatic construction of multiple classifier systems using the selection method. The proposed automatic method is composed of two phases: one for designing the individual classifiers, and one for clustering the patterns of the training set and searching for specialized classifiers for each cluster found. The experiments adopted artificial neural networks in the classification phase and k-means in the clustering phase. Adaptive differential evolution is used in this work to optimize the parameters and performance of the different techniques used in the classification and clustering phases. The experimental results show that the proposed method performs better than manual methods and significantly outperforms most of the methods commonly used to combine multiple classifiers using the fusion version, on a set of ?? benchmark problems. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
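The clustering-and-selection pipeline described in the abstract above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: plain k-means plus per-cluster 1-NN "experts" stand in for the optimized neural networks, the adaptive-differential-evolution tuning is omitted, and the toy data set is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR-like data: two Gaussian blobs per class, so that clustering
# genuinely helps (all data here are invented for the illustration).
def blob(center, n):
    return rng.normal(center, 0.3, size=(n, 2))

X = np.vstack([blob([0, 0], 40), blob([3, 3], 40),   # class 0
               blob([0, 3], 40), blob([3, 0], 40)])  # class 1
y = np.array([0] * 80 + [1] * 80)
perm = rng.permutation(len(X))
X, y = X[perm], y[perm]
Xtr, ytr, Xte, yte = X[:120], y[:120], X[120:], y[120:]

def kmeans(X, k, iters=50):
    # Plain Lloyd iterations with random initial centroids.
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        a = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[a == j].mean(0) if (a == j).any() else C[j]
                      for j in range(k)])
    return C, a

C, assign = kmeans(Xtr, 4)

# One specialized "expert" per cluster; 1-NN stands in for the ANNs
# the paper trains, and the DE-based tuning is omitted.
experts = {j: (Xtr[assign == j], ytr[assign == j]) for j in range(4)}

def predict(x):
    # Selection step: the nearest non-empty cluster's expert answers.
    for j in np.argsort(((C - x) ** 2).sum(-1)):
        Xi, yi = experts[j]
        if len(Xi):
            return yi[np.argmin(((Xi - x) ** 2).sum(-1))]

acc = np.mean([predict(x) == t for x, t in zip(Xte, yte)])
print(f"selection-ensemble accuracy on toy data: {acc:.2f}")
```

The selection step is the key idea: each test pattern is answered only by the expert trained on its nearest cluster, rather than by a fused vote of all classifiers.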
5. Evolving Neural Networks Using Differential Evolution with Neighborhood-Based Mutation and Simple Subpopulation Scheme.
- Author
Mineu, Nicole L., da Silva, Adenilton J., and Ludermir, Teresa B.
- Abstract
This paper presents a method to search for near-optimal neural networks. The proposed method combines the Differential Evolution with Global and Local Neighborhood (DEGL) evolutionary algorithm with the multimodal technique Simple Subpopulation Scheme (SSS). The performance of the proposed method is investigated through experiments on six machine learning benchmarks for classification problems. The proposed method is competitive when compared to other methods in the literature. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
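Differential evolution, the search engine behind the method in the abstract above, is compact enough to sketch. The code below shows only the classic DE/rand/1 scheme on a stand-in objective; the paper's neighborhood-based mutation (DEGL) and subpopulation scheme (SSS) are omitted, and the sphere function is a placeholder for the real fitness (a network's validation error).

```python
import numpy as np

rng = np.random.default_rng(1)

# Sphere function as a stand-in fitness; in the paper the fitness
# would be the validation error of the encoded neural network.
def fitness(v):
    return np.sum(v ** 2)

dim, NP, F, CR = 5, 20, 0.8, 0.9
pop = rng.uniform(-5, 5, size=(NP, dim))
cost = np.array([fitness(v) for v in pop])

for gen in range(200):
    for i in range(NP):
        # Classic DE/rand/1 mutation (DEGL replaces this with a blend
        # of global and neighborhood-based donor vectors).
        a, b, c = rng.choice([j for j in range(NP) if j != i], 3,
                             replace=False)
        donor = pop[a] + F * (pop[b] - pop[c])
        # Binomial crossover with at least one gene from the donor.
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True
        trial = np.where(cross, donor, pop[i])
        tc = fitness(trial)
        if tc <= cost[i]:        # greedy one-to-one selection
            pop[i], cost[i] = trial, tc

print(f"best cost after 200 generations: {cost.min():.6f}")
```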
6. An automatic method for construction of ensembles to time series prediction.
- Author
Lima, Tiago P.F. and Ludermir, Teresa B.
- Subjects
TIME series analysis, DOCUMENT clustering, SELF-organizing maps, ARTIFICIAL neural networks, DIFFERENTIAL evolution
- Abstract
We present a work that applies automatic construction of ensembles based on the Clustering and Selection (CS) algorithm to time series forecasting. The automatic method, called CSELM, initially finds an optimal number of clusters for the training data set and subsequently designates an Extreme Learning Machine (ELM) for each cluster found. For model evaluation, the testing data set is submitted to the clustering technique, and the cluster nearest to each input gives a supervised response through its associated ELM. Self-organizing maps were used in the clustering phase. Adaptive differential evolution was used to optimize the parameters and performance of the different techniques used in the clustering and prediction phases. The results obtained with the CSELM method are compared with results obtained by other methods in the literature. Five well-known time series were used to validate CSELM. [ABSTRACT FROM AUTHOR]
- Published
- 2013
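The Extreme Learning Machine at the core of CSELM is simple to sketch: hidden weights are drawn at random and fixed, and only the linear readout is solved in closed form. The snippet below is a hedged illustration on an invented noisy sine series; the SOM clustering and adaptive-DE tuning that CSELM adds are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# One-step time-series regression with a basic ELM: random fixed
# hidden layer, least-squares readout. The series is invented.
t = np.arange(400)
series = np.sin(0.1 * t) + 0.05 * rng.standard_normal(400)

lags = 5
X = np.stack([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]
Xtr, ytr, Xte, yte = X[:300], y[:300], X[300:], y[300:]

hidden = 50
W = rng.normal(size=(lags, hidden))   # random input weights (fixed)
b = rng.normal(size=hidden)           # random biases (fixed)

def H(X):
    return np.tanh(X @ W + b)         # hidden-layer activations

# Readout solved in one shot by least squares -- the "extreme" part.
beta, *_ = np.linalg.lstsq(H(Xtr), ytr, rcond=None)
mse = np.mean((H(Xte) @ beta - yte) ** 2)
print(f"one-step test MSE: {mse:.4f}")
```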
7. Optimization of the weights and asymmetric activation function family of neural network for time series forecasting.
- Author
Gomes, Gecynalda S. da S. and Ludermir, Teresa B.
- Subjects
MATHEMATICAL optimization, ARTIFICIAL neural networks, TIME series analysis, MATHEMATICAL functions, SIMULATION methods & models, TABU search algorithm, COMPUTER systems, ARTIFICIAL intelligence
- Abstract
Highlights:
• We present a method for optimization of the activation functions for ANN.
• The proposed optimization method uses Simulated Annealing and Tabu Search.
• The proposed method is good for forecasting distinct time series. [ABSTRACT FROM AUTHOR]
- Published
- 2013
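The simulated-annealing half of the optimization above follows the standard accept/cool loop. Below is a generic sketch in which a one-dimensional multimodal function stands in for the validation error as a function of a single activation-function parameter; the paper's asymmetric activation family, weight optimization and tabu-search stage are all omitted.

```python
import math
import random

random.seed(3)

# `error` is a made-up stand-in for validation error as a function of
# one activation-function parameter; it is multimodal on purpose.
def error(a):
    return math.sin(5 * a) + 0.1 * (a - 2.0) ** 2

a, e = 0.0, error(0.0)
best_a, best_e = a, e
T = 1.0                          # initial temperature
for step in range(2000):
    cand = a + random.gauss(0, 0.5)       # random neighbour
    ce = error(cand)
    # Accept downhill moves always, uphill with Boltzmann probability.
    if ce < e or random.random() < math.exp(-(ce - e) / T):
        a, e = cand, ce
        if e < best_e:
            best_a, best_e = a, e
    T *= 0.997                   # geometric cooling schedule

print(f"best parameter {best_a:.3f} with error {best_e:.3f}")
```

The hot early phase accepts uphill moves and lets the search hop between basins; the cold late phase behaves like a local descent around the best basin found.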
8. Particle Swarm Optimization of MLP for the identification of factors related to Common Mental Disorders
- Author
Ludermir, Teresa B. and de Oliveira, Wilson R.
- Subjects
PARTICLE swarm optimization, PSYCHIATRIC diagnosis, SOCIAL classes, ARTIFICIAL neural networks, GENERALIZATION, COMPUTER architecture
- Abstract
Abstract: Social class differences in the prevalence of Common Mental Disorders (CMDs) are likely to vary with time, culture and stage of economic development. The present study investigated the use of optimization of the architecture and weights of Artificial Neural Networks (ANNs) for the identification of factors related to CMDs. The identification of the factors was made possible by optimizing the architecture and weights of the network. The optimization is based on Particle Swarm Optimization with early stopping criteria. This approach achieved good generalization control and results similar to or better than those of other techniques, but at a lower computational cost, with the ability to generate small networks and with the advantage of automated architecture selection, which simplifies the training process. This paper presents the results of experiments with ANNs in which an average rate of 90.59% correct classification of individuals with a positive diagnosis for CMDs was observed. [Copyright Elsevier]
- Published
- 2013
9. An approach to reservoir computing design and training
- Author
Ferreira, Aida A., Ludermir, Teresa B., and de Aquino, Ronaldo R.B.
- Subjects
ARTIFICIAL neural networks, DYNAMICAL systems, COMPARATIVE studies, GENETIC algorithms, COMPUTATIONAL complexity, NONLINEAR systems, TIME series analysis, EVOLUTIONARY algorithms
- Abstract
Abstract: Reservoir computing is a framework for computation, like a recurrent neural network, that allows black-box modeling of dynamical systems. In contrast to other recurrent neural network approaches, reservoir computing does not train the input and internal weights of the network; only the readout is trained. However, it is necessary to adjust parameters to create a "good" reservoir for a given application. In this study we introduce a method called RCDESIGN (reservoir computing and design training). RCDESIGN combines an evolutionary algorithm with reservoir computing and simultaneously looks for the best values of parameters, topology and weight matrices, without rescaling the reservoir matrix by the spectral radius. The idea of adjusting the spectral radius to within the unit circle in the complex plane comes from linear system theory. However, this argument does not necessarily apply to nonlinear systems, which is the case of reservoir computing. The results obtained with the proposed method are compared with results obtained by a genetic algorithm search over the global parameters of reservoir computing. Four time series were used to validate RCDESIGN. [Copyright Elsevier]
- Published
- 2013
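For reference, a conventional reservoir computing baseline (the kind of setup RCDESIGN evolves away from) can be sketched as follows: random fixed weights, the usual spectral-radius rescaling that the paper argues against for nonlinear systems, and a ridge-regression readout. All sizes and signals here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal echo state network: input and reservoir weights stay fixed;
# only the linear readout is fitted. RCDESIGN instead evolves the
# weights and topology directly and skips the rescaling shown below.
n_res, washout = 100, 50
t = np.arange(600)
u = np.sin(0.2 * t)                 # input signal
target = np.sin(0.2 * (t + 1))      # predict one step ahead

Win = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9

x = np.zeros(n_res)
states = []
for ut in u:                        # drive the reservoir
    x = np.tanh(Win[:, 0] * ut + W @ x)
    states.append(x.copy())
S = np.array(states)[washout:]      # discard the transient
Y = target[washout:]

# Ridge-regression readout (the only trained part).
ridge = 1e-6
Wout = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ Y)
mse = np.mean((S @ Wout - Y) ** 2)
print(f"readout training MSE: {mse:.6f}")
```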
10. Selecting variables with search algorithms and neural networks to improve the process of time series forecasting.
- Author
Ludermir, Teresa B., Valença, Ivna, Lucas, Tarcísio, and Valença, Mêuser
- Subjects
SEARCH algorithms, ARTIFICIAL neural networks, TIME series analysis, STOCHASTIC processes, PREDICTION models
- Abstract
A time series is a sequence of observations of a random variable and hence a stochastic process. Forecasting time series data is an important component of operations research, because such data often provide the foundation for decision models, which predict data points before they are measured, based on known past events. Research on this subject has been done in many areas, such as economics, energy production and ecology. To improve time series forecasting, it is important to identify which past values should be used in the models, by eliminating redundant or irrelevant attributes. Two hybrid systems, Harmony Search with Neural Networks (HS) and Temporal Memory Search with Neural Networks (TMS), are improved, and a new one is proposed: Temporal Memory Search Limited with Neural Networks (TMSL). The performance of the techniques is investigated through an empirical evaluation on twenty real-world time series. [ABSTRACT FROM AUTHOR]
- Published
- 2011
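The lag-selection idea in the abstract above can be illustrated with a plain greedy forward search, standing in for the harmony-search and temporal-memory searches of the paper: candidate past values (lags) are added one at a time while validation error improves. The AR process below is invented so that only lags 1 and 3 actually matter.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented AR process in which only lags 1 and 3 carry information.
n = 500
s = np.zeros(n)
for t in range(3, n):
    s[t] = 0.6 * s[t - 1] + 0.3 * s[t - 3] + 0.1 * rng.standard_normal()

max_lag = 6
X = np.stack([s[max_lag - k:n - k] for k in range(1, max_lag + 1)],
             axis=1)                 # column k-1 holds lag k
y = s[max_lag:]
tr, va = slice(0, 350), slice(350, None)

def val_mse(cols):
    # Linear forecaster restricted to the chosen lag columns.
    w, *_ = np.linalg.lstsq(X[tr][:, cols], y[tr], rcond=None)
    return np.mean((X[va][:, cols] @ w - y[va]) ** 2)

selected, best = [], np.inf
while True:
    cands = [c for c in range(max_lag) if c not in selected]
    if not cands:
        break
    scores = {c: val_mse(selected + [c]) for c in cands}
    c = min(scores, key=scores.get)
    if scores[c] >= best:            # stop when no lag helps validation
        break
    selected.append(c)
    best = scores[c]

print("selected lags:", sorted(l + 1 for l in selected))
```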
11. Hybrid Training Method for MLP: Optimization of Architecture and Training.
- Author
Zanchettin, Cleber, Ludermir, Teresa B., and Almeida, Leandro Maciel
- Subjects
ARTIFICIAL neural networks, PERCEPTRONS, GENETIC algorithms, SIMULATED annealing, MATHEMATICAL optimization, BACK propagation, MACHINE learning, CELLS
- Abstract
The performance of an artificial neural network (ANN) depends upon the selection of proper connection weights, network architecture, and cost function during network training. This paper presents a hybrid approach (GaTSa) to optimize the performance of the ANN in terms of architecture and weights. GaTSa is an extension of a previous method (TSa) proposed by the authors. GaTSa is based on the integration of the heuristic simulated annealing (SA), tabu search (TS), genetic algorithms (GA), and backpropagation, whereas TSa does not use GA. The main advantages of GaTSa are the following: a constructive process to add new nodes in the architecture based on GA, the ability to escape from local minima with uphill moves (SA feature), and faster convergence by the evaluation of a set of solutions (TS feature). The performance of GaTSa is investigated through an empirical evaluation of 11 public-domain data sets using different cost functions in the simultaneous optimization of the multilayer perceptron ANN architecture and weights. Experiments demonstrated that GaTSa can also be used for relevant feature selection. GaTSa presented statistically relevant results in comparison with other global and local optimization techniques. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
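Of GaTSa's ingredients, tabu search is the least commonly sketched, so here is a minimal version applied to a binary architecture mask (bit i = keep hidden unit i). The cost function is a made-up stand-in for validation error plus a complexity penalty; the SA moves, GA-based node construction and backpropagation fine-tuning of GaTSa are omitted.

```python
import random

random.seed(8)

n = 12
useful = set(range(4))    # hypothetical: only units 0-3 matter

def cost(mask):
    # Stand-in objective: penalize missing useful units heavily and
    # superfluous units lightly (a complexity penalty).
    missing = sum(1 for i in useful if not mask[i])
    extras = sum(mask[i] for i in range(n) if i not in useful)
    return 1.0 * missing + 0.1 * extras

mask = [random.randint(0, 1) for _ in range(n)]
best, best_cost = mask[:], cost(mask)
tabu = []                 # short-term memory of recently flipped bits
for it in range(100):
    moves = [i for i in range(n) if i not in tabu]
    # Best admissible neighbour: flip the non-tabu bit that yields the
    # lowest resulting cost (uphill moves are allowed).
    i = min(moves,
            key=lambda i: cost(mask[:i] + [1 - mask[i]] + mask[i + 1:]))
    mask[i] = 1 - mask[i]
    tabu.append(i)
    if len(tabu) > 5:
        tabu.pop(0)       # fixed tabu tenure of 5 moves
    if cost(mask) < best_cost:
        best, best_cost = mask[:], cost(mask)

print("best mask:", best, "cost:", round(best_cost, 2))
```

Unlike a greedy hill-climber, the loop always moves, and the tabu list keeps it from immediately undoing recent flips, which is what lets it escape local minima.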
12. DESIGN OF EXPERIMENTS IN NEURO-FUZZY SYSTEMS.
- Author
ZANCHETTIN, CLEBER, MINKU, LEANDRO L., and LUDERMIR, TERESA B.
- Subjects
FUZZY systems, ARTIFICIAL intelligence, ARTIFICIAL neural networks, FUZZY logic, MATHEMATICAL statistics, MATHEMATICAL analysis
- Abstract
Interest in hybrid methods that combine artificial neural networks and fuzzy inference systems has grown in recent years. These systems are robust solutions that search for representations of domain knowledge, reason under uncertainty, and learn and adapt automatically. However, the design of such systems and the definition of effective parameters for them are still hard tasks. In the present work, we perform a statistical analysis to verify interactions and interrelations between parameters in the design of neuro-fuzzy systems. The analysis is carried out using a powerful statistical tool, namely Design of Experiments (DOE), in two neuro-fuzzy models: the Adaptive Neuro Fuzzy Inference System (ANFIS) and Evolving Fuzzy Neural Networks (EFuNN). The results show that, for ANFIS, the number of input MFs and the shape of output MFs are usually the factors with the largest influence on the system's RMSE. For EFuNN, the MF shape and the interaction between MF shape and number usually have the largest effect size. [ABSTRACT FROM AUTHOR]
- Published
- 2010
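The Design of Experiments machinery used above reduces, in the simplest two-level case, to a few sign-weighted averages. The sketch below computes main and interaction effects for a 2×2 full factorial design; the factors mirror the paper's (e.g. MF number and MF shape), but the RMSE values are invented purely to show the arithmetic.

```python
import numpy as np

# 2x2 full factorial design. Each row: (level of factor A, level of
# factor B, observed response). A and B could stand for MF number and
# MF shape; the RMSE responses below are made up for illustration.
runs = np.array([
    [-1, -1, 0.30],
    [+1, -1, 0.22],
    [-1, +1, 0.28],
    [+1, +1, 0.12],
])
A, B, y = runs[:, 0], runs[:, 1], runs[:, 2]

# Effect = (mean response at high level) - (mean response at low
# level) = sign-weighted sum divided by half the number of runs.
effect_A = (y * A).sum() / 2          # main effect of factor A
effect_B = (y * B).sum() / 2          # main effect of factor B
effect_AB = (y * A * B).sum() / 2     # interaction effect
print(f"A: {effect_A:+.3f}  B: {effect_B:+.3f}  AB: {effect_AB:+.3f}")
```

A negative main effect here means raising that factor lowers the RMSE; a sizable interaction effect means the two factors cannot be tuned independently, which is exactly the kind of finding the paper reports.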
13. A multi-objective memetic and hybrid methodology for optimizing the parameters and performance of artificial neural networks
- Author
Almeida, Leandro M. and Ludermir, Teresa B.
- Subjects
ARTIFICIAL neural networks, PARTICLE swarm optimization, MATHEMATICAL optimization, GENETIC algorithms, MEMETICS, EVOLUTIONARY computation
- Abstract
Abstract: The use of artificial neural networks implies considerable time spent choosing a set of parameters that contribute toward improving the final performance. Initial weights, the number of hidden nodes and layers, training algorithm rates and transfer functions are normally selected through a manual process of trial and error that often fails to find the best possible set of neural network parameters for a specific problem. This paper proposes an automatic search methodology for the optimization of the parameters and performance of neural networks. A hybrid global search module relies on Evolution Strategies, Particle Swarm Optimization and concepts from Genetic Algorithms, while a local search module uses the well-known Multilayer Perceptron with the Back-propagation and Levenberg-Marquardt training algorithms. The proposed methodology searches over the aforementioned parameters in an attempt to optimize the networks and their performance. Experiments were performed, and the results showed the proposed method to be better than trial and error and other methods found in the literature. [Copyright Elsevier]
- Published
- 2010
14. Clustering and co-evolution to construct neural network ensembles: An experimental study
- Author
Minku, Fernanda L. and Ludermir, Teresa B.
- Subjects
EVOLUTIONARY computation, ARTIFICIAL neural networks, CLUSTER analysis (Statistics), ALGORITHMS, ARTIFICIAL intelligence, EXPERIMENTS, DISTRIBUTED computing
- Abstract
Abstract: This paper introduces an approach called Clustering and Co-evolution to Construct Neural Network Ensembles (CONE). This approach creates neural network ensembles in an innovative way, by explicitly partitioning the input space through a clustering method. The clustering method allows a reduction in the number of nodes of the neural networks that compose the ensemble, thus reducing the execution time of the learning process. This is an important characteristic especially when evolutionary algorithms are used. The clustering method also ensures that different neural networks specialize in different regions of the input space, working in a divide-and-conquer way, to maintain and improve the accuracy. Besides, the clustering method facilitates the understanding of the system and makes a straightforward distributed implementation possible. The experiments performed with seven classification databases and three different co-evolutionary algorithms show that CONE considerably reduces the execution time without prejudicing (and even improving) the accuracy, even when a distributed implementation is not used. [Copyright Elsevier]
- Published
- 2008
15. An Optimization Methodology for Neural Network Weights and Architectures.
- Author
Ludermir, Teresa B., Yamazaki, Akio, and Zanchettin, Cleber
- Subjects
ARTIFICIAL neural networks, PERCEPTRONS, BACK propagation, SIMULATED annealing, COMBINATORIAL optimization, MATHEMATICAL optimization
- Abstract
This paper introduces a methodology for neural network global optimization. The aim is the simultaneous optimization of multilayer perceptron (MLP) network weights and architectures, in order to generate topologies with few connections and high classification performance for any data set. The approach combines the advantages of simulated annealing, tabu search and the backpropagation training algorithm in order to generate an automatic process for producing networks with high classification performance and low complexity. Experimental results obtained on four classification problems and one prediction problem were better than those obtained by the most commonly used optimization techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2006
16. Equivalence Between RAM-Based Neural Networks and Probabilistic Automata.
- Author
de Souto, Marcilio C. P., Ludermir, Teresa B., and de Oliveira, Wilson R.
- Subjects
ARTIFICIAL neural networks, RANDOM access memory, PROBABILISTIC automata, ALGORITHMS, ALGEBRA, ARTIFICIAL intelligence
- Abstract
In this letter, the computational power of a class of random access memory (RAM)-based neural networks, called general single-layer sequential weightless neural networks (GSSWNNs), is analyzed. The theoretical results presented, besides helping the understanding of the temporal behavior of these networks, could also provide useful insights for the developing of new learning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2005
17. HYBRID NEURAL SYSTEMS FOR PATTERN RECOGNITION IN ARTIFICIAL NOSES.
- Author
Zanchettin, Cleber and Ludermir, Teresa B.
- Subjects
NOSE, ARTIFICIAL organs, ARTIFICIAL neural networks, PATTERN perception, ARTIFICIAL intelligence, NEURAL circuitry, SENSE organs, PROSTHETICS
- Abstract
This work examines the use of Hybrid Intelligent Systems in the pattern recognition system of an artificial nose. The connectionist approaches Multi-Layer Perceptron and Time Delay Neural Networks, and the hybrid approaches Feature-Weighted Detector and Evolving Neural Fuzzy Networks, were investigated. A Wavelet Filter is evaluated as a preprocessing method for odor signals. The signals were generated by an artificial nose composed of an array of conducting polymer sensors and exposed to two different odor databases. [ABSTRACT FROM AUTHOR]
- Published
- 2005
18. Neural Network Training with Global Optimization Techniques.
- Author
Yamazaki, Akio and Ludermir, Teresa B.
- Subjects
SIMULATED annealing, ARTIFICIAL neural networks, ALGORITHMS
- Abstract
This paper presents an approach of using Simulated Annealing and Tabu Search for the simultaneous optimization of neural network architectures and weights. The problem considered is the odor recognition in an artificial nose. Both methods have produced networks with high classification performance and low complexity. Generalization has been improved by using the backpropagation algorithm for fine tuning. The combination of simple and traditional search methods has shown to be very suitable for generating compact and efficient networks. [ABSTRACT FROM AUTHOR]
- Published
- 2003
19. Turing's analysis of computation and artificial neural networks.
- Author
de Oliveira, Wilson R., de Souto, Marcílio C. P., and Ludermir, Teresa B.
- Subjects
TURING machines, ARTIFICIAL neural networks, ARTIFICIAL intelligence, MACHINE theory, ALGORITHMS
- Abstract
A novel way to simulate Turing Machines (TMs) by Artificial Neural Networks (ANNs) is proposed. We claim that the proposed simulation is in agreement with the correct interpretation of Turing's analysis of computation; compatible with the current approaches to analyze cognition as an interactive agent-environment process; and physically realizable since it does not use connection weights with unbounded precision. A full description of an implementation of a universal TM into a recurrent sigmoid ANN focusing on the TM finite state control is given, leaving the tape, an infinite resource, as an external non-intrinsic feature. Also, motivated by the results on the limit of what can actually be computed by ANNs when noise is taken into account, we introduce the notion of Definite Turing Machine and investigate some of its properties. [ABSTRACT FROM AUTHOR]
- Published
- 2002
20. Sequential RAM-based Neural Networks: Learnability, Generalisation, Knowledge Extraction, and Grammatical Inference.
- Author
de Souto, Marcílio C. P., Adeodato, Paulo J. L., and Ludermir, Teresa B.
- Subjects
ARTIFICIAL neural networks, RANDOM access memory, ARTIFICIAL intelligence
- Abstract
A fundamental question in the field of artificial neural networks is what set of problems a given class of networks can perform (computability). Such a problem can be made less general, but no less important, by asking what these networks could learn by using a given training procedure (learnability). The basic purpose of this paper is to address the learnability problem. Specifically, it analyses the learnability of sequential RAM-based neural networks. The analytical tools used are those of Automata Theory. In this context, this paper establishes which class of problems and under what conditions such networks, together with their existing learning rules, can learn and generalise. This analysis also yields techniques for both extracting knowledge from and inserting knowledge into the networks. The results presented here, besides helping in a better understanding of the temporal behaviour of sequential RAM-based networks, could also provide useful insights for the integration of the symbolic/connectionist paradigms. [ABSTRACT FROM AUTHOR]
- Published
- 1999
21. Introduction by Guest Editors.
- Author
Ludermir, Teresa B. and de Souto, Marcilio C. P.
- Subjects
ARTIFICIAL neural networks, CONFERENCES & conventions
- Abstract
Cites several abstracts on neural systems presented at the VIIth Brazilian Symposium on Artificial Neural Networks, held in 2002 at Porto de Galinhas, Pernambuco. Ways of dealing with non-orthogonal signals, by Allan Kardex Barros, Andrzej Cichocki and Noboru Ohnishi; a neuro-symbolic language for monotonic and non-monotonic parallel logical inference by means of artificial neural networks; performance of fuzzy combination schemes.
- Published
- 2003
22. Progress in intelligent systems design.
- Author
Prudêncio, Ricardo B.C. and Ludermir, Teresa B.
- Subjects
ARTIFICIAL intelligence, MACHINE learning, ARTIFICIAL neural networks, COMPUTATIONAL intelligence, PATTERN recognition systems
- Published
- 2016
23. The VIIth Brazilian Symposium on Neural Networks (SBRN'02).
- Author
Ludermir, Teresa B. and de Souto, Marcílio C.P.
- Subjects
CONFERENCES & conventions, ARTIFICIAL neural networks, ARTIFICIAL intelligence, FUZZY systems
- Abstract
Presents information on the papers selected from the VIIth Brazilian Symposium on Neural Networks proceedings, held in 2002. Presentation of a novel class of recurrent neural fuzzy networks; Proposed constructive and pruning methods; Investigation of artificial neural networks.
- Published
- 2002
24. Classical and superposed learning for quantum weightless neural networks
- Author
da Silva, Adenilton J., de Oliveira, Wilson R., and Ludermir, Teresa B.
- Subjects
ARTIFICIAL neural networks, MACHINE learning, ALGORITHMS, NEURONS, POLYNOMIALS, INFORMATION theory
- Abstract
Abstract: A supervised learning algorithm for quantum neural networks (QNNs), based on a novel quantum neuron node implemented as a very simple quantum circuit, is proposed and investigated. In contrast to the QNNs published in the literature, the proposed model can both perform quantum learning and simulate the classical models. This is partly due to the neural model used elsewhere, which has weights and nonlinear activation functions. Here a quantum weightless neural network model is proposed as a quantisation of the classical weightless neural networks (WNNs). The theoretical and practical results on WNNs can be inherited by these quantum weightless neural networks (qWNNs). In the quantum learning algorithm proposed here, the patterns of the training set are presented concurrently in superposition. This superposition-based learning algorithm (SLA) has computational cost polynomial in the number of patterns in the training set. [Copyright Elsevier]
- Published
- 2012
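The classical weightless neural networks that the qWNN quantises are RAM-node (WiSARD-style) discriminators, which are easy to sketch: each node is a lookup table addressed by a fixed random tuple of input bits, training writes to the addressed locations, and recall counts matching nodes. The quantum part of the paper is not attempted here; sizes and patterns are invented.

```python
import random

random.seed(6)

# WiSARD-style RAM discriminators: each node is a lookup table
# addressed by a fixed random tuple of input bits. Training writes 1s
# at the addressed positions; recall counts how many nodes match.
n_bits, tuple_size = 16, 4
n_nodes = n_bits // tuple_size
mapping = list(range(n_bits))
random.shuffle(mapping)          # fixed random input-to-node wiring

def addresses(pattern):
    for j in range(n_nodes):
        bits = tuple(pattern[mapping[j * tuple_size + k]]
                     for k in range(tuple_size))
        yield j, bits

class Discriminator:
    def __init__(self):
        self.rams = [set() for _ in range(n_nodes)]
    def train(self, pattern):
        for j, addr in addresses(pattern):
            self.rams[j].add(addr)           # write a 1 here
    def score(self, pattern):
        return sum(addr in self.rams[j] for j, addr in addresses(pattern))

zeros, ones = Discriminator(), Discriminator()
zeros.train([0] * n_bits)
ones.train([1] * n_bits)

noisy_one = [1] * n_bits
noisy_one[3] = 0                 # flip one bit of the all-ones pattern
label = 1 if ones.score(noisy_one) >= zeros.score(noisy_one) else 0
print("noisy all-ones pattern classified as:", label)
```

Note there are no weights at all: a single training pass writes lookup-table entries, and generalisation comes from partial matches across the node tuples.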
25. Forecasting models for interval-valued time series
- Author
Maia, André Luis S., de Carvalho, Francisco de A.T., and Ludermir, Teresa B.
- Subjects
BOX-Jenkins forecasting, ARTIFICIAL neural networks, HYBRID systems, TIME series analysis, FORECASTING, MONTE Carlo method
- Abstract
Abstract: This paper presents approaches to interval-valued time series forecasting. The first and second approaches are based on the autoregressive (AR) and autoregressive integrated moving average (ARIMA) models, respectively. The third approach is based on an artificial neural network (ANN) model, and the last is based on a hybrid methodology that combines the ARIMA and ANN models. Each approach fits two models, on the mid-point and on the range of the interval values assumed by the interval-valued time series in the learning set. The lower and upper bounds of the interval value of the time series are forecast through a combination of forecasts from the mid-point and the range of the interval values. The evaluation of the models presented is based on estimating the average behavior of the mean absolute error and mean squared error in the framework of a Monte Carlo experiment. The results demonstrate that the approaches are useful forecasting alternatives for interval-valued time series and indicate that the hybrid model is an effective way to improve the forecasting accuracy achieved by any one of the models separately. [Copyright Elsevier]
- Published
- 2008
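The mid-point/range decomposition underlying all four approaches above can be sketched with the simplest of them, an AR(1) model fitted by least squares to each component. The interval-valued series below is simulated for illustration; the paper's ARIMA, ANN and hybrid variants follow the same recombination step.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated interval-valued series: a random-walk mid-point and a
# positive half-range (all values invented for the illustration).
n = 200
mid = 10 + np.cumsum(0.1 * rng.standard_normal(n))
rad = 1.0 + 0.1 * rng.standard_normal(n) ** 2
low, high = mid - rad, mid + rad

def ar1_fit(s):
    # Least-squares AR(1): s[t] ~ c + phi * s[t-1]
    A = np.stack([np.ones(len(s) - 1), s[:-1]], axis=1)
    c, phi = np.linalg.lstsq(A, s[1:], rcond=None)[0]
    return c, phi

# One model per component, then recombine into interval bounds.
(cm, pm), (cr, pr) = ar1_fit(mid), ar1_fit(rad)
mid_hat = cm + pm * mid[-1]      # one-step mid-point forecast
rad_hat = cr + pr * rad[-1]      # one-step half-range forecast
low_hat, high_hat = mid_hat - rad_hat, mid_hat + rad_hat
print(f"one-step forecast interval: [{low_hat:.2f}, {high_hat:.2f}]")
```

Forecasting mid-point and range separately, rather than the two bounds directly, guarantees by construction that the predicted lower bound never exceeds the upper bound as long as the range forecast stays positive.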
26. BRACIS 2015: Progress in computation intelligence in Brazil.
- Author
Pappa, Gisele Lobo, Revoredo, Kate Cerqueira, and Ludermir, Teresa B.
- Subjects
ARTIFICIAL intelligence, CONFERENCES & conventions, COMPUTATIONAL intelligence, ARTIFICIAL neural networks, PROBLEM solving, QUANTUM computing
- Published
- 2017
27. An efficient static gesture recognizer embedded system based on ELM pattern recognition algorithm.
- Author
Cambuim, Lucas F.S., Macieira, Rafael M., Neto, Fernando M.P., Barros, Edna, Ludermir, Teresa B., and Zanchettin, Cleber
- Subjects
EMBEDDED computer systems, PATTERN recognition systems, ALGORITHMS, MACHINE learning, COMPUTER vision, FIELD programmable gate arrays, ARTIFICIAL neural networks
- Abstract
Millions of people throughout the world describe themselves as deaf. Some of them suffer from severe hearing loss and consequently use alternative means of communicating with society, through either written or visual language. There are several sign languages capable of dealing with such a need. Nonetheless, a communication gap still exists even when using such languages, since only a small fraction of the population is able to use them. Over the last few years, due to the increasing need for universal accessibility when using computational resources, gesture recognition has been widely researched. Thus, in an attempt to reduce this communication gap, our approach proposes a computational solution to translate static gesture symbols into text symbols, through computer vision, without the use of hand sensors or gloves. In order to guarantee the highest quality, with emphasis on the reliability of the system and real-time translation, we have developed an approach based on the Extreme Learning Machine (ELM) pattern recognition algorithm, fully implemented in hardware, and have assessed it on these two metrics. Hardware components were designed to perform the image processing and pattern recognition tasks used within the project. As a case study, and so as to validate the technique, a recognition system for the Brazilian Sign Language (LIBRAS) was implemented. Besides ensuring that this approach could be used for any static hand gesture symbol recognition, our main goal was to guarantee fast, reliable gesture recognition for communication between humans. Experimental results have demonstrated that the system is able to recognize LIBRAS symbols with an accuracy of 97%, a response time of 6.5 ms per letter recognition, and using only 43% (about 64,851 logic elements) of the FPGA area. [ABSTRACT FROM AUTHOR]
- Published
- 2016