840 results for "learning algorithm"
Search Results
802. ANALYSIS OF THE SIMULATED ANNEALING METHOD IN CLASSIC BOLTZMANN MACHINES
- Author
-
Pēteris Grabusts
- Subjects
Thermal equilibrium ,Restricted Boltzmann machine ,Mathematical optimization ,Artificial neural network ,Computer science ,Boltzmann machine ,Adaptive simulated annealing ,Hopfield network ,Recurrent networks ,learning algorithm ,simulated annealing ,symbols.namesake ,Boltzmann constant ,Simulated annealing ,symbols ,Statistical physics - Abstract
The paper analyses the neural net model proposed by Hinton et al. (1985), who added noise to a Hopfield net and called the result a Boltzmann machine (BM), by analogy with the behaviour of physical systems with noise. The concept of simulated annealing is analysed. The experiment, aimed at testing the state of thermal equilibrium of a Boltzmann net with three neurons and specified threshold values and weights at two different temperatures, T=1 and T=0.25, is described (a generic sketch of this stochastic update appears after this record).
- Published
- 1997
- Full Text
- View/download PDF
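The abstract of record 802 describes sampling a three-neuron Boltzmann network at two temperatures to test thermal equilibrium. The Python sketch below illustrates the standard stochastic-unit update with a Boltzmann (sigmoid) firing probability at temperature T; the weights and thresholds are hypothetical placeholders, not the values used in the paper.

```python
import math
import random
from collections import Counter

# Hypothetical symmetric weights and thresholds for a 3-neuron Boltzmann net;
# the paper's actual values are not reproduced here.
W = [[0.0, 1.0, -1.0],
     [1.0, 0.0, 0.5],
     [-1.0, 0.5, 0.0]]
theta = [0.2, -0.1, 0.3]

def gibbs_sample(T, steps=20000, seed=0):
    """Repeatedly update randomly chosen units with the Boltzmann rule
    p(s_i = 1) = 1 / (1 + exp(-net_i / T)) and count visited global states."""
    rng = random.Random(seed)
    s = [rng.choice([0, 1]) for _ in range(3)]
    counts = Counter()
    for _ in range(steps):
        i = rng.randrange(3)
        net = sum(W[i][j] * s[j] for j in range(3)) - theta[i]
        p_on = 1.0 / (1.0 + math.exp(-net / T))
        s[i] = 1 if rng.random() < p_on else 0
        counts[tuple(s)] += 1
    return counts

# At T = 1 the state distribution stays broad; at T = 0.25 it concentrates
# on low-energy states, which is the qualitative effect of annealing.
for T in (1.0, 0.25):
    print(T, gibbs_sample(T).most_common(3))
```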
803. Neural networks for process identification.
- Author
-
Haesloop, D. and Holt, B.R.
- Abstract
The application of neural networks to the development of dynamic models is considered. In particular, the authors present a common layered structure used for backward error propagation that is modified by the addition of direct linear connections between the input and output layers. For problems which have a significant linear component, such as those posed by process identification, this neural network structure offers significant promise. The neural network can be initialized in a meaningful fashion using the linear connections. Compared to standard neural network structures, the network can learn faster, can extrapolate better, and can be used to provide information on the extent of nonlinearities of the problem and on the learning algorithm itself [ABSTRACT FROM PUBLISHER]
- Published
- 1990
- Full Text
- View/download PDF
804. Unsupervising adaption neural-network control.
- Author
-
Wang, G.-J. and Miu, D.K.
- Abstract
Unsupervised learning control systems based on neural networks are discussed. The tasks are carried out by two neural networks, which act as the plant identifier and the system controller, respectively. A novel learning algorithm has been developed that can adapt the controller's control action by using information stored in the identifying network. This learning control system can learn, without supervision, to perform dynamic control of a difficult learning control problem such as the inverted pendulum. Its robustness can be seen from its ability to adapt to large parameter changes and from its high fault tolerance. Simulation results are encouraging [ABSTRACT FROM PUBLISHER]
- Published
- 1990
- Full Text
- View/download PDF
805. Linear discriminants, logic functions, backpropagation, and improved convergence.
- Author
-
Yang, H. and Guest, C.C.
- Abstract
A modified learning algorithm for BP (backpropagation) neural networks is presented based on the interpretation of a neuron in an (N+1)-dimensional space, R^(N+1), and the analysis of how a multilayered network performs a classification task by collective use of the B (boundary) neurons in the first layer. The role of B neurons and L (logic) neurons is discussed. A B neuron represents a linear boundary in the input space R^N. An L neuron in the second layer defines in R^N a convex piecewise linear boundary that is formed by a set of line segments corresponding to connected B neurons. An L neuron in the third layer defines in R^N a complicated boundary that can have both convex parts and concave parts. Each subboundary of it corresponds to an L neuron in the second layer. The nonlinear function does not change the structure of a boundary but smooths the angular vertices of a boundary. It is shown that for the two-class problems with convex boundaries, the nonlinear function in an L neuron can be replaced with a step and the weights of the L neuron can be fixed. Computer simulation shows that this algorithm can learn more quickly than the ordinary BP algorithm, even in cases where the ordinary BP algorithm will fail [ABSTRACT FROM PUBLISHER]
- Published
- 1990
- Full Text
- View/download PDF
806. Learning by parallel forward propagation.
- Author
-
Abe, S.
- Abstract
The back-propagation algorithm is widely used for learning the weights of multilayered neural networks. Its major drawbacks, however, are slow convergence and the lack of a proper way to set the number of hidden neurons. The author proposes a learning algorithm which solves the above problems. The weights between two layers are successively calculated, with the other weights fixed, so that the error function, which is the squared sum of the differences between the training data and the network outputs, is minimised. Since the calculation amounts to solving a set of linearized equations, redundancy of the hidden neurons is judged by the singularity of the corresponding coefficient matrix (see the sketch after this record). For the exclusive-OR and parity check circuits, excellent convergence characteristics are obtained, and the redundancy of the hidden neurons is checked by the singularity of the matrix [ABSTRACT FROM PUBLISHER]
- Published
- 1990
- Full Text
- View/download PDF
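Abe's abstract (record 806) describes computing the weights of one layer by solving a set of linearized equations while the other weights are held fixed, and judging hidden-neuron redundancy from the singularity of the coefficient matrix. The sketch below shows that idea in a generic form for a sigmoid output layer; the data, shapes, and the use of a rank check are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fit_output_weights(H, T, eps=1e-6):
    """Given fixed hidden activations H (n_samples x n_hidden) and targets T
    (n_samples x n_outputs) in (0, 1), linearize the sigmoid output layer by
    mapping targets through the logit, then solve the least-squares system
    H @ W = logit(T) for the output weights W."""
    T = np.clip(T, eps, 1 - eps)
    Z = np.log(T / (1 - T))                      # inverse of the sigmoid
    W, residuals, rank, sv = np.linalg.lstsq(H, Z, rcond=None)
    # A rank-deficient coefficient matrix signals redundant hidden neurons:
    # some hidden outputs are (numerically) linear combinations of others.
    redundant = rank < H.shape[1]
    return W, redundant

# Toy usage with random hidden activations plus a duplicated hidden column,
# which makes the coefficient matrix singular.
rng = np.random.default_rng(0)
H = rng.random((20, 3))
H = np.hstack([H, H[:, :1]])                    # redundant 4th hidden neuron
T = rng.random((20, 1))
W, redundant = fit_output_weights(H, T)
print(W.shape, "redundant hidden units detected:", redundant)
```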
807. Classification of large set of handwritten characters using modified back propagation model.
- Author
-
Krzyzak, A., Dai, W., and Suen, C.Y.
- Abstract
A novel recognition system has been implemented to solve the difficult problem of handwritten numeral recognition. In this system, the Fourier descriptors are used as dominant features, and a modified backpropagation model is applied to classification. A novel backpropagation learning algorithm has been developed, and its performance has been evaluated. The results show that the learning algorithm is superior to the original backpropagation model. The proposed algorithm was able to solve the nonconvergence problem typically occurring with the standard backpropagation approach. The algorithm has been tested on handwritten numerals collected by the US Post Office [ABSTRACT FROM PUBLISHER]
- Published
- 1990
- Full Text
- View/download PDF
808. Logistic regression and the Boltzmann machine.
- Author
-
DeStefano, J.J.
- Abstract
A derivation of the learning algorithm for the Boltzmann machine is presented. It uses a statistical tool called logistic regression, in which the connection strengths in the Boltzmann machine correspond to the parameters of the logistic model. The use of maximum-likelihood estimates for the parameters leads to the standard learning algorithm for the Boltzmann machine and may be easily extended to N-way connections. This formulation makes explicit the contribution of higher-order connections and has sparked research into analysis of the tradeoff between their increased learning power and the increased number of connections they require [ABSTRACT FROM PUBLISHER]
- Published
- 1990
- Full Text
- View/download PDF
809. Convergence of Kohonen's learning vector quantization.
- Author
-
Baras, J.S. and LaVigna, A.
- Abstract
It is shown that the learning vector quantization (LVQ) algorithm (T. Kohonen, 1986) converges to locally asymptotically stable equilibria of an ordinary differential equation. It is shown that the learning algorithm performs stochastic approximation. Convergence of the vectors is guaranteed under appropriate conditions on the underlying statistics of the classification problem (a minimal LVQ update is sketched after this record). Also presented is a modification to the learning algorithm which results in more robust convergence. With this modification, it is possible to show that as the appropriate parameters go to infinity, the decision regions associated with the modified LVQ algorithm approach the Bayesian optimal [ABSTRACT FROM PUBLISHER]
- Published
- 1990
- Full Text
- View/download PDF
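Kohonen's LVQ algorithm, whose convergence is analysed in record 809, moves the nearest codebook vector toward a training sample when the class labels agree and away when they disagree. A minimal LVQ1 loop is sketched below; the decreasing learning-rate schedule is an assumption consistent with the stochastic-approximation view, not a detail taken from the paper.

```python
import numpy as np

def lvq1_train(X, y, codebooks, labels, n_epochs=10, alpha0=0.1):
    """Basic LVQ1: attract the winning prototype if its label matches the
    sample's class, repel it otherwise. The learning rate decays over time,
    as required for stochastic-approximation style convergence."""
    codebooks = codebooks.copy()
    t, total = 0, n_epochs * len(X)
    for _ in range(n_epochs):
        for x, cls in zip(X, y):
            alpha = alpha0 * (1 - t / total)                      # decreasing step size
            w = np.argmin(np.linalg.norm(codebooks - x, axis=1))  # winning prototype
            sign = 1.0 if labels[w] == cls else -1.0
            codebooks[w] += sign * alpha * (x - codebooks[w])
            t += 1
    return codebooks
```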
810. Nonlinear dynamic system identification using artificial neural networks (ANNs).
- Author
-
Fernandez, B., Parlos, A.G., and Tsai, W.K.
- Abstract
A recurrent multilayer perceptron (MLP) network topology is used in the identification of nonlinear dynamic systems from only the input/output measurements. This effort is part of a research program devoted to developing real-time diagnostics and predictive control techniques for large-scale complex nonlinear dynamic systems. The identification is performed in the discrete-time domain, with the learning algorithm being a modified form of the back-propagation (BP) rule. The recurrent dynamic network (RDN) developed is used for the identification of a simple power plant boiler with known nonlinear behavior. Results indicate that the RDN can reproduce the nonlinear response of the boiler while keeping the number of nodes roughly equal to the relative order of the system. A number of issues are identified regarding the behavior of the RDN which are unresolved and require further research. Use of the recurrent MLP structure with a variety of different learning algorithms may prove useful in utilizing artificial neural networks for recognition, classification, and prediction of dynamic patterns [ABSTRACT FROM PUBLISHER]
- Published
- 1990
- Full Text
- View/download PDF
811. Neural network data fusion concepts and application.
- Author
-
Eggers, M. and Khuon, T.
- Abstract
A neural network data fusion decision system and the accompanying experimental results obtained from joint sensor data are presented. The decisions include detection and correct classification of space object maneuvers simultaneously observed by two radars of different aspect, frequency, and resolution. The system consists of a statistically-based adaptive preprocessor for each sensor, followed by a highly parallel neural network for associating the preprocessor outputs with the appropriate decisions. The preprocessing approach, supported by a signal decomposition theorem, recursively models the detrended sensor data as an autoregressive process of sufficiently high order. This approach also accommodates nonstationary data by incorporating an information-theoretic transition detector which identifies the segments of near-stationary data. Together, feature vectors are produced over near-stationary segments of data which are scale invariant, translation invariant, and normalized and represent sufficient statistics. Subsequently, the feature vectors arising from the sensor preprocessors are collectively associated with the correct output decision. The association is conducted by a multilayer perceptron neural network associative memory employing a modified learning algorithm which converges at a rate comparable to that of conventional algorithms, yet requires less computation [ABSTRACT FROM PUBLISHER]
- Published
- 1990
- Full Text
- View/download PDF
812. Fully distributed diagnosis by PDP learning algorithm: towards immune network PDP model.
- Author
-
Ishida, Y.
- Abstract
Based on the strong analogy between neural networks and distributed diagnosis models, diagnostic algorithms are presented which are similar to the learning algorithms used in neural networks. Diagnostic implications of convergence theorems proved via a Lyapunov function are also discussed. Regarding the diagnosis process as a recall process in associative memory, a method of associative diagnosis is also presented, in which a good initial guess serves as the key for recalling the correct diagnosis. The authors regard the distributed diagnosis as an immune network model, a novel PDP (parallel distributed processing) model, which captures the recognition capability that emerges from cooperative recognition by interconnected units [ABSTRACT FROM PUBLISHER]
- Published
- 1990
- Full Text
- View/download PDF
813. Silicon compiler for neuro-ASICs.
- Author
-
Ouali, J. and Saucier, G.
- Abstract
A distributed, synchronous architecture for artificial neural networks is proposed. A basic processor is associated with a neuron and is able to perform autonomously all the steps of the learning and the relaxation phases. Data circulation is implemented by shifting techniques. Customization of the network is done by setting identification data in dedicated memory elements. The neuron has been implemented on silicon. It is shown that, in a silicon compiler environment, dedicated networks can be easily generated by cascading these elementary blocks [ABSTRACT FROM PUBLISHER]
- Published
- 1990
- Full Text
- View/download PDF
814. A Stable Online Algorithm for Energy-Efficient Multiuser Scheduling.
- Author
-
Salodkar, Nitin, Karandikar, Abhay, and Borkar, Vivek S.
- Subjects
ALGORITHMS ,MULTIUSER detection (Telecommunication) ,RADIO transmitter fading ,MARKOV processes ,COMPUTER scheduling ,ENERGY consumption ,IEEE 802.16 (Standard) - Abstract
In this paper, we consider the problem of energy-efficient uplink scheduling with a delay constraint for a multiuser wireless system. We address this problem within the framework of constrained Markov decision processes (CMDPs), wherein one seeks to minimize one cost (average power) subject to a hard constraint on another (average delay). We do not assume the arrival and channel statistics to be known. To handle state-space explosion and informational constraints, we split the problem into individual CMDPs for the users, coupled through their Lagrange multipliers, and a user selection problem at the base station. To address the issue of unknown channel and arrival statistics, we propose a reinforcement learning algorithm. The users use this learning algorithm to determine the rate at which they wish to transmit in a slot and communicate this to the base station. The base station then schedules the user with the highest rate in a slot (this coupling is sketched after this record). We analyze convergence, stability, and optimality properties of the algorithm. We also demonstrate the efficacy of the algorithm through simulations within an IEEE 802.16 system. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
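The coupling described in record 814 is simple to state: each user learns the rate at which it wants to transmit, the base station serves the user announcing the highest rate, and a per-user Lagrange multiplier trades average power against the delay constraint. The sketch below shows only that coupling; the multiplier update is a generic projected subgradient step and the random rate choice is a placeholder, not the paper's reinforcement learning algorithm.

```python
import random

def base_station_schedule(requested_rates):
    """Schedule the user that announced the highest rate in this slot."""
    return max(range(len(requested_rates)), key=lambda u: requested_rates[u])

def update_multiplier(lam, observed_delay, delay_bound, step=0.01):
    """Projected subgradient ascent on a per-user Lagrange multiplier:
    increase lam when the average-delay constraint is violated, decrease
    it (never below zero) when there is slack."""
    return max(0.0, lam + step * (observed_delay - delay_bound))

# Toy slot: three users announce rates chosen here at random as a stand-in
# for the per-user learning algorithm described in the paper.
rates = [random.uniform(0, 1) for _ in range(3)]
print("scheduled user:", base_station_schedule(rates))
```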
815. Learning Outcome-Discriminative Dynamics in Multivariate Physiological Cohort Time Series
- Author
-
Nemati, Shamim, Lehman, Li-wei H., and Adams, Ryan Prescott
- Subjects
biomedical monitoring ,heart rate ,heuristic algorithms ,prediction algorithms ,switches ,time series analysis ,SLDS ,artifactual segment ,blood pressure ,classification over method ,expectation maximization ,feature learning ,learning algorithm ,learning outcome-discriminative dynamics ,measurement artifact ,model identification ,multivariate physiological cohort time series ,physiologically-constrained dynamical model ,postural change decoding ,state-space formulation ,switching linear dynamical system framework ,time series dynamics ,cardiology ,expectation-maximisation algorithm ,learning systems ,medical signal processing ,multivariable systems ,physiology ,signal classification ,time series - Abstract
Model identification for physiological systems is complicated by changes between operating regimes and measurement artifacts. We present a solution to these problems by assuming that a cohort of physiological time series is generated by switching among a finite collection of physiologically-constrained dynamical models and artifactual segments. We model the resulting time series using the switching linear dynamical systems (SLDS) framework, and present a novel learning algorithm for the class of SLDS, with the objective of identifying time series dynamics that are predictive of physiological regimes or outcomes of interest. We present exploratory results based on a simulation study and a physiological classification example of decoding postural changes from heart rate and blood pressure. We demonstrate a significant improvement in classification over methods based on feature learning via expectation maximization. The proposed learning algorithm is general, and can be extended to other applications involving state-space formulations.
- Published
- 2013
- Full Text
- View/download PDF
816. Predictive modelling of the LD50 activities of coumarin derivatives using neural statistical approaches: Electronic descriptor-based DFT
- Author
-
Tahar Lakhlifi, Majdouline Larif, Mohammed Bouachrine, Samir Chtita, Azeddine Adad, and Rachid Hmamouchi
- Subjects
0301 basic medicine ,Quantitative structure–activity relationship ,Mean squared error ,Correlation coefficient ,Machine learning ,computer.software_genre ,01 natural sciences ,MLR ,Set (abstract data type) ,03 medical and health sciences ,Linear regression ,lcsh:Science (General) ,Learning algorithm ,Artificial neural network ,Predicted ,Chemistry ,business.industry ,QSAR ,0104 chemical sciences ,Levenberg–Marquardt algorithm ,010404 medicinal & biomolecular chemistry ,030104 developmental biology ,Levenberg–Marquardt ,Artificial intelligence ,Biological system ,business ,ANN ,computer ,Predictive modelling ,lcsh:Q1-390 - Abstract
A quantitative structure–activity relationship (QSAR) study was performed on a set of 30 coumarin-based molecules, using multiple linear regression (MLR) and an artificial neural network (ANN) (an MLR baseline of this kind is sketched after this record). The predicted values of the antioxidant activities of the coumarins were in good agreement with the experimental results. Several statistical criteria, such as the mean squared error (MSE) and the correlation coefficient (R), were used to evaluate the developed models. The best results were obtained with a [8-4-1] network architecture (R = 0.908, MSE = 0.032), tansig–purelin activation functions and the Levenberg–Marquardt learning algorithm. The model proposed in this study relies on a large set of electronic descriptors to describe these molecules. The results suggest that the proposed combination of calculated parameters may be useful for predicting the antioxidant activities of coumarin derivatives.
- Full Text
- View/download PDF
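A QSAR study of the kind described in record 816 typically starts from a multiple linear regression of the activity on the molecular descriptors before moving to the ANN. The brief sketch below shows that baseline step on a hypothetical descriptor matrix and reports R and MSE, the two criteria quoted in the abstract; it is not the authors' code or data.

```python
import numpy as np

def mlr_qsar(X, y):
    """Fit activity = X @ beta + b by least squares and report the
    correlation coefficient R and mean squared error (MSE) on the
    training set."""
    A = np.hstack([X, np.ones((len(X), 1))])         # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ coef
    mse = float(np.mean((pred - y) ** 2))
    r = float(np.corrcoef(pred, y)[0, 1])
    return coef, r, mse

# Hypothetical data: 30 molecules x 8 electronic descriptors.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 8))
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=30)
_, r, mse = mlr_qsar(X, y)
print(f"R = {r:.3f}, MSE = {mse:.3f}")
```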
817. Arm Exoskeleton for Rehabilitation Following Stroke by Learning Algorithm Prediction
- Author
-
Sukarnur Che Abdullah, M. Hanif M. Ramli, and Mohd Amiruddin Fikri
- Subjects
Rehabilitation ,Computer science ,medicine.medical_treatment ,stroke ,Exoskeleton ,rehabilitation ,learning algorithm ,medicine ,General Earth and Planetary Sciences ,arm exoskeleton ,Robotic arm ,Algorithm ,Simulation ,General Environmental Science - Abstract
Stroke is a major cause of disability worldwide and one of the leading causes of death after coronary heart disease. Many devices have been designed for hand motor function rehabilitation that a stroke survivor can use for bilateral movement practice. This paper presents an arm motor function rehabilitation device designed to predict the position angle of the robotic arm. MATLAB is used for real-time positioning, developed as a SIMULINK block diagram and verified in simulation so that the device operates under the position demand. All angular motions are fed back to the simulation from the attached optical encoders via a data acquisition card (DAQ). The learning algorithm can directly determine the position of each joint and can therefore completely eliminate the need for any system modelling. The robotic arm demonstrates a successful implementation of the learning algorithm in predicting the behaviour of the arm exoskeleton.
- Full Text
- View/download PDF
818. An application of an associative learning model to a Morris pool with a single landmark
- Author
-
Miquel Noguera, Victoria D. Chamizo, José Luis Díaz-Barrero, Miquel Grau-Sánchez, and T. Rodrigo
- Subjects
Landmark ,business.industry ,Computer science ,Latencies ,Associative learning ,McLaren model ,Computational Mathematics ,Saliences ,Computational Theory and Mathematics ,Salience (neuroscience) ,Modelling and Simulation ,Modeling and Simulation ,Artificial intelligence ,business ,Learning algorithm - Abstract
A simplified model of stimulus representation is presented, together with a set of algorithms for analysing associative learning in particular cases with predetermined values of stimulus salience (a minimal error-correcting update of this kind is sketched after this record). We simulate an experiment in which rats were trained in a Morris pool to find a hidden platform in the presence of a single landmark. The results obtained agree with a previous study which found that the control acquired by a single landmark differs depending on its relative distance from the hidden platform. In this paper, some simplified equations of the associative learning model have been used.
- Full Text
- View/download PDF
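The associative model simulated in record 818 updates the associative strength of each cue in proportion to its salience and the prediction error, in the spirit of the Rescorla–Wagner/McLaren family of models. A minimal version of that update is sketched below; the salience values and learning parameters are placeholders, not the ones fitted in the paper.

```python
def associative_update(V, saliences, present, outcome, beta=0.2):
    """Error-correcting learning rule: each cue that is present gains
    (or loses) associative strength in proportion to its salience and to
    the difference between the outcome and the summed prediction."""
    prediction = sum(V[c] for c in present)
    error = outcome - prediction
    for c in present:
        V[c] += saliences[c] * beta * error
    return V

# Toy trial: a single landmark cue paired with finding the platform (outcome = 1).
V = {"landmark": 0.0, "context": 0.0}
saliences = {"landmark": 0.5, "context": 0.1}      # assumed salience values
for _ in range(20):
    V = associative_update(V, saliences, ["landmark", "context"], outcome=1.0)
print(V)
```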
819. An Efficient Feature Extraction Method with Pseudo-Zernike Moment in RBF Neural Network-Based Human Face Recognition System
- Author
-
Majid Ahmadi, Javad Haddadnia, and Karim Faez
- Subjects
Zernike polynomials ,Computer science ,Feature extraction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,lcsh:TK7800-8360 ,02 engineering and technology ,pseudo-Zernike moment ,Facial recognition system ,lcsh:Telecommunication ,symbols.namesake ,Digital image ,lcsh:TK5101-6720 ,0202 electrical engineering, electronic engineering, information engineering ,Three-dimensional face recognition ,Radial basis function ,Computer vision ,Invariant (mathematics) ,Electrical and Electronic Engineering ,moment invariant ,Artificial neural network ,business.industry ,lcsh:Electronics ,020206 networking & telecommunications ,Pattern recognition ,face localization ,human face recognition ,RBF neural network ,learning algorithm ,ComputingMethodologies_PATTERNRECOGNITION ,Hardware and Architecture ,Signal Processing ,symbols ,020201 artificial intelligence & image processing ,Artificial intelligence ,business - Abstract
This paper introduces a novel method for the recognition of human faces in digital images using a new feature extraction method that combines the global and local information in frontal view of facial images. Radial basis function (RBF) neural network with a hybrid learning algorithm (HLA) has been used as a classifier. The proposed feature extraction method includes human face localization derived from the shape information. An efficient distance measure as facial candidate threshold (FCT) is defined to distinguish between face and nonface images. Pseudo-Zernike moment invariant (PZMI) with an efficient method for selecting moment order has been used. A newly defined parameter named axis correction ratio (ACR) of images for disregarding irrelevant information of face images is introduced. In this paper, the effect of these parameters in disregarding irrelevant information in recognition rate improvement is studied. Also we evaluate the effect of orders of PZMI in recognition rate of the proposed technique as well as RBF neural network learning speed. Simulation results on the face database of Olivetti Research Laboratory (ORL) indicate that the proposed method for human face recognition yielded a recognition rate of 99.3%.
- Full Text
- View/download PDF
820. Optimalizační algoritmy v logistických kombinatorických úlohách (Optimization algorithms in logistic combinatorial problems)
- Author
-
Hrubý, Martin and Peringer, Petr
- Abstract
This thesis deals with optimization problems, with the main focus on the logistic Vehicle Routing Problem (VRP). In the first part, the notion of optimization is introduced and the most important optimization problems are presented. The next section covers methods capable of solving these problems. Selected methods are then applied to the VRP, together with several enhancements of those algorithms. The thesis also introduces a learning method capable of reusing knowledge of previous solutions. Finally, experiments are performed to tune the parameters of the individual methods and to assess the benefit of the suggested improvements.
824. Optimalizační algoritmy v logistických kombinatorických úlohách
- Author
-
Hrubý, Martin, Peringer, Petr, and Bokiš, Daniel
- Abstract
This thesis deals with optimization problems, with the main focus on the logistic Vehicle Routing Problem (VRP). In the first part, the notion of optimization is introduced and the most important optimization problems are presented. The next section covers methods capable of solving these problems. Selected methods are then applied to the VRP, together with several enhancements of those algorithms. The thesis also introduces a learning method capable of reusing knowledge of previous solutions. Finally, experiments are performed to tune the parameters of the individual methods and to assess the benefit of the suggested improvements.
826. Inter-chip communications in an analogue neural network utilising frequency division multiplexing
- Author
-
Craven, Michael P.
- Abstract
As advances have been made in semiconductor processing technology, the number of transistors on a chip has increased out of step with the number of input/output pins, which has introduced a communications 'bottleneck' in the design of computer architectures. This is a major issue in the hardware design of parallel structures implemented in either digital or analogue VLSI, and is particularly relevant to the design of neural networks which need to be highly interconnected. This work reviews hardware implementations of neural networks, with an emphasis on analogue implementations, and proposes a new method for overcoming connectivity constraints, by the use of Frequency Division Multiplexing (FDM) for the inter-chip communications. In this FDM scheme, multiple analogue signals are transmitted between chips on a single wire by modulating them at different frequencies. The main theoretical work examines the number of signals which can be packed into an FDM channel, depending on the quality factors of the filters used for the demultiplexing, and a fractional overlap parameter which was defined to take into account the inevitable overlapping of filter frequency responses. It is seen that by increasing the amount of permissible overlap, it is possible to communicate a larger number of signals in a given bandwidth. Alternatively, the quality factors of the filters can be reduced, which is advantageous for hardware implementation. Therefore, it was found necessary to determine the amount of overlap which might be permissible in a neural network implementation utilising FDM communications. A software simulator is described, which was designed to test the effects of overlap on Multilayer Perceptron neural networks. Results are presented for networks trained with the backpropagation algorithm, and with the alternative weight perturbation algorithm. These were carried out using both floating point and quantised weights to examine the combined effects of overlap and weight quantisation.
827. Maintaining regularity and generalization in data using the minimum description length principle and genetic algorithm: case of grammatical inference
- Author
-
Pandey, Hari Mohan, Chaudhary, Ankit, Mehrotra, Deepti, and Kendall, Graham
- Abstract
In this paper, a genetic algorithm with minimum description length (GAWMDL) is proposed for grammatical inference. The primary challenge in identifying a language of infinite cardinality from a finite set of examples is knowing when to generalize and when to specialize the training data. The minimum description length principle incorporated here addresses this issue and is discussed in this paper. Previously, the e-GRIDS learning model was proposed, which enjoyed the merits of the minimum description length principle, but it is limited to positive examples only. The proposed GAWMDL incorporates a traditional genetic algorithm and has a powerful global exploration capability that can exploit an optimum offspring; this is an effective approach for a problem with a large search space such as the grammatical inference problem. The computational capability of the genetic algorithm is not in question, but it still suffers from premature convergence, mainly arising from a lack of population diversity. The proposed GAWMDL incorporates a bit-mask-oriented data structure that performs the reproduction operations: the mask is created, and a Boolean-based procedure is then applied to create offspring in a generative manner. The Boolean-based procedure is capable of introducing diversity into the population, hence alleviating premature convergence. The proposed GAWMDL is applied to context-free as well as regular languages of varying complexities. The computational experiments show that the GAWMDL finds an optimal or close-to-optimal grammar. A two-fold performance analysis has been performed. First, the GAWMDL is evaluated against the elite mating pool genetic algorithm, which was proposed to introduce diversity and to address premature convergence. GAWMDL is also tested against the improved tabular representation algorithm. In addition, the authors evaluate the performance of the GAWMDL against a genetic algorithm not using the minimum description length principle.
- Full Text
- View/download PDF
831. Bilateral contract prices estimation using a Q-learning based approach
- Author
-
Jaime Rodriguez-Fernandez, Juan M. Corchado, Isabel Praça, Francisco Silva, Zita Vale, and Tiago Pinto
- Subjects
Decision support system ,Operations research ,Artificial neural network ,Computer science ,business.industry ,Process (engineering) ,negotiation process ,020209 energy ,media_common.quotation_subject ,02 engineering and technology ,bilateral contracts ,Negotiation ,electricity market ,Order (exchange) ,learning algorithm ,0202 electrical engineering, electronic engineering, information engineering ,Electricity market ,020201 artificial intelligence & image processing ,business ,decision support ,Expected utility hypothesis ,Risk management ,media_common - Abstract
The electricity market restructuring process has encouraged the use of computational tools to study different market mechanisms and the relationships between the participating entities. Automated negotiation plays a crucial role in decision support for energy transactions due to the constant need for players to engage in bilateral negotiations. This paper proposes a methodology to estimate bilateral contract prices, which is essential to support market players in their decisions, enabling adequate risk management of the negotiation process. The proposed approach uses an adaptation of the Q-Learning reinforcement learning algorithm to choose the best from a set of possible contract price forecasts that are determined using several methods, such as artificial neural networks (ANN) and support vector machines (SVM), among others (a generic version of this selection loop is sketched after this record). The learning process assesses the probability of success of each forecasting method by comparing the expected negotiation price with the historic contract data of competitor players. The negotiation scenario identified as the most probable scenario that the player will face during the negotiation process is the one that presents the highest expected utility value. This approach allows the supported player to be prepared for the negotiation scenario that is most likely to represent a reliable approximation of the actual negotiation environment.
- Full Text
- View/download PDF
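The decision-support approach in record 831 treats each candidate forecasting method (ANN, SVM, and so on) as an action and uses a Q-learning style update to weight the methods according to how well their price forecasts matched contracts actually observed. The sketch below shows that selection loop in a generic, stateless form; the reward definition and the epsilon-greedy choice are illustrative assumptions rather than the paper's exact adaptation.

```python
import random

def select_forecaster(Q, epsilon=0.1):
    """Epsilon-greedy choice among forecasting methods based on Q-values."""
    if random.random() < epsilon:
        return random.choice(list(Q))
    return max(Q, key=Q.get)

def update_q(Q, method, forecast_price, observed_price, alpha=0.1):
    """Reward a method when its forecast is close to the contract price
    actually observed in the historic data (negative absolute error)."""
    reward = -abs(forecast_price - observed_price)
    Q[method] += alpha * (reward - Q[method])
    return Q

# Toy run over hypothetical forecasts for a sequence of historic contracts.
Q = {"ANN": 0.0, "SVM": 0.0, "ARIMA": 0.0}
history = [(52.0, {"ANN": 50.5, "SVM": 54.0, "ARIMA": 49.0})]
for observed, forecasts in history:
    m = select_forecaster(Q)
    Q = update_q(Q, m, forecasts[m], observed)
print(Q)
```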
832. The Forgetting Curve and Learning Algorithms
- Subjects
learning algorithm ,spacing effect ,experiment ,SuperMemo ,forgetting curve ,spaced repetition - Abstract
With globalization and the impact of the Internet, a new wave of English as a common corporate language is occurring in Japanese corporations, yet learning English efficiently is still a challenge. In this manuscript, we introduce the forgetting curve and the spacing effect, the algorithms behind the software SuperMemo. We also propose an experimental project on how to test the learning of English words based on the spacing effect (a simplified interval scheduler is sketched below). A pre-experiment result is also given.
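The forgetting curve underlying spaced-repetition software is usually modelled as an exponential decay of retrievability with time, and SuperMemo-style schedulers grow the review interval after each successful recall. The sketch below pairs a simple exponential forgetting curve with an SM-2-like interval update; it is a textbook illustration of the spacing effect, not the exact algorithm evaluated in this manuscript.

```python
import math

def retention(t_days, stability):
    """Exponential forgetting curve: probability of recall after t_days
    for an item with the given memory stability (in days)."""
    return math.exp(-t_days / stability)

def next_interval(prev_interval, easiness=2.5, first=1, second=6):
    """SM-2-style spacing: 1 day, then 6 days, then multiply the previous
    interval by the item's easiness factor after each successful review."""
    if prev_interval == 0:
        return first
    if prev_interval == first:
        return second
    return round(prev_interval * easiness)

interval = 0
for review in range(5):
    interval = next_interval(interval)
    # Stability is assumed to grow with each review, so recall stays high.
    print(f"review {review + 1}: wait {interval} days, "
          f"predicted recall {retention(interval, stability=10 * (review + 1)):.2f}")
```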
833. The random neural network model on the color pattern recognition problem
- Author
-
Jose Aguilar
- Subjects
learning algorithm ,color pattern recognition ,lcsh:TA1-2040 ,lcsh:Technology (General) ,lcsh:T1-995 ,lcsh:Engineering (General). Civil engineering (General) ,retrieval process ,Multiple classes random neural network - Abstract
The purpose of this paper is to describe the use of the multiple classes random neural network model to recognize patterns having different colors. We propose a learning algorithm for the recognition of color patterns based upon the non-linear equations of the multiple classes random neural network model, using gradient descent of a quadratic error function. In addition, we propose a progressive retrieval process with an adaptive threshold value. The experimental evaluation shows that our approach provides good results.
834. Domain-specific relation extraction: Using distant supervision machine learning
- Author
-
Giovanni Acampora, Taha Osman, Abduladem Aljamel, Fred A., Filipe J., Liu K., Aveiro D., and Dietz J.
- Subjects
Artificial intelligence ,Information extraction ,Computer science ,Process (engineering) ,Knowledge management ,Machine learning ,computer.software_genre ,Domain (software engineering) ,Extracting information ,Knowledge base ,Knowledge extraction ,Supervised learning, Decision making process ,Knowledge based system ,Information retrieval ,Relation extraction ,Semantic Web technology ,Supervised machine learning ,Learning algorithm ,Semantic Web ,Knowledge engineering ,Supervised machine learning, Data mining ,business.industry ,Natural language processing ,Knowledge-base ,Machine learning technique ,Relationship extraction ,Information analysis ,Learning system ,Domain knowledge ,business ,Decision making ,computer ,Natural language processing system ,Semantic web
The increasing accessibility and availability of online data provides a valuable knowledge source for information analysis and decision-making processes. In this paper we argue that extracting information from this data is better guided by domain knowledge of the targeted use-case, and we investigate the integration of a knowledge-driven approach with Machine Learning techniques in order to improve the quality of the Relation Extraction process. Targeting the financial domain, we use Semantic Web Technologies to build the domain Knowledgebase, which is in turn exploited to collect distant supervision training data from semantic linked datasets such as DBpedia and Freebase (the labelling heuristic is sketched below). We conducted a series of experiments that utilise a number of Machine Learning algorithms to report on the favourable implementations/configurations for successful Information Extraction in our targeted domain. © 2015 by SCITEPRESS - Science and Technology Publications, Lda.
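The core of distant supervision, as used in record 834, is to project relation instances from a knowledge base onto raw text: any sentence mentioning both entities of a known fact is (noisily) labelled with that fact's relation and used to train the extractor. The sketch below shows that labelling heuristic with a hypothetical financial-domain fact; it is the generic technique, not the authors' pipeline or their DBpedia/Freebase queries.

```python
# Knowledge base facts: (subject, object) -> relation. Hypothetical example.
KB = {("Acme Corp", "London"): "headquarteredIn"}

def distant_label(sentences):
    """Label every sentence that mentions both entities of a KB fact with
    that fact's relation; the result is noisy training data for a
    supervised relation-extraction classifier."""
    training = []
    for sent in sentences:
        for (subj, obj), rel in KB.items():
            if subj in sent and obj in sent:
                training.append((sent, subj, obj, rel))
    return training

sentences = ["Acme Corp opened its new London headquarters in 2014.",
             "Acme Corp reported record profits."]
print(distant_label(sentences))
```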
835. Radyoterapi uygulamaları için otomatik iris lokalizasyonu (Automatic iris localization for radiotherapy applications)
- Subjects
Learning algorithm ,Image processing ,Classification ,Iris tracking/localization ,Iris detection/localization
Uveal melanoma, a common intraocular tumour in adults, can cause visual impairment, loss of the eye and, given its high risk of metastasis, even loss of life. Many treatment methods have been tried over the years; depending on the state of the disease, treatment may end in complete removal of the eye or in cure with radiotherapy. Today enucleation (removal of the organ) is regarded as a last resort, so radiotherapy is of great importance. Radiotherapy requires high-technology devices, and irradiation of the eye must be carried out with great precision. The anaesthesia method and some equipment used during the operation, such as the blepharostat and the vacuum ring, put eye health at risk and can make the treatment uncomfortable for the patient. In this thesis, in order to avoid these risks and possible complications, a method is proposed for use in eye operations that would allow radiotherapy to be delivered without anaesthetising the eye, based on the instantaneous state of the iris and eyelids. The algorithm is a real-time system based on classifying acquired image frames as eye-open or eye-closed according to a learning algorithm; when the eye is closed the radiotherapy is interrupted, so that healthy tissue is not exposed. The aim is for the algorithm to respond in near real time to any change in the current state of the eye. The motivation of the study builds on iris tracking, iris localization and blink detection, which have recently been used in human-computer interfaces, fatigue detectors and expression analysis. The principle of the study is to detect whether the eye is open or closed under varying ambient lighting conditions by applying a linear classifier to real-time video frames. The acquired frames are first preprocessed and the iris is detected. Features with low computational complexity and high discriminative performance are then determined, and feature vectors are formed for open and closed eyes under both bright and dark conditions. These feature vectors are classified with linear discriminant analysis and the classification performance of the system is evaluated; a feature space formed from the three best-performing features yields successful classification results. The system was tested with cross-validation and was found to operate quickly and without error, independently of the lighting conditions.
836. LEARNING BY FUZZIFIED NEURAL NETWORKS
- Author
-
Hisao Ishibuchi, I.B. Turksen, and Kouichi Morioka
- Subjects
Adaptive neuro fuzzy inference system ,Fuzzy classification ,Neuro-fuzzy ,business.industry ,Mathematics::General Mathematics ,Applied Mathematics ,fuzzy inputs ,fuzzy connection weights ,Fuzzy logic ,Defuzzification ,fuzzy targets ,feedforward neural networks ,Theoretical Computer Science ,learning algorithm ,ComputingMethodologies_PATTERNRECOGNITION ,Artificial Intelligence ,Fuzzy number ,Fuzzy set operations ,Fuzzy associative matrix ,Artificial intelligence ,ComputingMethodologies_GENERAL ,fuzzification of neural networks ,business ,Software ,Mathematics - Abstract
We derive a general learning algorithm for training a fuzzified feedforward neural network that has fuzzy inputs, fuzzy targets, and fuzzy connection weights. The derived algorithm is applicable to the learning of fuzzy connection weights with various shapes, such as triangular and trapezoidal. First we briefly describe how a feedforward neural network can be fuzzified: inputs, targets, and connection weights in the fuzzified neural network can be fuzzy numbers (the fuzzy forward pass is sketched below). Next we define a cost function that measures the differences between a fuzzy target vector and an actual fuzzy output vector. Then we derive a learning algorithm from the cost function for adjusting the fuzzy connection weights. Finally we show some results of computer simulations.
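In fuzzified feedforward networks of this kind, fuzzy inputs and fuzzy connection weights are typically propagated by evaluating the network on the interval endpoints of each alpha-cut (level set), which is exact for the monotone sigmoid. The sketch below shows a single fuzzy neuron evaluated at a few alpha levels with triangular fuzzy numbers; it illustrates the arithmetic only, with hypothetical inputs and weights, and is not the learning algorithm derived in the paper.

```python
import math

def alpha_cut(tri, alpha):
    """Alpha-cut interval of a triangular fuzzy number (left, peak, right)."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def imul(x, y):
    """Interval multiplication."""
    products = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(products), max(products))

def iadd(x, y):
    """Interval addition."""
    return (x[0] + y[0], x[1] + y[1])

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def fuzzy_neuron(inputs, weights, alpha):
    """Propagate triangular fuzzy inputs through triangular fuzzy weights at
    one alpha level; the sigmoid is monotone, so applying it to the interval
    endpoints gives the output interval at that level."""
    acc = (0.0, 0.0)
    for x, w in zip(inputs, weights):
        acc = iadd(acc, imul(alpha_cut(x, alpha), alpha_cut(w, alpha)))
    return (sigmoid(acc[0]), sigmoid(acc[1]))

inputs = [(0.8, 1.0, 1.2), (0.4, 0.5, 0.6)]     # hypothetical fuzzy inputs
weights = [(-0.5, 0.0, 0.5), (0.5, 1.0, 1.5)]   # hypothetical fuzzy weights
for alpha in (0.0, 0.5, 1.0):
    print(alpha, fuzzy_neuron(inputs, weights, alpha))
```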
837. On modeling channel selection in LTE-U as a repeated game
- Author
-
Hamed Ahmadi, Jordi Pérez-Romero, Irene Macaluso, Oriol Sallent, Universitat Politècnica de Catalunya. Departament de Teoria del Senyal i Comunicacions, and Universitat Politècnica de Catalunya. GRCM - Grup de Recerca en Comunicacions Mòbils
- Subjects
Mathematical optimization ,Artificial intelligence ,Learning (artificial intelligence) ,Computer science ,Iterative trial and error learning-best action ,050801 communication & media studies ,Throughput ,02 engineering and technology ,Nash equilibrium ,symbols.namesake ,0508 media and communications ,0202 electrical engineering, electronic engineering, information engineering ,LTE-U ,Jocs, Teoria de ,Learning algorithm ,Selection (genetic algorithm) ,Matemàtiques i estadística::Investigació operativa::Teoria de jocs [Àrees temàtiques de la UPC] ,Game theory ,Long term evolution ,business.industry ,Intel·ligència artificial ,05 social sciences ,020206 networking & telecommunications ,Long term evolution unlicensed ,Telecommunication channels ,Modeling channel selection ,Channel selection problem ,Repeated game ,symbols ,Informàtica::Intel·ligència artificial [Àrees temàtiques de la UPC] ,business ,Communication channel - Abstract
This paper addresses the channel selection problem for Long Term Evolution Unlicensed (LTE-U). Channel selection is a frequency-domain mechanism that facilitates the coexistence of multiple networks sharing the unlicensed band. In particular, the paper considers a fully distributed approach where each small cell autonomously selects the channel on which to set up an LTE-U carrier. The problem is modeled as a non-cooperative repeated game, and the Iterative Trial and Error Learning - Best Action (ITEL-BA) learning algorithm is used to drive convergence towards a Nash equilibrium. The proposed approach is evaluated by means of simulations in different situations, analyzing both the throughput performance and the convergence behavior.
838. Simple Decentralized Algorithm for Coordination Games
- Author
-
Mihail Emilov Mihaylov, Karl Tuyls, and Ann Nowe
- Subjects
learning algorithm ,pure coordination games ,decentralized coordination ,artificial intelligence - Abstract
Many biological or computer systems are comprised of intelligent, but highly constrained agents with common objectives that are beyond the capabilities of the individual. Often such multi-agent systems (MASs) are inherently decentralized and therefore agents need to coordinate their behavior in the absence of central control to achieve their design objectives. Wireless sensor networks (WSNs) are an example of a decentralized MAS where sensor nodes gather environmental data and collectively forward it towards the base station of the observer. The limited resources of such sensor nodes and the lack of global knowledge make the design of a WSN application challenging. The main question we are concerned with is the following: based only on local interactions and incomplete knowledge, how can the designer of a decentralized system make agents achieve good collective performance imposing minimal system requirements and overhead?
839. Content Search Through Comparisons
- Author
-
Laurent Massoulié, Stratis Ioannidis, and Amin Karbasi
- Subjects
Mathematical optimization ,Small-world network ,Computer science ,comparisons ,Upper and lower bounds ,small-world network ,Target distribution ,Network planning and design ,learning algorithm ,comparison ,doubling measure ,Search cost ,Entropy (information theory) ,heterogeneous demand ,content-search ,entropy ,navigation - Abstract
We study the problem of navigating through a database of similar objects using comparisons under heterogeneous demand, a problem strongly related to small-world network design. We show that, under heterogeneous demand, the small-world network design problem is NP-hard. Given the above negative result, we propose a novel mechanism for small-world network design and provide an upper bound on its performance under heterogeneous demand. The above mechanism has a natural equivalent in the context of content search through comparisons, again under heterogeneous demand; we use this to establish both upper and lower bounds on content search through comparisons. These bounds are intuitively appealing, as they depend on the entropy of the demand as well as its doubling constant, a quantity capturing the topology of the set of target objects. Finally, we propose an adaptive learning algorithm for content search that meets the performance guarantees achieved by the above mechanisms.
840. Learning Algorithms for Two-Person Zero-Sum Stochastic Games with Incomplete Information
- Author
-
Lakshmivarahan, S. and Narendra, Kumpati S.
- Published
- 1981