557 results
Search Results
202. Cooperative Indoor Radio Environment Mapping in Ad-hoc Wireless Cognitive Networks.
- Author
-
Zrno, Damir, Šimunić, Dina, and Prasad, Ramjee
- Subjects
ENVIRONMENTAL mapping, AD hoc computer networks, ALGORITHMS, STANDARD deviations, SIMULATION methods & models
- Abstract
This paper presents a radio environment mapping method developed for use with ad-hoc wireless networks. Variations of the mapping algorithm were implemented in an office-environment visualization and simulation platform developed in Matlab as a result of cooperation between the Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia, and the Center for TeleInfrastruktur, University of Aalborg, Denmark. The mapping algorithm, its theoretical background and variations, and the cooperative mapping scheme are described. Simulation results for environment mapping are given for several algorithm variations on a sample scenario, with and without cooperative mapping protocols, in order to optimize the algorithm. The mapping algorithm shows accurate results, with a 0.15 dB mean error and a standard deviation as low as 2.3 dB. Cooperative mapping is shown to provide a clear advantage in both mapping speed and accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
203. Hyperspectral Imagery Denoising Using a Spatial-Spectral Domain Mixing Prior.
- Author
-
Chen, Shao-Lin, Hu, Xi-Yuan, and Peng, Si-Long
- Subjects
HYPERSPECTRAL imaging systems, SPECTRUM analysis, MATHEMATICAL optimization, SIMULATION methods & models, ALGORITHMS, COMPUTER programming
- Abstract
By introducing a novel spatial-spectral domain mixing prior, this paper establishes a maximum a posteriori (MAP) framework for hyperspectral image (HSI) denoising. The proposed mixing prior takes advantage of the different properties of HSIs in the spatial and spectral domains. Furthermore, we propose a spatially adaptive weighted prior combining a smoothing prior and a discontinuity-preserving prior in the spectral domain, with weights defined as a function of the spectral discontinuity measure (DM). A half-quadratic optimization algorithm is used to minimize the objective function. The experimental results illustrate that the proposed model achieves a higher signal-to-noise ratio (SNR) than using only the smoothing prior or the discontinuity-preserving prior. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
204. Extracting the Virtual Reflected Wavelet from TEM Data Based on Regularizing Method.
- Author
-
Xue, Guo-qiang, Bai, Chao-ying, and Li, Xiu
- Subjects
HEAT equation, WAVELETS (Mathematics), WAVE equation, ALGORITHMS, SIMULATION methods & models, GEOPHYSICS
- Abstract
A pseudo-seismic interpretation method is an alternative way to process and interpret transient electromagnetic (TEM) data, and has become a popular research field in recent years. TEM signals, which satisfy the diffusion equation, can be converted by a mathematical transformation into signals that obey the wave equation. Because this transformation is an ill-posed problem, a sub-regularization algorithm is developed in this paper to extract a virtual wavelet of the TEM field. Following the conventional designation of TEM recordings, the entire integration period is divided into seven time intervals. To avoid low accuracy in the calculations, high-density wavefield data are computed based on this sub-division. The virtual wavelet can then be extracted by using an optimized algorithm to obtain high-density integral coefficients for all time windows and a satisfactory condition number of the coefficient matrix, while taking a different channel number in each time period. A Tikhonov regularization inversion scheme determines the optimal parameters by minimizing a least-squares misfit, and a Newton iterative formula is used to obtain the optimal regularization parameters. Both synthetic model simulations and a real-data interpretation example indicate that the proposed pseudo-seismic wavefield method is a suitable alternative way to interpret TEM data. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
205. No-wait flowshop scheduling problem to minimize the number of tardy jobs.
- Author
-
Aldowaisan, Tariq and Allahverdi, Ali
- Subjects
SIMULATED annealing, ALGORITHMS, HEURISTIC, ERROR analysis in mathematics, MANUFACTURING industries, SCHEDULING, SIMULATION methods & models
- Abstract
This research addresses the m-machine no-wait flowshop scheduling problem with the objective of minimizing the number of tardy jobs. The problem is known to be NP-hard even for two machines, and the literature reveals that no heuristics have been developed for it. We first adapt the single-machine optimal algorithm to our problem to develop two new heuristics, NOTA and NOTM. An improved simulated annealing heuristic, called SA-GI, is then developed by feeding the best-performing heuristic among NOTA, NOTM, and EDD into the simulated annealing algorithm. A second proposed heuristic, called SA-IP, further improves the SA-GI solution by using insertion and pair-wise exchange techniques. In the computational experiments, the overall relative percentage errors of the heuristics SA, SA-GI, and SA-IP are 8.848, 8.428, and 0.207, respectively. The computational times of the three heuristics are close to each other, and the largest average time is less than one second, so computational time is not an issue. Therefore, the heuristic SA-IP is the best. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
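The core of entry 205 (simulated annealing over job sequences, seeded with a simple rule, minimizing the number of tardy jobs in a no-wait flowshop) can be sketched minimally. The EDD seed and pairwise swaps below mirror the abstract's ingredients, but the cooling schedule, instance data, and parameter values are illustrative assumptions, not the paper's SA-GI/SA-IP:

```python
import math
import random

def completion_times(seq, p):
    """No-wait flowshop: each job's operations run back-to-back, so only
    the start offset between consecutive jobs matters."""
    m = len(p[seq[0]])
    pref = {j: [sum(p[j][:k]) for k in range(m + 1)] for j in seq}
    start, comp, prev = 0, {}, None
    for j in seq:
        if prev is not None:
            # Minimal start gap so job j never overtakes job prev on any machine.
            start += max(pref[prev][k + 1] - pref[j][k] for k in range(m))
        comp[j] = start + pref[j][m]
        prev = j
    return comp

def num_tardy(seq, p, due):
    comp = completion_times(seq, p)
    return sum(1 for j in seq if comp[j] > due[j])

def sa_tardy(p, due, iters=2000, t0=5.0, seed=1):
    """Simulated annealing over pairwise swaps, seeded with an EDD sequence
    (a stand-in for the paper's NOTA/NOTM seeds)."""
    rng = random.Random(seed)
    seq = sorted(p, key=lambda j: due[j])            # EDD start
    best, best_cost = seq[:], num_tardy(seq, p, due)
    cost, t = best_cost, t0
    for _ in range(iters):
        i, k = rng.sample(range(len(seq)), 2)
        seq[i], seq[k] = seq[k], seq[i]
        c = num_tardy(seq, p, due)
        if c <= cost or rng.random() < math.exp(-(c - cost) / t):
            cost = c
            if c < best_cost:
                best, best_cost = seq[:], c
        else:
            seq[i], seq[k] = seq[k], seq[i]          # undo the swap
        t *= 0.999
    return best, best_cost

p = {0: [3, 2], 1: [1, 4], 2: [2, 2], 3: [4, 1]}    # job -> machine times
due = {0: 6, 1: 7, 2: 9, 3: 20}
order, tardy = sa_tardy(p, due)
```

Because the best sequence found is tracked separately from the annealing walk, the result is never worse than the EDD seed.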
206. Data Analysis and Double Pulse Identification for the MARE Experiment.
- Author
-
Armstrong, J., Galeazzi, M., Prasai, K., Uprety, Y., Saab, T., Zych, K., Agnese, R., and Gatti, F.
- Subjects
NEUTRINO mass, DATA analysis, ALGORITHMS, COMPARATIVE studies, MONTE Carlo method, SIMULATION methods & models, CALORIMETERS
- Abstract
One of the critical limits on the sensitivity of direct neutrino mass experiments, such as MARE, is the background generated by unidentified double pulses. In our work developing an analysis system for MARE, we have focused on comparing algorithms, and on developing and testing an optimal algorithm for the identification (and removal) of double pulses. To test the algorithms we developed a Monte Carlo simulation that, based on the detectors' physical parameters, generates realistic single and double events with known characteristics. In this paper we present our preliminary results on two algorithms used for double pulse identification and their efficiency as a function of pulse height and separation. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
207. Mathematical Modeling and Numerical Algorithms for Simulation of Oil Pollution.
- Author
-
Dang, Quang, Ehrhardt, Matthias, Tran, Gia, and Le, Duc
- Subjects
MATHEMATICAL models, OIL spills, ALGORITHMS, SIMULATION methods & models, HEAT equation, EMISSIONS (Air pollution)
- Abstract
This paper deals with mathematical modeling and algorithms for the problem of oil pollution. To solve this task, we derive the adjoint problem for the advection-diffusion equation describing the propagation of an oil slick after an accident, which we call the main problem. We prove a fundamental equality between the solutions of the main and adjoint problems. Based on this equality, we propose a novel method for identifying the pollution source location and the accident time of the oil emission. The approach is illustrated on an example of an accident offshore of the central part of the Vietnamese coast. Numerical simulations demonstrate the effectiveness of the proposed method. In addition, the method is verified for a 1D model of substance propagation. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
208. From Markovian to pairwise epidemic models and the performance of moment closure approximations.
- Author
-
Taylor, Michael, Simon, Péter, Green, Darren, House, Thomas, and Kiss, Istvan
- Subjects
MARKOV processes, MARKOV spectrum, INFECTIOUS disease transmission, STOCHASTIC analysis, ALGORITHMS, SIMULATION methods & models
- Abstract
Many, if not all, models of disease transmission on networks can be linked to the exact state-based Markovian formulation. However, the large number of equations for any system of realistic size limits their applicability to small populations. As a result, most modelling work relies on simulation and pairwise models. In this paper, for a simple SIS dynamics on an arbitrary network, we formalise the link between a well-known pairwise model and the exact Markovian formulation. This involves the rigorous derivation of the exact ODE model at the level of pairs in terms of the expected number of pairs and triples. The exact system is then closed using two different closures: one well established, and one that has been recently proposed. A new interpretation of both closures is presented, which explains several of their previously observed properties. The closed dynamical systems are solved numerically and the results are compared to output from individual-based stochastic simulations. This is done for a range of networks with the same average degree and clustering coefficient but generated using different algorithms. It is shown that the ability of the pairwise system to accurately model an epidemic depends fundamentally on the underlying large-scale network structure. We show that the existing pairwise models are a good fit for certain types of network but have to be used with caution, as higher-order network structures may compromise their effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
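The pairwise SIS construction in entry 208 (pair-level ODEs closed by approximating triples in terms of pairs) can be sketched for a homogeneous network of degree n, using the well-established closure [ASB] ≈ ((n-1)/n)[AS][SB]/[S]. The parameter values and the crude Euler integrator are illustrative, not the paper's networks or closures:

```python
def pairwise_sis(N=1000, n=6, tau=0.3, gamma=1.0, I0=10, dt=0.001, T=20.0):
    """Closed pairwise SIS model on a degree-n homogeneous network,
    closing triples with [ASB] ~ ((n-1)/n) [AS][SB]/[S]."""
    S, I = float(N - I0), float(I0)
    # Pair counts assuming the initial infecteds are placed at random.
    SI = n * S * I / N
    SS = n * S * S / N
    II = n * I * I / N
    k = (n - 1) / n
    t = 0.0
    while t < T:
        SSI = k * SS * SI / S                 # closure for [SSI]
        ISI = k * SI * SI / S                 # closure for [ISI]
        dS = gamma * I - tau * SI
        dSI = gamma * (II - SI) + tau * (SSI - ISI - SI)
        dSS = 2 * gamma * SI - 2 * tau * SSI
        dII = -2 * gamma * II + 2 * tau * (ISI + SI)
        S += dt * dS
        I -= dt * dS                          # S + I = N is conserved
        SI += dt * dSI
        SS += dt * dSS
        II += dt * dII
        t += dt
    return I                                  # prevalence near equilibrium

prev = pairwise_sis()
```

With tau(n-1)/gamma = 1.5 above threshold, the system settles toward a positive endemic prevalence, which is the regime where the choice of closure visibly matters.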
209. Mathematical modeling of distance constraints on two-dimensional φ-objects.
- Author
-
Stoyan, Yu., Pankratov, A., and Romanova, T.
- Subjects
DISTANCES, MATHEMATICAL models, MATHEMATICS, SIMULATION methods & models, ALGORITHMS
- Abstract
This paper introduces the concept of radical-free pseudonormalized Φ-functions, which allows one to describe constraints on the minimum and maximum allowable distances between two-dimensional φ-objects. Translations and rotations of φ-objects in two-dimensional Euclidean space are allowed. A theorem on the existence of a radical-free pseudonormalized Φ-function for a pair of arbitrarily shaped φ-objects whose frontiers are formed by the union of line segments and circular arcs is formulated. An efficient algorithm is proposed for deriving pseudonormalized Φ-functions. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
210. Sensor Management Under Tracking Accuracy and Energy Constraints in Wireless Sensor Networks.
- Author
-
Zoghi, M. and Kahaei, M.
- Subjects
WIRELESS sensor networks, ENERGY consumption, DETECTORS, SIMULATION methods & models, ALGORITHMS
- Abstract
- Published
- 2012
- Full Text
- View/download PDF
211. A real time traffic simulator utilizing an adaptive fuzzy inference mechanism by tuning fuzzy parameters.
- Author
-
Aksaç, Alper, Uzun, Erkam, and Özyer, Tansel
- Subjects
TRAFFIC engineering, TRAFFIC signs & signals, ALGORITHMS, SIMULATION methods & models, FUZZY sets
- Abstract
Traffic lights are installed at intersections mostly for traffic management. Traffic signals stay on for a predetermined amount of time, and intelligent traffic management systems have emerged from the need to handle the dynamics of traffic. These systems are first implemented on simulators in order to mimic real-life situations before deployment. We have implemented a real-time traffic simulator with an adaptive fuzzy inference algorithm that sets the foreseen light-signal duration, changing the duration of the lights depending on the vehicles waiting behind green and red lights at the crossroad. The simulation is supported with real-time graphical visualization. Given a scenario, it creates random traffic flows according to specified parameters, and the obtained results are interpreted in the simulation environment. Based on inferences from the adaptive environment, TSK (Takagi-Sugeno-Kang) and Mamdani models have also been implemented to provide baselines for verification. Several experiments have been conducted and compared statistically against classical techniques such as Webster (Road Research Technical Paper No. 39) and HCM (TRB Special Report 209) to demonstrate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
212. Distributed reactive collision avoidance.
- Author
-
Lalish, Emmett and Morgansen, Kristi
- Subjects
COLLISIONS (Physics), ALGORITHMS, SAFETY, SIMULATION methods & models, ROBUST control
- Abstract
The work contained in this paper concerns a novel approach to the n-vehicle collision avoidance problem. The vehicle model used here allows for three-dimensional movement and represents a wide range of vehicles. The algorithm works in conjunction with any desired controller to guarantee all vehicles remain free of collisions while attempting to follow their desired control. This algorithm is reactive and distributed, making it well suited for real time applications, and explicitly accounts for actuation limits. A robustness analysis is presented which provides a means to account for delays and unmodeled dynamics. Robustness to an adversarial vehicle is also presented. Results are demonstrated in simulation. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
213. Training recurrent neural networks using a hybrid algorithm.
- Author
-
Nasr, Mounir and Chtourou, Mohamed
- Subjects
RECURSIVE sequences (Mathematics), ARTIFICIAL neural networks, ALGORITHMS, SUPERVISED learning, CONJUGATE gradient methods, PERFORMANCE evaluation, SIMULATION methods & models, BACK propagation
- Abstract
This paper proposes a new hybrid approach for recurrent neural networks (RNNs). The basic idea is to train the input layer by unsupervised learning and the output layer by supervised learning. In this method, the Kohonen algorithm is used for unsupervised learning, and a dynamic gradient descent method is used for supervised learning. The performance of the proposed algorithm is compared with backpropagation through time (BPTT) on three benchmark problems. Simulation results show that the proposed algorithm outperforms standard backpropagation through time, reducing both the total number of iterations and the learning time required in the training process. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
214. Pseudorandom Generators, Typically-Correct Derandomization, and Circuit Lower Bounds.
- Author
-
Kinne, Jeff, Melkebeek, Dieter, and Shaltiel, Ronen
- Subjects
SIMULATION methods & models, ALGORITHMS, COMPUTATIONAL complexity, ERROR rates
- Abstract
The area of derandomization attempts to provide efficient deterministic simulations of randomized algorithms in various algorithmic settings. Goldreich and Wigderson introduced a notion of 'typically-correct' deterministic simulations, which are allowed to err on few inputs. In this paper, we further the study of typically-correct derandomization in two ways. First, we develop a generic approach for constructing typically-correct derandomizations based on seed-extending pseudorandom generators, which are pseudorandom generators that reveal their seed. We use our approach to obtain both conditional and unconditional typically-correct derandomization results in various algorithmic settings. We show that our technique strictly generalizes an earlier approach by Shaltiel based on randomness extractors and simplifies the proofs of some known results. We also demonstrate that our approach is applicable in algorithmic settings where earlier work did not apply. For example, we present a typically-correct polynomial-time simulation for every language in BPP based on a hardness assumption that is (seemingly) weaker than the ones used in earlier work. Second, we investigate whether typically-correct derandomization of BPP implies circuit lower bounds. Extending the work of Kabanets and Impagliazzo for the zero-error case, we establish a positive answer for error rates in the range considered by Goldreich and Wigderson. In doing so, we provide a simpler proof of the zero-error result. Our proof scales better than the original one and does not rely on the result by Impagliazzo, Kabanets, and Wigderson that NEXP having polynomial-size circuits implies that NEXP coincides with EXP. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
215. Shortcoming, problems and analytical comparison for flooding-based search techniques in unstructured P2P networks.
- Author
-
Barjini, Hassan, Othman, Mohamed, Ibrahim, Hamidah, and Udzir, Nur
- Subjects
PEER-to-peer architecture (Computer networks), DATABASE searching, SIMULATION methods & models, ALGORITHMS, TOPOLOGY
- Abstract
Peer-to-peer networks have attracted a significant amount of interest because of their capacity for resource sharing and content distribution. Content distribution applications allow personal computers to function in a coordinated manner as a distributed storage medium by contributing, searching, and obtaining digital content. Searching in unstructured P2P networks is an important problem, which has received considerable research attention. Acceptable searching techniques must provide a large coverage rate, low traffic load, and optimal latency. This paper reviews flooding-based search techniques in unstructured P2P networks, then analytically compares their coverage rates and traffic overloads. Our simulation experiments have validated the analytical results. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
216. Introducing fractal dimension algorithms to calculate the Hurst exponent of financial time series.
- Author
-
Sánchez-Granero, M., Fernández-Martínez, M., and Trinidad-Segovia, J.
- Subjects
TIME series analysis, MONTE Carlo method, ALGORITHMS, SIMULATION methods & models, STATISTICAL physics, STOCK exchanges, NONLINEAR statistical models
- Abstract
In this paper, three new algorithms are introduced in order to explore long memory in financial time series. They are based on a new concept of the fractal dimension of a curve. Mathematical support is provided for each algorithm, and its accuracy is tested by Monte Carlo simulations for time series of different lengths. In particular, for short series the introduced algorithms perform much better than the classical methods. Finally, an empirical application to some stock market indexes as well as some individual stocks is presented. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
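The family of estimators in entry 216 rests on the link between a series' fractal dimension D and its Hurst exponent H (D = 2 - H for a self-affine record). A generic member of that family, not the paper's specific algorithms, estimates H from the scaling of the mean range over windows of size s, since range ~ s^H:

```python
import math
import random

def hurst_variation(x, scales=(4, 8, 16, 32, 64)):
    """Estimate H as the log-log slope of mean (max - min) range versus
    window size.  One generic fractal-dimension-style estimator, used here
    as a hedged stand-in for the three algorithms of the paper."""
    logs, logr = [], []
    for s in scales:
        ranges = [max(x[i:i + s]) - min(x[i:i + s])
                  for i in range(0, len(x) - s, s)]
        logs.append(math.log(s))
        logr.append(math.log(sum(ranges) / len(ranges)))
    n = len(scales)
    mean_s, mean_r = sum(logs) / n, sum(logr) / n
    # Ordinary least-squares slope of log(range) against log(scale).
    return (sum((a - mean_s) * (b - mean_r) for a, b in zip(logs, logr))
            / sum((a - mean_s) ** 2 for a in logs))

# Sanity check on an ordinary Gaussian random walk, whose true H is 0.5.
rng = random.Random(42)
walk, v = [], 0.0
for _ in range(4096):
    v += rng.gauss(0, 1)
    walk.append(v)
H = hurst_variation(walk)
```

Monte Carlo calibration of this kind (known-H synthetic series, compare the estimate) is exactly how the abstract says the accuracy of the new algorithms was tested.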
217. Multicast Link Adaptation in Reliable Transmission Over Geostationary Satellite Networks.
- Author
-
Sali, A., Karim, H., Acar, G., Evans, B., and Giambene, G.
- Subjects
GEOSTATIONARY satellites, TRANSMITTERS (Communication), ALGORITHMS, MULTICASTING (Computer networks), SIMULATION methods & models
- Abstract
The exploitation of fluctuating channel conditions in link adaptation techniques for unicast transmission has been shown to provide large system capacity gains. However, the problem of choosing transmission rates for multicast transmission has not been thoroughly investigated. In this paper, we investigate multicast adaptive techniques for reliable data delivery in GEO satellite networks. An optimal multicast link adaptation is proposed with the aim of maximising terminal throughput whilst increasing resource utilization and fairness in the face of diverse channel conditions. Via simulation results and theoretical analysis, the proposed algorithm is shown to outperform alternative multicast link adaptation techniques, especially when the terminals are in harsh channel conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
218. BRA: An Algorithm for Simulating Bounded Rational Agents.
- Author
-
Schuster, Stephan
- Subjects
REINFORCEMENT learning, DECISION making, BOUNDED rationality, ALGORITHMS, SIMULATION methods & models, SOCIAL psychology, GAME theory
- Abstract
This paper describes a simulation approach for modelling decision-making processes under incomplete and imperfect information in Agent-based Computational Economics (ACE). The main idea is to represent decision-making in a model-free framework that can be applied to a larger set of simulation problems, not just the domain modelled. The method translates some basic sociopsychological concepts from the bounded rationality and learning literature into an executable algorithm. In a simple example, the algorithm is applied in the domain of behavioural game theory, illustrating how the algorithm can be used to reproduce observed patterns of human behaviour. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
219. Comparison and analysis of eight scheduling heuristics for the optimization of energy consumption and makespan in large-scale distributed systems.
- Author
-
Lindberg, Peder, Leingang, James, Lysaker, Daniel, Khan, Samee, and Li, Juan
- Subjects
HEURISTIC algorithms, PROGRAM transformation, ENERGY consumption, ALGORITHMS, SCHEDULING software, SIMULATION methods & models
- Abstract
In this paper, we study the problem of scheduling tasks on a distributed system with the aim of simultaneously minimizing energy consumption and makespan subject to deadline constraints and the tasks' memory requirements. A total of eight heuristics are introduced to solve the task scheduling problem: six greedy algorithms and two nature-inspired genetic algorithms. The heuristics are extensively simulated and compared using a simulation test-bed that covers a wide range of task heterogeneity and a variety of problem sizes. When evaluating the heuristics, we analyze the energy consumption, makespan, and execution time of each. The main benefit of this study is to allow readers to select an appropriate heuristic for a given scenario. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
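One greedy heuristic in the spirit of entry 219's six can be sketched as: walk tasks in earliest-deadline order and put each on the feasible machine that adds the least energy. The machine model (speed, power), task model (work, deadline), and all numbers are illustrative assumptions, not the paper's test-bed:

```python
def greedy_schedule(tasks, machines):
    """Greedy energy-aware list scheduling (hypothetical sketch).
    tasks: (name, work, deadline) tuples; machines: (speed, power) tuples.
    Returns (assignment, makespan, energy), or None if some task cannot
    meet its deadline on any machine."""
    finish = [0.0] * len(machines)       # current finish time per machine
    energy = 0.0
    assignment = {}
    for name, work, deadline in sorted(tasks, key=lambda t: t[2]):
        best = None
        for i, (speed, power) in enumerate(machines):
            t_run = work / speed
            if finish[i] + t_run > deadline:
                continue                 # this machine would miss the deadline
            e = power * t_run
            if best is None or e < best[0]:
                best = (e, i, t_run)
        if best is None:
            return None                  # infeasible under this heuristic
        e, i, t_run = best
        finish[i] += t_run
        energy += e
        assignment[name] = i
    return assignment, max(finish), energy

tasks = [("a", 4.0, 10.0), ("b", 2.0, 5.0), ("c", 6.0, 12.0)]
machines = [(1.0, 1.0), (2.0, 3.0)]      # slow-and-frugal vs fast-and-hungry
result = greedy_schedule(tasks, machines)
```

The tension the paper studies is visible even in this toy: the fast machine shortens the makespan but costs more energy per unit of work, so a deadline-feasible greedy choice by energy alone tends to lengthen the schedule.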
220. A resource selection scheme for QoS satisfaction and load balancing in ad hoc grid.
- Author
-
Li, Chunlin and Li, Layuan
- Subjects
QUALITY of service, CUSTOMER satisfaction, AD hoc computer networks, SIMULATION methods & models, PERFORMANCE evaluation, ALGORITHMS
- Abstract
Ad hoc grids are highly heterogeneous and dynamic; the availability of resources and tasks may change at any time. This paper proposes a utility-based resource selection scheme for QoS satisfaction and load balancing in ad hoc grid environments. The proposed scheme aims to maximize the QoS satisfaction of ad hoc grid users and to support load balancing of grid resources. For each candidate resource, the scheme obtains values from a utility function for QoS satisfaction and from a benefit maximization game for resource preference. The utility function computes the utility value based on how well the resource satisfies the QoS requirements of the grid user's request. The benefit maximization game computes the preference value from the resource's point of view; its main goal is to achieve load balancing and to decrease the number of resource selection failures. The utility value and the preference value of each candidate resource are combined to select the most suitable grid resource for the ad hoc grid user's request. A performance evaluation of the proposed algorithm is conducted in simulation. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
221. Increasing throughput in dense 802.11 networks by automatic rate adaptation improvement.
- Author
-
Cardoso, Kleber and Rezende, José
- Subjects
WIRELESS communications, IEEE 802.11 (Standard), PACKET radio transmission, ALGORITHMS, SCALABILITY, SIMULATION methods & models
- Abstract
Rate control algorithms for commercial 802.11 devices strongly rely on packet losses for their adaptation. As a result, they give poor performance in dense networks because they are not able to distinguish packet losses related to channel error from packet losses due to collision. In this paper, we evaluate automatic rate adaptation algorithms in IEEE 802.11 dense networks. A certain number of works in the literature address this problem, but they demand modifications of the IEEE standard, or depend on some special feature not available in off-the-shelf devices. In this context, we propose a new automatic rate control algorithm which is simple, easy to implement, standards-compliant, and well-suited for crowded 802.11 networks. Our approach consists of measuring the contention level, inferring the collision probability, and choosing transmission rates which maximize throughput. Results from simulation and real experiments show throughput improvement of up to 100% from our mechanism. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
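The key step in entry 221 is separating collision losses from channel-error losses before adapting the rate. Under an independence assumption between the two loss causes, that separation and the goodput-maximizing rate choice can be sketched as below; the rate set, loss numbers, and the multiplicative loss model are illustrative, not the paper's measurements:

```python
def channel_error_rate(p_loss, p_coll):
    """Split observed frame loss into collision and channel-error parts,
    assuming the two causes are independent:
    1 - p_loss = (1 - p_coll) * (1 - per)."""
    return max(0.0, 1.0 - (1.0 - p_loss) / (1.0 - p_coll))

def pick_rate(observed_loss, p_coll):
    """Choose the PHY rate maximizing expected goodput after the measured
    contention level has been converted into a collision probability."""
    best_rate, best_gp = None, -1.0
    for rate, p_loss in observed_loss.items():
        gp = rate * (1.0 - channel_error_rate(p_loss, p_coll))
        if gp > best_gp:
            best_rate, best_gp = rate, gp
    return best_rate

# Observed per-rate frame loss in a dense cell: a collision floor of 0.3
# common to every rate, plus channel error that grows with the rate.
observed_loss = {6: 0.30, 12: 0.31, 24: 0.37, 48: 0.72}
best = pick_rate(observed_loss, p_coll=0.3)
```

The point of the decomposition is that a plain loss-threshold controller would read the 30% collision floor as channel degradation and keep shifting down, which is exactly the dense-network failure mode the abstract describes.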
222. Design of scalable and efficient multi-radio wireless networks.
- Author
-
Benyamina, Djohara, Hafid, Abdelhakim, and Gendreau, Michel
- Subjects
WIRELESS communications, CONSTRAINT satisfaction, SCALABILITY, GATEWAYS (Computer networks), ALGORITHMS, TOPOLOGY, SIMULATION methods & models
- Abstract
A proper design of Wireless Mesh Networks (WMNs) is a fundamental task that should be addressed carefully to allow the deployment of scalable and efficient networks. Specifically, choosing strategic locations to optimally place gateways prior to network deployment can alleviate a number of performance/scalability related problems. In this paper, we first propose a novel clustering based gateway placement algorithm (CBGPA) to effectively select the locations of gateways. Existing solutions for optimal gateway placement using clustering approaches are tree-based and therefore are inherently less reliable since a tree topology uses a smaller number of links. Independently from the tree structure, CBGPA strategically places the gateways to serve as many routers as possible that are within a bounded number of hops. Next, we devise a new multi-objective optimization approach that models WMN topologies from scratch. The three objectives of deployment cost, network throughput and average congestion of gateways are simultaneously optimized using a nature inspired meta-heuristic algorithm coupled with CBGPA. This provides the network operator with a set of bounded-delay trade-off solutions. Comparative simulation studies with different key parameter settings are conducted to show the effectiveness of CBGPA and to evaluate the performance of the proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
223. A novel hybrid method based on teaching–learning algorithm and leader progression model for evaluating the lightning performance of launch sites and experimental tests.
- Author
-
Yahyaabadi, M., Aslani, F., and Vahidi, B.
- Subjects
LIGHTNING, PROCESS optimization, SURFACE structure, SURFACE area, SIMULATION methods & models, ALGORITHMS, PLANT propagation
- Abstract
In this article, a numerical optimization method is used to study lightning effects on satellite launch sites. In the conventional direct method, the leader progression model, together with the charge simulation method, is employed for all lightning leader tip positions and all possible lightning current values on the equipment. This time-consuming calculation set is reduced in the developed method by using an optimization algorithm. For this purpose, the specifications of a real air termination system are used for simulation, and a laboratory-scale system is used both for simulation and for practical tests. The teaching-learning-based optimization (TLBO) algorithm is employed as an intelligent method to find the number of lightning strikes to the equipment, to detect critical areas on the structure surface, and to reduce the execution time. In addition, experimental tests conducted in a high-voltage laboratory on the small-scale system investigate the conditions of leader inception and propagation. Good agreement between the simulation results of TLBO and the conventional direct method supports applying this method to investigate the performance of other air termination systems. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
224. Conditioning Facies Simulations with Connectivity Data.
- Author
-
Renard, Philippe, Straubhaar, Julien, Caers, Jef, and Mariethoz, Grégoire
- Subjects
GEOLOGICAL statistics, RESERVOIRS, SIMULATION methods & models, HYDROGEOLOGY, ALGORITHMS
- Abstract
When characterizing and simulating underground reservoirs for flow simulations, one of the key characteristics that must be reproduced accurately is connectivity. More precisely, field observations frequently allow the identification of specific points in space that are connected. For example, in hydrogeology, tracer tests are frequently conducted that show which springs are connected to which sink-hole. Similarly, well tests often provide connectivity information in a petroleum reservoir. To account for this type of information, we propose a new algorithm to condition stochastic simulations of lithofacies to connectivity information. The algorithm is based on the multiple-point philosophy but does not necessarily imply the use of multiple-point simulation. The challenge lies in generating realizations, for example of a binary medium, such that the connectivity information is honored as well as any prior structural information (e.g. as modeled through a training image). The algorithm consists of using a training image to build a set of replicates of connected paths that are consistent with the prior model. This is done by scanning the training image to find point locations that satisfy the constraints. Any path (a string of connected cells) between these points is therefore consistent with the prior model. For each simulation, one path from this set is sampled to generate hard conditioning data prior to running the simulation algorithm. The paper presents the algorithm in detail along with examples of two-dimensional and three-dimensional applications with multiple-point simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
225. Data array multiprocessing by difference slices.
- Author
-
Martyniuk, T. and Khomyuk, V.
- Subjects
ALGORITHMS, PARALLEL algorithms, MULTIPROCESSORS, SIMULATION methods & models, MATHEMATICAL analysis
- Abstract
The paper analyzes the features of presenting parallel algorithms for the multiprocessing of vector data arrays by difference slices, based on a modified Glushkov system of algorithmic algebras (SAA). [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
226. Interference-Adaptive Fast Dynamic Channel Allocation in TD-SCDMA System.
- Author
-
Min, Tae, Kim, Byeong, Kim, Joung, and Kang, Chung
- Subjects
RADIO resource management ,CODE division multiple access ,ELECTRIC interference ,SIMULATION methods & models ,ALGORITHMS ,PERSONAL communication service systems - Abstract
This paper proposes a new priority metric for fast dynamic channel allocation (DCA) in the TD-SCDMA system, which reallocates radio resource units (RUs) to bearer services in a cell. It allows for developing a new interference-adaptive fast DCA algorithm, which is more flexible under a non-uniform user distribution. It considers the relative transmission opportunities with respect to the residual capacity and co-channel interference levels for all users, which vary continually in a real communication environment. The proposed fast DCA algorithm aims to fully utilize the physical resource available in the time-division duplexing (TDD)-based CDMA system subject to the various types of inter-cell interference, as opposed to most existing algorithms, in which traffic load and quality of service cannot be jointly balanced among the multiple radio resource units in a flexible manner. The simulation results show that the proposed algorithm improves the outage performance while reducing the average system interference, achieving full utilization of the physical resource, i.e., 48 RUs in TD-SCDMA, over a wide range of acceptable outage performance. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
227. P2 q hierarchical decomposition algorithm for quantile optimization: application to irrigation strategies design.
- Author
-
Crespo, O., Bergez, J., and Garcia, F.
- Subjects
DECISION theory ,DECOMPOSITION method ,INDUSTRIAL efficiency ,STANDARD deviations ,SIMULATION methods & models ,ALGORITHMS - Abstract
Decision theory under uncertainty usually considers criteria such as the expected, minimum, or maximum value. In economics, the quantile criterion is commonly used and provides significant advantages. This paper focuses on quantile optimization in decision making for designing irrigation strategies. We developed P2 q, a hierarchical decomposition algorithm belonging to the family of branching methods. It consists of repeatedly creating, evaluating, and selecting smaller promising regions. In contrast to common approaches, the main criterion of interest is the α-quantile, where α reflects the decision maker's risk acceptance. Results of an eight-parameter optimization problem are presented. Quantile optimization produced optimal irrigation strategies that differed from those reached with expected-value optimization, responding more accurately to the decision maker's preferences. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
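The quantile criterion described in the abstract above is easy to illustrate: rather than ranking candidate strategies by expected outcome, rank them by the α-quantile of their simulated outcomes. The sketch below is a toy illustration of the criterion only, not the P2 q branching algorithm; the strategy names and sample data are invented.

```python
import numpy as np

def best_strategy(outcomes, alpha):
    """Return the strategy whose alpha-quantile of simulated outcomes is
    largest (higher outcomes are better); `outcomes` maps strategy name
    to a list of simulated results."""
    return max(outcomes, key=lambda s: np.quantile(outcomes[s], alpha))
```

A risk-averse decision maker (small α) can thus prefer a steady strategy even when a riskier one has the larger expected value, which is exactly the divergence from expected-value optimization the abstract reports.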
228. Simulation and Estimation of Extreme Quantiles and Extreme Probabilities.
- Author
-
Guyader, Arnaud, Hengartner, Nicolas, and Matzner-Løber, Eric
- Subjects
SIMULATION methods & models ,RANDOM numbers ,DISTRIBUTION (Probability theory) ,MATHEMATICAL mappings ,ALGORITHMS ,MONTE Carlo method ,DIGITAL watermarking ,MULTILEVEL models - Abstract
Let X be a random vector with distribution μ on ℝ and Φ be a mapping from ℝ to ℝ. That mapping acts as a black box, e.g., the result of some computer experiment for which no analytical expression is available. This paper presents an efficient algorithm to estimate a tail probability given a quantile, or a quantile given a tail probability. The algorithm improves upon existing multilevel splitting methods and can be analyzed using Poisson process tools that lead to an exact description of the distribution of the estimated probabilities and quantiles. The performance of the algorithm is demonstrated in a problem related to digital watermarking. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
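The multilevel-splitting idea behind such extreme-probability estimators can be sketched in a toy setting where exact conditional resampling is available: for X ~ Exp(1), the memoryless property gives X − L | X > L ~ Exp(1). The "last particle" variant below is one member of the splitting family, shown under these stated assumptions, not the paper's exact algorithm (which handles a general black-box Φ).

```python
import math
import random

def tail_probability(q, n=1000, seed=0):
    """Last-particle multilevel splitting estimate of P(X > q) for X ~ Exp(1).
    Repeatedly replace the minimum particle by one resampled conditionally
    above it; each replacement multiplies the estimate by (1 - 1/n)."""
    rng = random.Random(seed)
    particles = [rng.expovariate(1.0) for _ in range(n)]
    m = 0  # number of levels crossed before all particles exceed q
    while True:
        low = min(particles)
        if low > q:
            break
        i = particles.index(low)
        # memoryless conditional resampling: X | X > low  =  low + Exp(1)
        particles[i] = low + rng.expovariate(1.0)
        m += 1
    return (1.0 - 1.0 / n) ** m
```

For q = 4 the true value is e^(-4) ≈ 0.018, far below what a naive Monte Carlo estimate of the same cost would resolve reliably.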
229. Approximation in the M/ G/1 Queue with Preemptive Priority.
- Author
-
Hamadouche, Naima and Aïssani, Djamil
- Subjects
APPROXIMATION theory ,MARKOV processes ,PERTURBATION theory ,MATHEMATICAL inequalities ,ERROR analysis in mathematics ,ALGORITHMS ,SIMULATION methods & models - Abstract
The main purpose of this paper is to use the strong stability method to approximate the characteristics of the M/ G/1 queue with preemptive priority by those of the classical M/ G/1 queue. The latter is simpler and more exploitable in practice. After perturbing the arrival intensity of the priority requests, we derive the stability conditions and then obtain the stability inequalities with an exact computation of constants. From those theoretical results, we elaborate an algorithm allowing us to verify the approximation conditions and to quantify the numerical error made. In order to assess the efficiency of this approach, we consider a concrete example whose results are compared with those obtained by simulation. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
230. Physicochemical simulation of the evolution of small lakes in a cold climate.
- Author
-
Sklyarova, O., Chudnenko, K., and Bychinskii, V.
- Subjects
ALGORITHMS ,SIMULATION methods & models ,HYDROGEOLOGY ,LAKES ,COLD (Temperature) - Abstract
The paper presents a generalized algorithm for the simulation of multiyear cycles in the variations of the chemical composition of lake waters with regard for the seasonal specifics of hydrogeochemical processes. Data were obtained on the behavior of the hydrogeological system over a time span of 500-1000 years. Each of the simulated model cycles involved successively alternating 'summer-winter' time periods. Terrestrial exchange fluxes between reservoirs, groundwater inflow, atmospheric precipitation, and the evaporation of lake water were taken into account for summer periods, whereas winter conditions were simulated as corresponding to the development of the ice phase, the absence of water exchange fluxes, a change from oxidizing to reducing conditions, and the burial of solid phases in the sediments. The results of our physicochemical simulations with the use of data on the composition of natural hydrogeological systems are in good agreement with natural observations and make it possible to realistically predict the evolution of small lakes in the Ol'khon area. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
231. Classification-based self-adaptive differential evolution with fast and reliable convergence performance.
- Author
-
Xiao-Jun Bi and Jing Xiao
- Subjects
PARAMETER estimation ,ALGORITHMS ,DIFFERENTIAL equations ,ADAPTIVE control systems ,SIMULATION methods & models - Abstract
To avoid the problems of slow and premature convergence of the differential evolution (DE) algorithm, this paper presents a new DE variant named p-ADE. It improves the convergence performance by implementing a new mutation strategy 'DE/rand-to-best/pbest', together with a classification mechanism, and controlling the parameters in a dynamic adaptive manner, where the 'DE/rand-to-best/pbest' utilizes the current best solution together with the best previous solution of each individual to guide the search direction. The classification mechanism helps to balance the exploration and exploitation of individuals with different fitness characteristics, thus improving the convergence rate. Dynamic self-adaptation is beneficial for controlling the extent of variation for each individual. Also, it avoids the requirement for prior knowledge about parameter settings. Experimental results confirm the superiority of p-ADE over several existing DE variants as well as other significant evolutionary optimizers. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
232. A memetic ant colony optimization algorithm for the dynamic travelling salesman problem.
- Author
-
Mavrovouniotis, Michalis and Yang, Shengxiang
- Subjects
ANT colonies ,COMBINATORIAL optimization ,GLOBAL environmental change ,ALGORITHMS ,STOCHASTIC convergence ,SIMULATION methods & models - Abstract
Ant colony optimization (ACO) has been successfully applied to combinatorial optimization problems, e.g., the travelling salesman problem (TSP), under stationary environments. In this paper, we consider the dynamic TSP (DTSP), where cities are replaced by new ones during the execution of the algorithm. Under such environments, traditional ACO algorithms face a serious challenge: once they converge, they cannot adapt efficiently to environmental changes. To improve the performance of ACO on the DTSP, we investigate a hybridized ACO with local search (LS), called the Memetic ACO (M-ACO) algorithm, which is based on the population-based ACO (P-ACO) framework and an adaptive inver-over operator, to solve the DTSP. Moreover, to address premature convergence, we introduce random immigrants to the population of M-ACO when identical ants are stored. The simulation experiments on a series of dynamic environments generated from a set of benchmark TSP instances show that LS is beneficial for ACO algorithms when applied to the DTSP, since it achieves better performance than other traditional ACO and P-ACO algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
233. Optimal Dynamic Framed Slotted ALOHA Based Anti-collision Algorithm for RFID Systems.
- Author
-
Deng, Der-Jiunn and Tsao, Hsuan-Wei
- Subjects
RADIO frequency identification systems ,RADIO frequency ,SIMULATION methods & models ,DATA integrity ,ALGORITHMS ,DATA analysis ,DATA transmission systems - Abstract
In a Radio Frequency IDentification (RFID) system, one of the most important issues affecting data integrity is collision resolution between tags when they transmit their data to the reader. Among tag anti-collision algorithms, Dynamic Framed Slotted ALOHA (DFSA) has been widely employed as a collision resolution scheme for sharing the medium when multiple tags respond to the reader's signal command. According to previous works, the performance of the DFSA algorithm is optimal when the frame size equals the number of unidentified tags inside the interrogation zone. However, based on our research results, even when the frame size equals the number of tags, collisions occur frequently, and this severely affects system performance because it increases power consumption and tag reading time. Since the proper choice of the frame size has a great influence on overall system performance, in this paper we develop an analytical model to study the system throughput of DFSA-based RFID systems, and then use this model to search for an optimal frame size that maximizes the system throughput given the current number of unidentified tags. In addition to theoretical analysis, simulations are conducted to evaluate its performance. Compared with the traditional DFSA anti-collision algorithm, the simulation results show that the proposed scheme achieves better performance with respect to tag collision probability and tag reading time. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
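The claim that DFSA throughput peaks when the frame size matches the tag count can be checked numerically with the textbook slotted-ALOHA model, in which each of n tags picks one of L slots uniformly at random. This is a standard analysis sketched for illustration, not necessarily the authors' exact analytical model.

```python
def expected_throughput(n, L):
    """Expected fraction of frame slots holding exactly one tag reply when
    n tags each choose one of L slots uniformly at random: a given tag is
    alone in its slot with probability (1 - 1/L)^(n-1)."""
    return (n / L) * (1 - 1 / L) ** (n - 1)

def optimal_frame(n, L_max=256):
    """Frame size maximizing expected throughput for n unidentified tags."""
    return max(range(1, L_max + 1), key=lambda L: expected_throughput(n, L))
```

For large n the maximum sits at L ≈ n with throughput approaching 1/e ≈ 0.368, which is why DFSA readers try to track the unidentified-tag count frame by frame.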
234. Composite Function Wavelet Neural Networks with Differential Evolution and Extreme Learning Machine.
- Author
-
Jiuwen Cao, Zhiping Lin, and Guang-Bin Huang
- Subjects
ARTIFICIAL neural networks ,WAVELETS (Mathematics) ,ALGORITHMS ,SIMULATION methods & models ,APPROXIMATION theory ,MATHEMATICAL functions ,LEARNING ,REGRESSION analysis ,MACHINERY - Abstract
In this paper, we introduce a new learning method for composite function wavelet neural networks (CFWNN) by combining the differential evolution (DE) algorithm with the extreme learning machine (ELM), referred to in short as CWN-E-ELM. The recently proposed CFWNN trained with ELM (CFWNN-ELM) has several promising features, but CFWNN-ELM may contain redundant nodes because the number of hidden nodes is assigned a priori, while the input weight matrix and the hidden node parameter vector are randomly generated once and never changed during the learning phase. DE is introduced into CFWNN-ELM to search for the optimal network parameters and to reduce the number of hidden nodes used in the network. Simulations on several artificial function approximations, real-world data regressions, and a chaotic signal prediction problem show the advantages of the proposed CWN-E-ELM. Compared with CFWNN-ELM, CWN-E-ELM has a much more compact network size, and compared with several relevant methods, CWN-E-ELM achieves better generalization performance. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
235. The partial sequenced route query with traveling rules in road networks.
- Author
-
Chen, Haiquan, Ku, Wei-Shinn, Sun, Min-Te, and Zimmermann, Roger
- Subjects
INTELLIGENT transportation systems ,GEOGRAPHIC information systems ,ADVANCED traveler information systems ,LOCATION-based services ,ALGORITHMS ,SIMULATION methods & models ,COMPUTER simulation - Abstract
In modern geographic information systems, route search represents an important class of queries. In route search related applications, users may want to define a number of traveling rules (traveling preferences) when they plan their trips. However, these traveling rules are not considered in most existing techniques. In this paper, we propose a novel spatial query type, the multi-rule partial sequenced route (MRPSR) query, which enables efficient trip planning with user-defined traveling rules. The MRPSR query provides a unified framework that subsumes the well-known trip planning query (TPQ) and the optimal sequenced route (OSR) query. The difficulty in answering MRPSR queries lies in how to integrate multiple choices of points-of-interest (POI) with traveling rules when searching for satisfying routes. We prove that the MRPSR query is NP-hard and then provide three algorithms by mapping traveling rules to an activity-on-vertex network. Afterwards, we extend all the proposed algorithms to road networks. By utilizing both real and synthetic POI datasets, we investigate the performance of our algorithms. The results of extensive simulations show that our algorithms are able to answer MRPSR queries effectively and efficiently over the underlying road networks. Compared to the Light Optimal Route Discoverer (LORD)-based brute-force solution, the response time of our algorithms is significantly reduced while the distances of the computed routes are only slightly longer than the shortest route. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
236. Subjective Quality Assessment of H.264/AVC Video Streaming with Packet Losses.
- Author
-
De Simone, Francesca, Naccari, Matteo, Tagliasacchi, Marco, Dufaux, Frederic, Tubaro, Stefano, and Ebrahimi, Touradj
- Subjects
STREAMING video & television ,ALGORITHMS ,VIDEOS ,SIMULATION methods & models ,STREAMING technology ,IMAGE quality analysis - Abstract
Research in the field of video quality assessment relies on the availability of subjective scores, collected by means of experiments in which groups of people are asked to rate the quality of video sequences. The availability of subjective scores is fundamental to enable validation and comparative benchmarking of the objective algorithms that try to predict human perception of video quality by automatically analyzing the video sequences, in a way to support reproducible and reliable research results. In this paper, a publicly available database of subjective quality scores and corrupted video sequences is described. The scores refer to 156 sequences at CIF and 4CIF spatial resolutions, encoded with H.264/AVC and corrupted by simulating the transmission over an error-prone network. The subjective evaluation has been performed by 40 subjects at the premises of two academic institutions, in standard-compliant controlled environments. In order to support reproducible research in the field of full-reference, reduced-reference, and no-reference video quality assessment algorithms, both the uncompressed files and the H.264/AVC bitstreams, as well as the packet loss patterns, have been made available to the research community. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
237. Clustering of voltage control areas in power system using shuffled frog-leaping algorithm.
- Author
-
Rameshkhah, F., Abedi, M., and Hosseinian, S.
- Subjects
ELECTRIC potential ,ALGORITHMS ,SIMULATION methods & models ,ARTIFICIAL intelligence ,NUMERICAL analysis ,COMPARATIVE studies ,VOLTAGE regulators ,ELECTRIC power systems - Abstract
Determination of critical voltage control areas (VCAs) is a very important task in both voltage stability assessment and control. However, it is impossible to detect VCAs in real time during the appearance of emergency cases in large-scale power systems. Therefore, it is a reasonable solution to employ an artificial intelligent system (AIS) to detect VCAs and to identify the prone buses for monitoring and control purposes as quickly as possible. The training data must contain the simulation results and the historical data collected from a wide range of emergency cases. Using this database, a clustering process which provides finite clusters of all possible VCAs and a classification function which assigns each emergency or stress case to its own cluster of VCAs are the main stages in preparing an AIS for automatic VCA identification. In this paper a novel data clustering method based on the shuffled frog-leaping algorithm (SFLA) is presented for the first task. The results are a finite number of clustered groups of VCAs with a representative vector of participation factors (PF) for each group. SFLA combines the benefits of genetic-based memetic algorithms and of social behavior-based particle swarm optimization methods. In the present study the application of SFLA to data clustering is also compared with the most popular analytic clustering algorithm, K-means, and with genetic-algorithm-based data clustering, to demonstrate the validity of the proposed clustering method. Numerical results are also presented for the IEEE 14-bus test system and an artificial database. The comparative results show the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
238. A robust training algorithm of discrete-time MIMO RNN and application in fault tolerant control of robotic system.
- Author
-
Wu, Yilei, Sun, Fuchun, Zheng, Jinchuan, and Song, Qing
- Subjects
LYAPUNOV functions ,ALGORITHMS ,ARTIFICIAL neural networks ,COMPUTER science ,SIMULATION methods & models - Abstract
In this paper, a novel robust training algorithm for a multi-input multi-output recurrent neural network and its application in the fault tolerant control of a robotic system are investigated. The proposed scheme optimizes gradient-type training on the basis of three new adaptive parameters, namely, a dead-zone learning rate, a hybrid learning rate, and a normalization factor. The adaptive dead-zone learning rate is employed to improve the steady-state response. The normalization factor is used to maximize the gradient depth in the training, so as to improve the transient response. The hybrid learning rate switches the training between the back-propagation and the real-time recurrent learning mode, such that the training is robustly stable. The weight convergence and L stability of the algorithm are proved via a Lyapunov function and Cluett's law, respectively. Based upon the theoretical results, we carry out simulation studies of a two-link robot arm position tracking control system. A computed torque controller is designed to provide a specified closed-loop performance in a fault-free condition, and then the RNN compensator and the robust training algorithm are employed to recover the performance in case a fault occurs. Comparisons are given to demonstrate the advantages of the control method and the proposed training algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
239. Combining GA and iterative searching DOA estimation for CDMA signals.
- Author
-
Chang, Ann-Chen and Hung, Jui-Chung
- Subjects
ESTIMATION theory ,CODING theory ,ALGORITHMS ,SIMULATION methods & models ,GENETIC programming ,COMPUTER science - Abstract
This paper deals with direction-of-arrival (DOA) estimation based on an iterative searching technique for code-division multiple access signals. It has been shown that the iterative searching technique is prone to converging to a local maximum, causing errors in DOA estimation. In conjunction with a genetic algorithm for selecting the initial search angle, we present an efficient approach that achieves the advantages of iterative DOA estimation with fast convergence and more accurate estimates over existing conventional spectral searching methods. Finally, several computer simulation examples are provided for illustration and comparison. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
240. Abrasive waterjet machining simulation by SPH method.
- Author
-
Jianming, Wang, Na, Gao, and Wenjun, Gong
- Subjects
ABRASIVE blasting ,WATER jet cutting ,SIMULATION methods & models ,DEFORMATIONS (Mechanics) ,FINITE element method ,ALGORITHMS - Abstract
Abrasive waterjet machining (AWJM) is a non-conventional process. The mechanism of material removal in AWJM for ductile materials and existing erosion models are reviewed in this paper. To overcome the difficulties of fluid–solid interaction and extra-large deformation in the finite element method (FEM), an SPH-coupled FEM model for abrasive waterjet machining simulation is presented, in which the abrasive waterjet is modeled by SPH particles and the target material is modeled by FE. The two parts interact through a contact algorithm. The novelty of this model is its multi-material SPH particles, which contain abrasive and water mixed together uniformly. To build the model, a randomized algorithm is proposed. The material model for the abrasive is first presented. Utilizing this model, an abrasive waterjet penetrating the target material at high velocity is simulated and the mechanism of erosion is depicted. The relationship between the depth of penetration and jet parameters, including water pressure and traverse speed, etc., is analyzed based on the simulation. The results agree well with the experimental data. This helps in understanding the abrasive waterjet cutting mechanism and in optimizing the operating parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
241. Hybrid TOA–AOA Location Positioning Techniques in GSM Networks.
- Author
-
Deligiannis, Nikos and Louvros, Spiros
- Subjects
GSM communications ,DIGITAL signal processing ,EMULATION software ,ALGORITHMS ,TELEPHONE signaling ,SIMULATION methods & models - Abstract
Positioning algorithms and their implementation in mobile networks are being investigated in the literature due to their importance in location services. Nowadays, the need for superior accuracy has cast attention to hybrid positioning techniques. In this paper, we introduce a novel algorithm for the identification of NLOS propagation using both angle and time estimates, which leads to enhanced versions of the Time of Arrival and Angle of Arrival positioning methods. Furthermore, a novel GSM procedure for implementing the latter techniques is proposed. In contrast to specified network-based GSM solutions (U-TDOA), the proposed procedure requires minimal modifications to the GSM Phase 2+ infrastructure and protocol stack, and therefore increases upgrade flexibility and minimizes implementation cost. The proposed GSM positioning procedure has been experimentally validated using a GSM emulator, and the modified signalling messages given by a measurement tool of the emulator are exhibited. Finally, the enhanced cost functions are experimentally evaluated using several GSM-like, high-capacity simulation environments, and the results show a significant reduction in location error compared to the conventional techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
242. Dislocation modeling of quasi-static crack propagation in an elasto-plastic medium.
- Author
-
Stoll, Anke and Wilkinson, Angus J.
- Subjects
ALGORITHMS ,DISLOCATIONS in metals ,CRACKING process (Petroleum industry) ,SIMULATION methods & models - Abstract
A dislocation model for simulating two-dimensional quasi-static crack propagation is presented. The crack and plastic flow along slip planes are described using dislocation dipoles. A stationary crack can be modeled, as well as a crack propagating along a straight line inclined at an arbitrary angle to a free surface of a semi-infinite medium. Cracks are also allowed to kink. A superdipole algorithm is introduced to save simulation time without losing important information and necessary geometric details. It reduces the number of dislocation dipoles on slip planes in the plastic wake. The paper gives results on crack shapes for stationary and advancing cracks and describes how the size of the plastic zone depends on the crack inclination angle. Results on stress intensity factors (SIFs) are given using two different approaches; kinking cracks are also introduced and SIFs at kinked crack tips are calculated. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
243. Robust Adaptive Beamforming Under Quadratic Constraint with Recursive Method Implementation.
- Author
-
Xin Song, Jinkuan Wang, and Bin Wang
- Subjects
ALGORITHMS ,COMPUTER simulation ,SIMULATION methods & models ,BEAMFORMING ,SIGNAL processing - Abstract
When adaptive arrays are applied to practical problems, the performance of conventional adaptive beamforming algorithms is known to degrade substantially in the presence of even slight mismatches between the actual and presumed array responses to the desired signal. Similar performance degradation can occur because of data nonstationarity and small training sample size, even when the signal steering vector is known exactly. In this paper, to account for mismatches, we propose a robust adaptive beamforming algorithm implementing a quadratic inequality constraint with recursive updating, based on explicit modeling of uncertainties in the desired signal array response and data covariance matrix. We show that the proposed algorithm belongs to the class of diagonal loading approaches, but the diagonal loading terms can be precisely calculated based on the given level of uncertainty in the signal array response and data covariance matrix. A variable diagonal loading term is added at each recursive step, which leads to a simpler closed-form algorithm. Our proposed robust recursive algorithm improves overall robustness against signal steering vector mismatches and small training sample size, enhances array system performance under random perturbations in sensor parameters, and keeps the mean output array SINR consistently close to the optimal one. Moreover, the proposed robust adaptive beamformer can be computed at low complexity compared with conventional adaptive beamforming algorithms. Computer simulation results demonstrate the excellent performance of our proposed algorithm compared with existing adaptive beamforming algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
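Diagonal loading as described above has a compact closed form; the sketch below shows a simplified batch MVDR beamformer with a fixed loading term γ, whereas the paper derives a variable loading term updated at each recursive step. The function name is an assumption made for illustration.

```python
import numpy as np

def loaded_mvdr(R, a, gamma):
    """MVDR weights with fixed diagonal loading gamma:
        w = (R + gamma I)^{-1} a / (a^H (R + gamma I)^{-1} a),
    where R is the sample covariance and a the presumed steering vector.
    Loading regularizes R against small-sample and mismatch effects."""
    Rl = R + gamma * np.eye(R.shape[0])
    x = np.linalg.solve(Rl, a)      # (R + gamma I)^{-1} a
    return x / (a.conj() @ x)       # normalize for a distortionless response
```

By construction the weights satisfy w^H a = 1, i.e., the presumed look direction passes undistorted regardless of the loading level.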
244. Analysis of Shortest Paths and Subscriber Line Lengths in Telecommunication Access Networks.
- Author
-
Gloaguen, C., Fleischer, F., Schmidt, H., and Schmidt, V.
- Subjects
TELECOMMUNICATION ,ESTIMATION theory ,MONTE Carlo method ,ALGORITHMS ,SIMULATION methods & models - Abstract
We consider random geometric models for telecommunication access networks and analyse their serving zones, which can be given, for example, by a class of so-called Cox–Voronoi tessellations (CVTs). Such CVTs are constructed with respect to locations of network components, the nuclei of their induced cells, which are scattered randomly along lines induced by a Poisson line process. In particular, we consider two levels of network components and investigate these hierarchical models with respect to mean shortest path length and mean subscriber line length, respectively. We explain point-process techniques which allow these characteristics to be computed without simulating the locations of lower-level components. We sustain our results by numerical examples obtained through Monte Carlo simulations, where we used simulation algorithms for typical Cox–Voronoi cells derived in a previous paper. We also briefly discuss tests of correctness of the implemented algorithms. Finally, we present a short outlook on possible extensions concerning multi-level models and iterated random tessellations. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
245. Convergence Analysis of Non-Negative Matrix Factorization for BSS Algorithm.
- Author
-
Shangming Yang and Zhang Yi
- Subjects
STOCHASTIC convergence ,NONNEGATIVE matrices ,FACTORIZATION ,ALGORITHMS ,BLIND source separation ,VECTOR analysis ,INVARIANT sets ,IMAGE analysis ,SIMULATION methods & models - Abstract
In this paper the convergence of a recently proposed BSS algorithm is analyzed. This algorithm utilized the Kullback–Leibler divergence to generate non-negative matrix factorizations of the observation vectors, which is considered an important aspect of the BSS algorithm. In the analysis, some invariant sets are constructed so that the convergence of the algorithm can be guaranteed under the given conditions. In the simulation we successfully applied the algorithm and its analysis results to the blind source separation of mixed images and signals. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
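A widely used multiplicative-update scheme for Kullback–Leibler NMF (due to Lee and Seung) gives a concrete picture of the kind of factorization such a BSS algorithm relies on. It is shown here for context as a standard scheme, not necessarily the exact update rule whose invariant sets the paper analyzes; the function name and parameters are assumptions.

```python
import numpy as np

def nmf_kl(V, r, iters=500, eps=1e-9, seed=0):
    """Factor a non-negative matrix V (n x m) as V ~ W @ H with W (n x r),
    H (r x m) >= 0, using Lee-Seung multiplicative updates that decrease
    the generalized KL divergence D(V || WH) at every step."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    ones = np.ones_like(V)
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ ones + eps)   # H update
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (ones @ H.T + eps)   # W update
    return W, H
```

Because the updates are purely multiplicative, non-negativity of W and H is preserved automatically, which is what makes the factors interpretable as sources and mixing weights in BSS.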
246. Perfect Simulation of Infinite Range Gibbs Measures and Coupling with Their Finite Range Approximations.
- Author
-
Galves, A., Löcherbach, E., and Orlandi, E.
- Subjects
SIMULATION methods & models ,APPROXIMATION theory ,ALGORITHMS ,ALGEBRA ,PERFECT simulation (Statistics) - Abstract
In this paper we address the questions of perfectly sampling a Gibbs measure with infinite range interactions and of perfectly sampling the measure together with its finite range approximations. We solve these questions by introducing a perfect simulation algorithm for the measure and for the coupled measures. The algorithm works for general Gibbsian interaction under requirements on the tails of the interaction. As a consequence we obtain an upper bound for the error we make when sampling from a finite range approximation instead of the true infinite range measure. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
247. On Expected Probabilistic Polynomial-Time Adversaries: A Suggestion for Restricted Definitions and Their Benefits.
- Author
-
Goldreich, Oded
- Subjects
POLYNOMIALS ,ALGORITHMS ,SIMULATION methods & models ,PROBABILITY theory ,MATHEMATICAL statistics - Abstract
This paper concerns the possibility of developing a coherent theory of security when feasibility is associated with expected probabilistic polynomial-time (expected PPT). The source of difficulty is that the known definitions of expected PPT strategies (i.e., expected PPT interactive machines) do not support natural results of the type presented below. To overcome this difficulty, we suggest new definitions of expected PPT strategies, which are more restrictive than the known definitions (but nevertheless extend the notion of expected PPT noninteractive algorithms). We advocate the conceptual adequacy of these definitions and point out their technical advantages. Identifying a natural subclass of black-box simulators, called normal, we prove two results for them. Specifically, a normal black-box simulator is required to make an expected polynomial number of steps when given oracle access to any strategy, where each oracle call is counted as a single step. This natural property is satisfied by most known simulators and is easy to verify. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
248. Ant colony optimization algorithm for stochastic project crashing problem in PERT networks using MC simulation.
- Author
-
Aghaie, Abdollah and Mokhtari, Hadi
- Subjects
ALGORITHMS ,MONTE Carlo method ,SIMULATION methods & models ,PERT (Network analysis) ,MANUFACTURING processes - Abstract
This paper describes a new approach, based on the ant colony optimization (ACO) metaheuristic and the Monte Carlo (MC) simulation technique, for the project crashing problem (PCP) under uncertainty. To our knowledge, this is the first application of the ACO technique to the stochastic project crashing problem (SPCP) in the published literature. A confidence-level-based approach is proposed for the SPCP in program evaluation and review technique (PERT) type networks, where activities are subject to discrete cost functions and assumed to be exponentially distributed. The objective of the proposed model is to optimally improve the probability of project completion by a prespecified due date to a predefined level. In order to solve the constructed model, we apply the ACO algorithm and a path criticality index together. The proposed approach uses the path criticality concept to select the most critical path via the MC simulation technique. The developed ACO is then used to solve a nonlinear integer mathematical program for the selected path. To demonstrate the model's effectiveness, a large-scale illustrative example is presented, and several computational experiments are conducted to determine the appropriate levels of the ACO parameters, which lead to accurate results with reasonable computational time. Finally, a comparative study is conducted to validate the ACO approach using several randomly generated problems. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
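The path criticality index the abstract relies on can be estimated by straightforward Monte Carlo: sample every activity duration, find the longest path, and count how often each path wins. The sketch below does exactly that for a hypothetical toy network (the path list, activity names, and exponential rates are illustrative, not from the paper); the subsequent ACO optimization step is omitted.

```python
import random

# Toy PERT network as an activity list per path (hypothetical data;
# each rate is the parameter of an exponential activity duration).
PATHS = {
    "A-C": [("A", 0.5), ("C", 0.25)],
    "B-C": [("B", 1.0), ("C", 0.25)],
    "A-D": [("A", 0.5), ("D", 0.4)],
}

def path_criticality(n_runs=20000, seed=1):
    """Estimate each path's criticality index: the fraction of Monte
    Carlo runs in which that path is the longest one."""
    rng = random.Random(seed)
    counts = {p: 0 for p in PATHS}
    for _ in range(n_runs):
        # one shared realization of every activity duration per run,
        # so paths that share an activity share its sampled duration
        durations = {}
        for acts in PATHS.values():
            for name, rate in acts:
                if name not in durations:
                    durations[name] = rng.expovariate(rate)
        lengths = {p: sum(durations[n] for n, _ in acts)
                   for p, acts in PATHS.items()}
        counts[max(lengths, key=lengths.get)] += 1
    return {p: c / n_runs for p, c in counts.items()}
```

The path with the largest index would then be the one handed to the optimization stage, matching the selection role the abstract assigns to the criticality concept.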
249. An Efficient Time-Domain Algorithm for the Simulation of Heterogeneous Dispersive Structures.
- Author
-
Al-Jabr, A. A. and Alsunaidi, M. A.
- Subjects
ALGORITHMS ,SIMULATION methods & models ,LORENTZ force ,FINITE differences ,TIME-domain analysis - Abstract
In this paper a new general numerical algorithm for the simulation of heterogeneous dispersive structures is presented. The general algorithm is based on the ADE-FDTD approach. It finds its strength in the simulation of cases where different materials with different dispersion types are present. Several numerical examples are presented and results are compared to analytical solutions. While having the same level of accuracy, the proposed algorithm offers savings in both memory and computational requirements, compared to other ADE-based methods. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
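A common way to realize the kind of unified ADE-FDTD update the abstract describes (a sketch of the general technique, not the authors' exact formulation) is to discretize each material's auxiliary differential equation into one two-term polarization recurrence, so that only the per-cell coefficients distinguish the dispersion models. Units are normalized (eps0 = 1) and all material parameters below are illustrative.

```python
# Normalized units; all material parameters are illustrative.
EPS0 = 1.0

def debye_coeffs(deps, tau, dt):
    """Recurrence coefficients for the Debye ADE
    dP/dt + P/tau = eps0*deps/tau * E  (semi-implicit in P)."""
    a = dt / (2.0 * tau)
    return ((1 - a) / (1 + a), 0.0, EPS0 * deps * (dt / tau) / (1 + a))

def lorentz_coeffs(deps, w0, delta, dt):
    """Recurrence coefficients for the Lorentz ADE
    d2P/dt2 + delta*dP/dt + w0^2*P = eps0*deps*w0^2 * E
    (central differences in time)."""
    d = 1 + delta * dt / 2
    return ((2 - (w0 * dt) ** 2) / d,
            -(1 - delta * dt / 2) / d,
            EPS0 * deps * (w0 * dt) ** 2 / d)

def ade_step(P, Pm1, E, coeffs):
    """One polarization update P^{n+1} = c1*P^n + c2*P^{n-1} + c3*E^n;
    the same recurrence serves every dispersion model."""
    c1, c2, c3 = coeffs
    return c1 * P + c2 * Pm1 + c3 * E
```

In a full grid each cell would store its own coefficient triple, so Debye and Lorentz cells share a single update loop; a Drude cell follows from the Lorentz form by setting `w0 = 0` and replacing `deps*w0^2` with the plasma frequency squared. This is the sense in which heterogeneous dispersion types can coexist in one time-stepping kernel.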
250. Using virtual scans for improved mapping and evaluation.
- Author
-
Rolf Lakaemper and Nagesh Adluru
- Subjects
CARTOGRAPHY ,ALGORITHMS ,VIRTUAL reality ,DATA analysis ,AUTONOMOUS robots ,RESCUES ,SIMULATION methods & models - Abstract
In this paper we present a system to enhance the performance of feature correspondence based alignment algorithms for laser scan data. We show how this system can be utilized as a new approach for evaluation of mapping algorithms. Assuming a certain a priori knowledge, our system augments the sensor data with hypotheses ("Virtual Scans") about ideal models of objects in the robot's environment. These hypotheses are generated by analysis of the current aligned map estimated by an underlying iterative alignment algorithm. The augmented data is used to improve the alignment process. Feedback between data alignment and data analysis confirms, modifies, or discards the Virtual Scans in each iteration. Experiments with a simulated scenario and real world data from a rescue robot scenario show the applicability and advantages of the approach. By replacing the estimated "Virtual Scans" with ground truth maps our system can provide a flexible way for evaluating different mapping algorithms in different settings. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
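The Virtual Scans idea feeds hypothesized model points into an otherwise ordinary scan alignment step. As an illustration of that underlying step only (not the authors' full feedback system), the sketch below computes the least-squares rigid transform between paired 2D point sets; augmenting `src` and `dst` with matched virtual points would simply add pairs that further constrain the fit.

```python
import math

def align_2d(src, dst):
    """Least-squares rigid transform (rotation theta, translation
    (tx, ty)) mapping paired 2D points src onto dst (2D Kabsch)."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        ax, ay = xs - cx_s, ys - cy_s     # centered source point
        bx, by = xd - cx_d, yd - cy_d     # centered target point
        sxx += ax * bx + ay * by          # dot product term
        sxy += ax * by - ay * bx          # cross product term
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, tx, ty
```

An iterative aligner would alternate this solve with re-matching of correspondences; the paper's contribution sits on top of such a loop, deciding which virtual points to keep, modify, or discard between iterations.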