126 results
Search Results
102. A modified frequency-domain block LMS algorithm with guaranteed optimal steady-state performance.
- Author
- Lu, Jing, Qiu, Xiaojun, and Zou, Haishan
- Subjects
- *PERFORMANCE evaluation, *COMPUTATIONAL complexity, *ALGORITHMS, *SIMULATION methods & models, *SIGNALS & signaling, *STOCHASTIC convergence
- Abstract
The bin-normalized frequency-domain block LMS (FBLMS) algorithm has a low computational burden and potentially fast convergence; however, it suffers from a biased steady-state solution when the reference signal lags behind the desired signal or the adaptive filter is of insufficient length. This paper proposes a unified framework for the FBLMS algorithm that can be used to comprehensively analyze its steady-state behavior. Furthermore, a modified FBLMS algorithm with guaranteed optimal steady-state performance is proposed based on this framework. Simulations are carried out to demonstrate the benefit of the proposed algorithm. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
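The paper's modified algorithm is not reproduced in the abstract; as background, the classical constrained overlap-save FBLMS update it builds on can be sketched as follows. This is a toy sketch with a naive O(n²) DFT standing in for the FFT; the plant `h`, block size, and step size are illustrative assumptions, not values from the paper.

```python
import cmath
import random

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n) for k in range(n))
            for i in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * i * k / n) for k in range(n)) / n
            for i in range(n)]

def fblms(x, d, m, mu):
    """Constrained overlap-save frequency-domain block LMS, block size m."""
    W = [0j] * (2 * m)                        # frequency-domain weights
    for b in range(m, len(x) - m + 1, m):
        seg = x[b - m:b + m]                  # 2m-sample overlap-save window
        X = dft(seg)
        y = [v.real for v in idft([Xi * Wi for Xi, Wi in zip(X, W)])[m:]]
        e = [dk - yk for dk, yk in zip(d[b:b + m], y)]
        E = dft([0.0] * m + e)                # zero-pad errors to length 2m
        g = idft([Xi.conjugate() * Ei for Xi, Ei in zip(X, E)])
        # gradient constraint: keep only the first m (causal) correlation taps
        G = dft([gi.real for gi in g[:m]] + [0.0] * m)
        W = [Wi + mu * Gi for Wi, Gi in zip(W, G)]
    return [v.real for v in idft(W)[:m]]      # recover time-domain weights

random.seed(0)
h = [0.5, -0.3, 0.2, 0.1]                     # unknown plant to identify
x = [random.gauss(0.0, 1.0) for _ in range(4000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w = fblms(x, d, m=4, mu=0.01)
print([round(wi, 3) for wi in w])
```

With sufficient filter length and a noiseless desired signal the estimate converges to the plant; the bias the paper targets appears precisely when these assumptions are violated.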
103. A heuristic approach for the green vehicle routing problem with multiple technologies and partial recharges.
- Author
- Felipe, Ángel, Ortuño, M. Teresa, Righini, Giovanni, and Tirado, Gregorio
- Subjects
- *HEURISTIC, *ROUTING (Computer network management), *TECHNOLOGICAL innovations, *SIMULATION methods & models, *ALGORITHMS
- Abstract
This paper presents several heuristics for a variation of the vehicle routing problem in which the transportation fleet is composed of electric vehicles with limited autonomy that need recharging during their duties. In addition to the routing plan, the amount of energy recharged and the recharge technology used must also be determined. Constructive and local search heuristics are proposed and exploited within a nondeterministic Simulated Annealing framework. Extensive computational results on a variety of instances are reported, evaluating the performance of the proposed algorithms and analyzing the distinctive elements of the problem (size, geographical configuration, recharge stations, autonomy, technologies, etc.). [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
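The Simulated Annealing framework mentioned above wraps problem-specific constructive and local-search moves; the generic SA skeleton underneath can be sketched on a plain tour with a 2-opt move (ignoring recharges and technologies; all parameters here are illustrative, not the paper's):

```python
import math
import random

def tour_length(points, tour):
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def simulated_annealing(points, t0=1.0, cooling=0.999, steps=20000, seed=1):
    rng = random.Random(seed)
    tour = list(range(len(points)))
    cur = tour_length(points, tour)
    best_tour, best = tour[:], cur
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
        c = tour_length(points, cand)
        # accept improvements always, uphill moves with Boltzmann probability
        if c < cur or rng.random() < math.exp((cur - c) / max(t, 1e-12)):
            tour, cur = cand, c
            if cur < best:
                best_tour, best = tour[:], cur
        t *= cooling
    return best_tour, best

rng = random.Random(7)
pts = [(rng.random(), rng.random()) for _ in range(30)]
start = tour_length(pts, list(range(len(pts))))
_, final = simulated_annealing(pts)
print(round(start, 2), round(final, 2))
```

The slowly decreasing temperature lets the search escape local optima early on while behaving like pure local search at the end.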
104. A formal proof of the deadline driven scheduler in PPTL axiomatic system.
- Author
- Zhang, Nan, Duan, Zhenhua, Tian, Cong, and Du, Dingzhu
- Subjects
- *ALGORITHMS, *SIMULATION methods & models, *AUTOMATIC theorem proving, *COMPUTER science, *TECHNOLOGY, *LAMMA language, *MATHEMATICS theorems
- Abstract
This paper presents an approach for verifying the correctness of the feasibility theorem for the deadline driven scheduler (DDS) within the axiom system of Propositional Projection Temporal Logic (PPTL). To do so, the deadline driven scheduling algorithm is modeled as an MSVL (Modeling, Simulation and Verification Language) program, and the feasibility theorem is formulated in PPTL with two parts: a necessary part and a sufficient part. Several lemmas are then abstracted and proved by means of the PPTL axiom system, and with their help the two parts of the theorem are deduced in turn. This case study shows that some real-time properties of systems can be formally verified by theorem proving in the PPTL axiom system. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
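The feasibility theorem being verified is the classical Liu-Layland result for the deadline driven (EDF) scheduler: a periodic task set with deadlines equal to periods is schedulable iff total utilisation is at most 1. As a quick arithmetic illustration (the task parameters are made up):

```python
from fractions import Fraction

def edf_feasible(tasks):
    """Liu-Layland feasibility test for the deadline driven scheduler:
    a task set (C_i, T_i) with deadlines equal to periods is schedulable
    under EDF iff sum(C_i / T_i) <= 1."""
    u = sum(Fraction(c) / Fraction(t) for c, t in tasks)
    return u <= 1

print(edf_feasible([(1, 4), (2, 5), (1, 10)]))   # utilisation 3/4 -> True
```

Exact rational arithmetic avoids any floating-point ambiguity right at the utilisation bound.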
105. A simple tool to evaluate the effect of the urban canyon on daylight level and energy demand in the early stages of building design.
- Author
- Petersen, Steffen, Momme, Amalie Jin, and Hviid, Christian Anker
- Subjects
- *ENERGY industries, *CONSTRUCTION, *ENERGY consumption of buildings, *DAYLIGHT, *SIMULATION methods & models, *ALGORITHMS
- Abstract
Daylight is a restricted resource in urban contexts. Rooms situated in an urban context often have a significant proportion of the sky and the sun blocked out by the surrounding building mass. The reduced direct daylight potential makes daylight reflected from outdoor surfaces an important daylight source for the room. It is therefore important to be able to account for daylight reflections from the urban environment at the early design stage. This paper describes a simplified method that uses a combination of ray tracing, the luminous exitance method, and the concept of the urban canyon to represent daylight levels in rooms situated in an urban setting. The method is implemented in the daylight algorithm of an existing building simulation tool capable of rapid integrated daylight and thermal simulation. Comparisons with the more sophisticated lighting tool Radiance show a maximum relative error of 17%, though it is typically much lower. The accuracy of this approach is therefore considered adequate for the early stages of the building design process. Results from integrated daylight and thermal simulations are presented to illustrate how the tool can be used to investigate the impact of urban canyon parameters on indoor environment and energy performance. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
106. Iterative lead compensation control of nonlinear marine vessels manoeuvring models.
- Author
- Herrero, Elías Revestido, Tomás-Rodríguez, M., and Velasco, Francisco J.
- Subjects
- *MARINE equipment, *SHIPS, *SIMULATION methods & models, *ALGORITHMS, *NONLINEAR statistical models
- Abstract
This paper addresses the problem of control design and implementation for a nonlinear marine vessel manoeuvring model. The authors consider a highly nonlinear 4-DOF vessel model as the basis of this work. The proposed control algorithm combines two methodologies: (i) an iteration technique that approximates the original nonlinear model by a sequence of linear time-varying equations whose solutions converge to the solution of the original nonlinear problem, and (ii) a lead compensation design in which, for each of the iterated linear time-varying systems generated, the controller is optimized at each time on the interval for better tracking performance. The control designed for the last iteration is then applied to the original nonlinear problem. Simulations show good performance of the approximation methodology and accurate tracking for certain manoeuvring cases under the designed lead controller. The main characteristics of the nonlinear system's response are the reduction of the settling time and the elimination of the steady-state error and overshoot. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
107. A comparison between ultrasonic array beamforming and super resolution imaging algorithms for non-destructive evaluation.
- Author
- Chengguang Fan, Caleap, Mihai, Mengchun Pan, and Drinkwater, Bruce W.
- Subjects
- *BEAMFORMING, *NONDESTRUCTIVE testing, *HIGH resolution imaging, *ALGORITHMS, *GOLD standard, *SIMULATION methods & models
- Abstract
In this paper the total focusing method, the so-called gold standard of classical beamforming, is compared with the widely used time-reversal MUSIC super resolution technique in terms of its ability to resolve closely spaced scatterers in a solid. The algorithms are tested with simulated and experimental array data, each containing different noise levels. The performance of the algorithms is evaluated in terms of lateral resolution and sensitivity to noise. It is shown that in the weak-noise situation (SNR > 20 dB), time-reversal MUSIC provides significantly enhanced lateral resolution compared to the total focusing method, breaking the diffraction limit. However, at higher noise levels the total focusing method is shown to be robust, whilst the performance of time-reversal MUSIC is degraded. The influence of multiple scattering on the imaging algorithms is also investigated and shown to be small. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
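The total focusing method itself is a delay-and-sum over every transmit-receive pair of the full matrix capture. A minimal sketch on synthetic single-scatterer data follows; the wave speed, array geometry, and pulse model are all illustrative assumptions, and no noise is added:

```python
import math

C = 6000.0            # assumed wave speed in the solid, m/s
FS = 25e6             # assumed sampling rate, Hz
F0 = 5e6              # assumed centre frequency, Hz

def echo(t):
    """Gaussian-windowed toneburst used as a toy scatterer response."""
    return math.exp(-(2.0 * F0 * t) ** 2) * math.cos(2.0 * math.pi * F0 * t)

def simulate_fmc(elements, scatterer, n_samples=300):
    """Full matrix capture traces for a single point scatterer."""
    data = {}
    for ti, tx in enumerate(elements):
        for ri, rx in enumerate(elements):
            tof = (math.dist(tx, scatterer) + math.dist(rx, scatterer)) / C
            data[ti, ri] = [echo(n / FS - tof) for n in range(n_samples)]
    return data

def tfm(data, elements, grid):
    """Total focusing method: delay-and-sum over every tx/rx pair."""
    image = []
    for p in grid:
        acc = 0.0
        for ti, tx in enumerate(elements):
            for ri, rx in enumerate(elements):
                n = int(round((math.dist(tx, p) + math.dist(rx, p)) / C * FS))
                trace = data[ti, ri]
                if 0 <= n < len(trace):
                    acc += trace[n]
        image.append(abs(acc))
    return image

pitch = 0.6e-3
elems = [((i - 3.5) * pitch, 0.0) for i in range(8)]      # 8-element array
target = (1.0e-3, 10.0e-3)                                # point scatterer
fmc = simulate_fmc(elems, target)
grid = [(x * 1e-3, 10.0e-3) for x in range(-4, 5)]        # line through target
img = tfm(fmc, elems, grid)
print(grid[img.index(max(img))])
```

Only at the true scatterer position do all 64 delayed traces add coherently, which is why the image peaks there.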
108. Quantifying the effects of data integration algorithms on the outcomes of a subsurface–land surface processes model.
- Author
- Shen, Chaopeng, Niu, Jie, and Fang, Kuai
- Subjects
- *DATA integration, *ALGORITHMS, *HYDROLOGIC models, *PARAMETERIZATION, *SIMULATION methods & models, *WATERSHEDS
- Abstract
Trans-disciplinary hydrologic models oriented toward practical questions must be accompanied by accurate parameterization techniques. This paper describes the effects of different choices in the integration of various data sources on the outcomes of the model Process-based Adaptive Watershed Simulator coupled with the Community Land Model (PAWS + CLM). With our Hierarchical Stochastic Selection method, the represented land use percentages are much closer to the raw dataset and lead to a 26% difference in carbon flux relative to the traditional dominant-classes method. River bed elevations extracted using a novel algorithm agree well with the groundwater table and significantly increase the baseflow contribution to streams relative to a coarse-DEM-based model. The inclusion of additional information in the soil pedotransfer functions drastically shifts evapotranspiration, net primary production, and recharge. These results indicate that judicious treatment of input data has strong impacts on hydrologic and ecosystem fluxes. We emphasize the need to report details of data integration procedures. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
109. Distributed consensus for sampled-data control multi-agent systems with missing control inputs.
- Author
- Zhang, Wentao and Liu, Yang
- Subjects
- *DATA analysis, *ALGORITHMS, *SIMULATION methods & models, *LAPLACIAN matrices, *SPECTRUM analysis
- Abstract
This paper is concerned with the distributed consensus problem for general sampled-data control multi-agent systems, without or with a leader, under control inputs missing over some sampling intervals. For these two cases, a distributed adaptive dynamical consensus algorithm is proposed based only on the relative information of the network structure. Under the assumption that the dynamical network contains a directed spanning tree, sufficient consensus conditions are derived under a switched model. The obtained consensus criteria are independent of the network topology; that is, they are valid for both undirected and directed topologies. Moreover, they do not rely on the spectrum or eigenvalues of the Laplacian matrix. Furthermore, quantitative relations among system parameters, such as the sampling period, the admissible control-input missing rate, and the exponential reaching rate of the consensus performance, are established. Finally, simulation examples are given to show the effectiveness of the proposed results. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
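The basic sampled-data consensus iteration with randomly missing control inputs can be sketched as below. This is a much-simplified illustration of the setting, not the paper's adaptive algorithm; the graph, gain, and drop probability are made up:

```python
import random

def consensus(x0, neighbors, h=0.2, drop=0.3, steps=200, seed=3):
    """Sampled-data consensus; an agent's control input may be missing
    (zeroed) over a sampling interval with probability `drop`."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(steps):
        u = []
        for i, nbrs in enumerate(neighbors):
            if rng.random() < drop:
                u.append(0.0)                 # control input missing
            else:
                u.append(sum(x[j] - x[i] for j in nbrs))
        x = [xi + h * ui for xi, ui in zip(x, u)]
    return x

# undirected ring of 5 agents (contains a spanning tree)
nbrs = [[1, 4], [0, 2], [1, 3], [2, 4], [3, 0]]
x = consensus([5.0, -1.0, 2.0, 0.0, 3.0], nbrs)
print(max(x) - min(x))
```

Each active update is a convex combination of neighboring states (since h times the degree stays below 1), so the spread of the states shrinks to zero even though some agents skip updates.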
110. An adaptive PID neural network for complex nonlinear system control.
- Author
- Kang, Jun, Meng, Wenjun, Abraham, Ajith, and Liu, Hongbo
- Subjects
- *NONLINEAR systems, *PARTICLE swarm optimization, *PID controllers, *ARTIFICIAL neural networks, *ALGORITHMS, *COMPUTATIONAL complexity, *SIMULATION methods & models
- Abstract
It is usually difficult to solve the control problem of a complex nonlinear system. In this paper, we present an effective control method based on an adaptive PID neural network and the particle swarm optimization (PSO) algorithm. The PSO algorithm is introduced to initialize the neural network, improving the convergence speed and preventing the weights from being trapped in local optima. To adapt to the initially uncertain and varying parameters in the control system, we introduce an improved gradient descent method to adjust the network parameters. The stability of the controller is analyzed using the Lyapunov method. A simulation of a strongly coupled, complex nonlinear multiple-input multiple-output (MIMO) system is presented. Empirical results illustrate that the proposed controller achieves good precision in less time than the other methods considered. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
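The PSO initializer mentioned above is the standard global-best PSO. A generic sketch minimizing a sphere benchmark rather than network weights; all hyperparameters here are illustrative assumptions:

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=5):
    """Global-best particle swarm optimization (generic form)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(dim):
                # inertia + cognitive (personal best) + social (global best)
                vel[i][k] = (w * vel[i][k]
                             + c1 * rng.random() * (pbest[i][k] - pos[i][k])
                             + c2 * rng.random() * (gbest[k] - pos[i][k]))
                pos[i][k] += vel[i][k]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

sphere = lambda p: sum(xi * xi for xi in p)
best, val = pso(sphere, dim=3)
print(round(val, 6))
```

Initializing a network from `gbest` instead of random weights is what gives the reported speedup in convergence.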
111. Path-planning research in radioactive environment based on particle swarm algorithm.
- Author
- Liu, Yong-kuo, Li, Meng-kun, Xie, Chun-li, Peng, Min-jun, and Xie, Fei
- Subjects
- *RADIOACTIVE substances, *PARTICLE swarm optimization, *ALGORITHMS, *NUCLEAR facilities, *SIMULATION methods & models
- Abstract
Nuclear radiation protection is an important concern during the design, maintenance, and decommissioning of nuclear facilities. In recent years, researchers have explored many radiation protection approaches, some of which have been applied in practice, such as visualization of the radiation environment, path-planning methods, and robotics; among these, path planning in radiation environments has become an important protective measure. In this paper, we propose a walking path-planning approach for staff in radiation environments based on the particle swarm algorithm and introduce some of its key technologies. To obtain the optimal walking route and verify the operation of the proposed method, we carried out a simulation experiment in which dose and distance were the decision factors. The experimental results demonstrate the feasibility and effectiveness of path planning in radiation environments based on the particle swarm algorithm. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
112. On the usage of a time-frequency switch mode algorithm to efficiently simulate RF circuits.
- Author
- Oliveira, Jorge F.
- Subjects
- *ALGORITHMS, *DIGITAL signal processing, *PHASE coding, *SIMULATION methods & models, *COMPUTER simulation
- Abstract
Wireless devices integrating more and more features in ever-smaller packages have become an integral part of everyone's daily life. These systems have seen a continuous push to profit from digital signal processing techniques, in which signals are no longer traditional continuous amplitude- and/or phase-modulated carriers but have become pulsed waveforms whose amplitude and/or phase are coded in some on-off digital scheme. This on-off scenario poses new challenges for the circuit-level simulation of such systems. With this in mind, this paper proposes an innovative time-frequency simulation technique based on a switch mode algorithm, conceived specifically for the efficient numerical simulation of RF circuits whose stimuli are intermittently turned on and off for unknown periods of time. At present, commercial tools have no choice but to simulate this category of circuits with full conventional SPICE algorithms. This paper suggests an alternative that can yield significant speedups without perceptible loss of accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
113. Process mining as support to simulation modeling: A hospital-based case study.
- Author
- Tamburis, Oscar and Esposito, Christian
- Subjects
- *PROCESS mining, *DISCRETE event simulation, *SIMULATION methods & models, *TIME perspective, *ALGORITHMS
- Abstract
• Discrete event simulation models for improving healthcare process performance.
• Process mining techniques: gaining insights into the functioning of business processes.
• Timely approaches required for modeling hospital processes from event logs.
• Essence of process mining: a strong connection between logs and process models.
The purpose of this paper is to show how Process Mining techniques can provide a robust premise for building a Discrete Event Simulation (DES) model of a healthcare process. In order to analyze some specific processes of an ophthalmology ward of a large Italian hospital, the ProM6 framework was used, which supports process mining techniques in the form of plug-ins; the plug-ins process data, in the form of an event log, to extract information about the process. The DES model based on this information was run via a commercial tool, Simul8, which allows building sophisticated process models. An algorithm was then deployed to adapt the ProM6 information to the DES model. Subsequently, a conformance analysis was conducted by comparing the original event log and the simulation tool data, taking into account the main case-related model attributes (routing probabilities profile, time perspective, and resources perspective). The paper aims to develop the line of inquiry concerning approaches that link process mining and simulation modeling. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
114. Statistics variable kernel width for maximum correntropy criterion algorithm.
- Author
- Zhou, Shuyong and Zhao, Haiquan
- Subjects
- *ALGORITHMS, *MEAN square algorithms, *STATISTICS, *SYSTEM identification, *IDENTIFICATION, *SIMULATION methods & models
- Abstract
• This paper summarizes several variable kernel width maximum correntropy criterion (MCC) algorithms and discusses their basic principles. A close relationship between these algorithms and the LMS algorithm is analyzed and established.
• A new statistics variable kernel width MCC algorithm (SVKW-MCC) is then proposed on the basis of previous variable kernel width algorithms.
• The SVKW-MCC algorithm addresses the shortcomings of some well-known variable kernel width algorithms: it uses statistical methods to compute the kernel width and to eliminate the abnormal errors caused by impulsive noise.
• The stability and steady-state mean-square performance of the proposed algorithm are analyzed and verified by experiments.
Since the maximum correntropy criterion (MCC) algorithm with a constant kernel width entails a trade-off between convergence rate and steady-state misalignment, various adaptive kernel width MCC algorithms have been derived to solve this problem. However, the superior performance of these algorithms depends mainly on a specific data range, or involves complicated calculations and parameter settings. This paper therefore proposes a statistics variable kernel width MCC (SVKW-MCC) algorithm to overcome these problems. Specifically, the proposed algorithm calculates the mean and variance of the error signal and removes the data that deviate significantly from the mean; the mean and variance are then recalculated from the remaining data, and the new kernel width is computed from them. Simulation results in system identification and echo cancellation scenarios show that the proposed algorithm outperforms existing variable kernel width methods. Moreover, the stability and steady-state mean-square performance of the proposed algorithm are analyzed and verified by experiments. More importantly, the new method involves no extra free parameters and does not depend on the specific application data range, so the proposed algorithm has very good application prospects. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
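For orientation, the fixed-kernel-width MCC baseline that the SVKW variant improves on is a small change to LMS: each update is scaled by a Gaussian kernel of the error, which suppresses impulsive outliers. A sketch on a toy identification problem (the plant, step size, kernel width, and noise model are illustrative assumptions):

```python
import math
import random

def mcc_lms(x, d, m, mu=0.05, sigma=2.0):
    """LMS under the maximum correntropy criterion with a FIXED kernel
    width sigma (the constant-width baseline, not the paper's SVKW-MCC)."""
    w = [0.0] * m
    for n in range(m - 1, len(x)):
        u = [x[n - k] for k in range(m)]               # regressor
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))
        g = math.exp(-e * e / (2.0 * sigma * sigma))   # kernel weight on the update
        w = [wi + mu * g * e * ui for wi, ui in zip(w, u)]
    return w

random.seed(2)
h = [1.0, -0.5, 0.25, 0.1]
x = [random.gauss(0.0, 1.0) for _ in range(5000)]
d = []
for n in range(len(x)):
    y = sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
    noise = random.gauss(0.0, 0.05)
    if random.random() < 0.01:                         # occasional impulsive outlier
        noise += random.choice([-20.0, 20.0])
    d.append(y + noise)
w = mcc_lms(x, d, m=4)
print([round(wi, 2) for wi in w])
```

A large outlier drives the kernel weight toward zero, so the corresponding update is effectively skipped; the trade-off the abstract describes is that a single fixed sigma cannot be ideal both early (fast convergence) and late (low misalignment).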
115. A generic implementation of replica exchange with solute tempering (REST2) algorithm in NAMD for complex biophysical simulations.
- Author
- Jo, Sunhwan and Jiang, Wei
- Subjects
- *MOLECULAR dynamics, *ALGORITHMS, *BIOPHYSICS, *FREE energy (Thermodynamics), *SIMULATION methods & models
- Abstract
Replica Exchange with Solute Tempering (REST2) is a powerful sampling enhancement algorithm for molecular dynamics (MD) in that it needs a significantly smaller number of replicas yet achieves higher sampling efficiency than the standard temperature exchange algorithm. In this paper, we extend the applicability of REST2 to quantitative biophysical simulations through a robust and generic implementation in the highly scalable MD software NAMD. The rescaling procedure for the force field parameters controlling the REST2 "hot region" is implemented in NAMD at the source code level. A user can conveniently select the hot region through VMD and write the selection information into a PDB file. The rescaling keyword/parameter is written in the NAMD Tcl script interface, which enables on-the-fly simulation parameter changes. Our implementation of REST2 lives within a communication-enabled Tcl script built on top of Charm++, so the communication overhead of an exchange attempt is vanishingly small. Such a generic implementation facilitates seamless cooperation between REST2 and other modules of NAMD to provide enhanced sampling for complex biomolecular simulations. Three challenging applications, including native REST2 simulation of a peptide folding–unfolding transition, free energy perturbation/REST2 for the absolute binding affinity of a protein–ligand complex, and umbrella sampling/REST2 Hamiltonian exchange for free energy landscape calculation, were carried out on an IBM Blue Gene/Q supercomputer to demonstrate the efficacy of REST2 based on the present implementation.
Program summary
Program title: REST2-NAMD
Catalogue identifier: AEXX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEXX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 240886
No. of bytes in distributed program, including test data, etc.: 8474342
Distribution format: tar.gz
Programming language: C/C++, Tcl8.5
Computer: Not computer specific
Operating system: Any
Has the code been vectorized or parallelized?: Yes, MPI and/or PAMI parallelized depending on machine system software; ≥ 8192 cores used on IBM Blue Gene/Q
Classification: 3
External routines: NAMD 2.10 (http://www.ks.uiuc.edu/Research/namd/)
Nature of problem: Providing a generic, user-friendly API for input file preparation and replica exchange, with a high exchange attempt frequency and minimal communication overhead.
Solution method: The rescaling procedure for the force field parameters controlling REST2 is implemented in NAMD at the source code level. The hot region is selected through VMD and written into a PDB file; the rescaling keyword/parameter is exposed via the NAMD Tcl script interface, enabling on-the-fly parameter changes. The implementation lives within a communication-enabled Tcl script built on top of Charm++, so the communication overhead of an exchange attempt is vanishingly small.
Running time: 30 min-60 min [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
116. Enhancing performance and scalability of data transfer across sliding grid interfaces for time-accurate unsteady simulations of multistage turbomachinery flows.
- Author
- Ganine, V., Amirante, D., and Hills, N.
- Subjects
- *KNOWLEDGE transfer, *SIMULATION methods & models, *TURBOMACHINES, *FLUID dynamics, *ALGORITHMS
- Abstract
High-fidelity simulations of the flow phenomena around complex geometries for turbomachinery applications require fluid solvers to run on ever-increasing processor counts. For fully unsteady predictions in rotor–stator systems, most CFD codes employ the sliding interface technique. However, the scalability and efficiency of current sliding-grid parallel implementations are significantly constrained by computation and communication imbalances associated with data transfer across discrete non-matching interfaces. To prepare for the challenges of extreme scales, in this paper we redesign the algorithm so that it maintains the scalability of the original CFD code on static grids. In the proposed parallel implementation, the cell containment search and interpolation workloads are balanced by employing a deterministic geometric decomposition on an intermediate "rendezvous" set of processes. Rapidly changing dynamic communication patterns induced by the grids' relative motion are handled with a sparse communication protocol. The scaling behavior and performance of the developed technique are analyzed using realistic test cases on two different computing systems. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
117. Application of the semi-Hertzian method to the prediction of wheel wear in heavy haul freight car.
- Author
- Ding, Junjun, Li, Fu, Huang, Yunhua, Sun, Shulei, and Zhang, Lixia
- Subjects
- *HERTZIAN contact stresses, *FREIGHT cars, *SIMULATION methods & models, *RAIL-trails, *ALGORITHMS, *TRAIL design & construction
- Abstract
In order to simulate the wheel wear behaviour of freight cars, a multi-body dynamical model of the rail vehicle and a wheel wear simulation programme were combined into a wheel wear simulation tool in this paper. Multi-body dynamical models of China's heavy haul freight cars, such as the C80, C80H, C70 and C70H, were built in SIMPACK software, and the track system was modelled on China's Ring-line. For wheel/rail rolling contact, the semi-Hertzian method and the FASTSIM algorithm were applied to solve the normal and tangential problems, respectively. The shapes of the worn wheel profiles in the simulation agree well with field measurements, but the simulated wear rates are larger than those found in the field. This discrepancy is attributed to two factors. The first is the wheel material: the CL60 wheel steel used in China's freight cars is harder than the BS11 wheel steel used in Zobory's wheel/rail wear experiments. The second is the impact of the material's elastic shear deformation on the slip velocity in the contact patch, which increases the wear rate and was considered in the present wear simulation. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
118. On the stochastic modeling of the IAF-PNLMS algorithm for complex and real correlated Gaussian input data.
- Author
- Kuhn, Eduardo Vinicius, de Souza, Francisco das Chagas, Seara, Rui, and Morgan, Dennis R.
- Subjects
- *STOCHASTIC processes, *ALGORITHMS, *GAUSSIAN processes, *LEAST squares, *MATHEMATICAL transformations, *SIMULATION methods & models
- Abstract
This paper presents a stochastic model for the individual-activation-factor proportionate normalized least-mean-square (IAF-PNLMS) adaptive algorithm operating under correlated Gaussian input data. The proposed approach uses the contragredient transformation to obtain an analytical solution for the normalized autocorrelation-like matrices arising in the model development. Model expressions describing the learning curve and the second-order moment of the weight-error vector for the IAF-PNLMS algorithm are derived taking into account the time-varying characteristic of the gain distribution matrix. As a consequence, the obtained model predicts the algorithm behavior very well in both the transient and steady-state phases. Simulation results for different operating scenarios confirm the accuracy of the proposed model (via learning curves) for both complex- and real-valued input data. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
119. Affine projection mixed-norm algorithms for robust filtering.
- Author
- Li, Guoliang, Wang, Gang, Dai, Yaru, Sun, Qi, Yang, Xinyue, and Zhang, Hongbin
- Subjects
- *MATRIX inversion, *ALGORITHMS, *SYSTEM identification, *ADAPTIVE filters, *SIMULATION methods & models
- Abstract
• A novel robust mixed-norm algorithm with affine projection (APRMNA) is proposed.
• The APRMNA based on the generalized maximum correntropy criterion (APRMNA-GMC) is proposed.
• A simplified APRMNA-GMC (S-APRMNA-GMC) is proposed.
In this paper, a novel adaptive filtering algorithm combining the affine projection (AP) method and the robust mixed-norm algorithm (RMNA), called APRMNA, is proposed. The AP method features fast convergence under colored inputs, and the RMNA exhibits stable performance against noise interference. The proposed APRMNA algorithm not only combines the advantages of both AP and RMNA but also utilizes an ℓ2-norm constraint on the weight vector to avoid matrix inversion. Then, applying the generalized maximum correntropy (GMC) criterion to APRMNA, we also develop APRMNA-GMC. Finally, a simplified version of APRMNA-GMC (S-APRMNA-GMC) is derived to reduce the computational complexity. Numerical simulations of system identification show that the proposed algorithms outperform other AP-type algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
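The AP ingredient of APRMNA is the standard affine projection update; for projection order 2 the required matrix inverse is just a closed-form 2x2 solve (at higher orders this inversion is the cost the paper's ℓ2-norm constraint avoids). A plain APA sketch, not APRMNA, with a made-up plant and parameters:

```python
import random

def apa(x, d, m, mu=0.2, delta=1e-3):
    """Affine projection algorithm of order 2 (2x2 inverse in closed form)."""
    w = [0.0] * m
    for n in range(m + 1, len(x)):
        U = [[x[n - j - k] for k in range(m)] for j in range(2)]   # 2 regressors
        e = [d[n - j] - sum(wi * ui for wi, ui in zip(w, U[j])) for j in range(2)]
        # solve (U U^T + delta I) a = e for the 2x2 case
        r00 = sum(a * a for a in U[0]) + delta
        r11 = sum(a * a for a in U[1]) + delta
        r01 = sum(a * b for a, b in zip(U[0], U[1]))
        det = r00 * r11 - r01 * r01
        a0 = (r11 * e[0] - r01 * e[1]) / det
        a1 = (r00 * e[1] - r01 * e[0]) / det
        w = [wi + mu * (a0 * u0 + a1 * u1)
             for wi, u0, u1 in zip(w, U[0], U[1])]
    return w

random.seed(4)
h = [0.7, -0.4, 0.2, 0.1]
white = [random.gauss(0.0, 1.0) for _ in range(3000)]
x = [0.0] * len(white)
for n in range(len(white)):                   # AR(1) colored input
    x[n] = 0.9 * (x[n - 1] if n else 0.0) + white[n]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w = apa(x, d, m=4)
print([round(wi, 3) for wi in w])
```

Projecting onto the two most recent regressors is what preserves the fast convergence under colored inputs that the abstract credits to the AP method.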
120. A fractional-order adaptive regularization primal–dual algorithm for image denoising.
- Author
- Tian, Dan, Xue, Dingyu, and Wang, Dianhui
- Subjects
- *ADAPTIVE computing systems, *MATHEMATICAL regularization, *ALGORITHMS, *IMAGE denoising, *SIMULATION methods & models, *PARAMETERS (Statistics)
- Abstract
This paper develops a fractional-order model and a primal–dual algorithm for image denoising, in which the regularization parameter is adjusted adaptively at each iteration according to the Morozov discrepancy principle, ensuring that the denoised image remains in a specific set. In light of saddle-point theory, the convergence of the proposed algorithm is guaranteed. Simulations with comparisons demonstrate the effectiveness of the proposed algorithm for image denoising. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
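A primal-dual scheme of the kind described can be sketched with the standard Chambolle-Pock iteration for 1-D total-variation denoising. This uses a fixed regularization parameter, i.e., without the paper's fractional-order model or Morozov-based adaptation; the signal, parameters, and noise level are illustrative:

```python
import random

def grad(u):                       # forward difference, length n-1
    return [u[i + 1] - u[i] for i in range(len(u) - 1)]

def grad_t(p, n):                  # adjoint of grad (negative divergence)
    return [(p[i - 1] if i > 0 else 0.0) - (p[i] if i < n - 1 else 0.0)
            for i in range(n)]

def tv_denoise(f, lam=5.0, tau=0.25, sigma=0.9, iters=300):
    """Chambolle-Pock primal-dual iteration for
    min_u |grad u|_1 + (lam/2) ||u - f||^2."""
    n = len(f)
    u, ubar = list(f), list(f)
    p = [0.0] * (n - 1)
    for _ in range(iters):
        # dual ascent + projection onto the unit ball [-1, 1]
        p = [max(-1.0, min(1.0, pi + sigma * gi))
             for pi, gi in zip(p, grad(ubar))]
        # primal descent + exact prox of the quadratic fidelity term
        u_new = [(ui - tau * ki + tau * lam * fi) / (1.0 + tau * lam)
                 for ui, ki, fi in zip(u, grad_t(p, n), f)]
        ubar = [2.0 * un - uo for un, uo in zip(u_new, u)]
        u = u_new
    return u

random.seed(6)
clean = [0.0] * 50 + [1.0] * 50
noisy = [c + random.gauss(0.0, 0.3) for c in clean]
den = tv_denoise(noisy)
mse = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) / len(a)
print(round(mse(noisy, clean), 4), round(mse(den, clean), 4))
```

The step sizes satisfy the usual condition sigma*tau*||grad||^2 < 1, which is what the saddle-point convergence theory mentioned in the abstract requires.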
121. Statistical significance of episodes with general partial orders.
- Author
- Achar, Avinash and Sastry, P.S.
- Subjects
- *STATISTICAL significance, *STREAMING video & television, *ALGORITHMS, *STATISTICAL hypothesis testing, *DATA mining, *SIMULATION methods & models
- Abstract
Frequent episode discovery is one of the methods used for temporal pattern discovery in sequential data. An episode is a partially ordered set of nodes, each associated with an event type. For more than a decade, algorithms existed for episode discovery only when the associated partial order is total (serial episodes) or trivial (parallel episodes). Recently, the literature has seen algorithms for discovering episodes with general partial orders. In frequent pattern mining, the threshold beyond which a pattern is inferred to be interesting is typically user-defined and arbitrary. One way of addressing this issue in the pattern mining literature has been the framework of statistical hypothesis testing. This paper presents a method for assessing the statistical significance of episode patterns with general partial orders. A method is proposed to calculate thresholds on the non-overlapped frequency beyond which an episode pattern would be inferred to be statistically significant. The method is first explained for the case of injective episodes with general partial orders; an injective episode is one in which event types are not allowed to repeat. It is then shown how the method can be extended to the class of all episodes. The significance thresholds for general partial order episodes proposed here also generalize existing significance results for serial episodes. Simulation studies illustrate the usefulness of these statistical thresholds in pruning uninteresting patterns. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
122. A PISO-like algorithm to simulate superfluid helium flow with the two-fluid model.
- Author
- Soulaine, Cyprien, Quintard, Michel, Allain, Hervé, Baudouy, Bertrand, and Van Weelderen, Rob
- Subjects
- *PARTICLE swarm optimization, *ALGORITHMS, *SIMULATION methods & models, *SUPERFLUIDITY, *FLUID dynamics
- Abstract
This paper presents a segregated algorithm to numerically solve the superfluid helium (He II) equations using the two-fluid model. In order to validate the resulting code and illustrate its potential, different simulations have been performed. First, the flow through a capillary filled with He II with a heated area on one side is simulated, and the results are compared to analytical solutions in both the Landau and Gorter–Mellink flow regimes. Then, transient heat transfer in a forced flow of He II is investigated. Finally, some two-dimensional simulations in a porous medium model are carried out. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
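The segregated predictor/corrector structure behind a PISO-type scheme can be sketched in a toy setting. The snippet below is emphatically not the He II two-fluid solver: it is a minimal 1D periodic analogue in which the momentum predictor is an explicit diffusion step and the pressure correction is a spectral projection onto the divergence-free space (for a 1D periodic field, the constant fields). All names and parameters are illustrative.

```python
import numpy as np

# Toy PISO-style step for a 1D periodic field. A PISO step couples a
# momentum predictor with repeated pressure corrections; here the
# correction is a spectral projection onto the divergence-free space,
# which in 1D periodic geometry is the space of constant fields, so a
# single corrector pass is already exact.
def piso_like_step(u, dt, nu, dx):
    # 1) momentum predictor: explicit diffusion of the velocity
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    u_star = u + dt * nu * lap
    # 2) pressure corrector: remove the divergent part of u*
    u_hat = np.fft.fft(u_star)
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
    u_hat[k != 0.0] = 0.0  # keep only the mean (the div-free part in 1D)
    return np.real(np.fft.ifft(u_hat))
```

In a real two-fluid solver the corrector is solved as a pressure Poisson equation on the discrete mesh and repeated within each time step; the degenerate 1D projection above only shows where that stage sits in the loop.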
123. Introduction to an optimization algorithm based on the chemical reactions.
- Author
-
Astudillo, L., Melin, P., and Castillo, O.
- Subjects
- *
MATHEMATICAL optimization , *ALGORITHMS , *CHEMICAL reactions , *SIMULATION methods & models , *MATHEMATICAL functions , *MATHEMATICAL models - Abstract
In this paper, a novel optimization method inspired by a paradigm from nature is introduced. Chemical reactions are used as a paradigm to propose an algorithm that can be considered as a general purpose optimization technique. The proposed algorithm is described in detail and then a set of typical complex benchmark functions is used to evaluate the performance of the algorithm. Simulation results show that the proposed optimization algorithm can outperform other methods in a set of benchmark functions. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
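The paradigm this abstract describes can be caricatured as follows: candidate "molecules" undergo random perturbations ("reactions"), and a product with lower potential energy (objective value) replaces its reactant. The sketch below is a plain stochastic search under that metaphor, not the authors' exact reaction operators; the cooling constant and bounds are illustrative.

```python
import random

# Reaction-inspired stochastic search (illustrative sketch).
def optimize(f, dim, iters=2000, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    fx = f(x)
    step = 1.0
    for _ in range(iters):
        y = [xi + rng.gauss(0.0, step) for xi in x]
        fy = f(y)
        if fy < fx:        # exothermic reaction: keep the lower-energy product
            x, fx = y, fy
        else:
            step *= 0.999  # cooling: gradually smaller perturbations
    return x, fx
```

On a benchmark such as the sphere function `sum(t * t for t in v)`, this search settles near the origin, which is the kind of comparison the paper's simulation study performs with its full operator set.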
124. A computing resources prediction approach based on ensemble learning for complex system simulation in cloud environment.
- Author
-
Wang, Shuai, Zhu, Feng, Yao, Yiping, Tang, Wenjie, Xiao, Yuhao, and Xiong, Siqi
- Subjects
- *
FORECASTING , *FUZZY neural networks , *SIMULATION methods & models , *NAIVE Bayes classification , *RANDOM forest algorithms , *ALGORITHMS , *K-nearest neighbor classification - Abstract
Cloud computing provides a new infrastructure for the research of complex system simulation (CSS). However, insufficient computing resource allocation results in lower performance of a CSS application, while excessive computing resource allocation increases simulation communication overhead and synchronous computation. Therefore, accurate computing resource prediction is important to achieve optimal scheduling for CSS applications in the cloud environment. In this paper, a computing resource prediction approach based on ensemble learning is proposed, which includes a cloud computing resource prediction framework and an intelligent ensemble algorithm. The framework, with a three-level architecture (simulation as a service, cloud computing resource predictor, and cloud computing resource pool), can provide computing resources to meet the demands of simulation applications. The intelligent ensemble algorithm uses an Accuracy and Relative Error-based Pruning algorithm to ensure an effective ensemble of base models (support vector machine, decision tree, and k-nearest neighbor). To improve the performance of the intelligent ensemble algorithm, a Feature Capability-based forward search Feature Selection algorithm is introduced to reduce redundancy between features. Experiments demonstrate that the intelligent ensemble algorithm achieves accuracy higher by 4%-20% than existing resource prediction models such as the Regressive Ensemble Approach for Prediction, Bayesian, Linear Regression, Random Forest, and Fuzzy Neural Network models. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
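The pruning step in this abstract can be reduced to its essence: drop base models whose mean relative error on a validation set exceeds a threshold, then average the survivors' predictions. The sketch below is a simplification of the paper's Accuracy and Relative Error-based Pruning; the threshold value, the callable model interface, and the fallback rule are assumptions.

```python
# Simplified accuracy/relative-error ensemble pruning (illustrative).
def prune_and_predict(models, X_val, y_val, x, max_rel_err=0.2):
    kept = []
    for m in models:
        errs = [abs(m(xi) - yi) / max(abs(yi), 1e-9)
                for xi, yi in zip(X_val, y_val)]
        if sum(errs) / len(errs) <= max_rel_err:
            kept.append(m)
    if not kept:
        kept = models  # fall back to the full ensemble
    return sum(m(x) for m in kept) / len(kept)
```

With three toy models of which one is badly miscalibrated, the bad model is pruned on validation data and only the accurate pair contributes to the averaged prediction.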
125. Investigations on the Application of the Downhill-Simplex-Algorithm to the Inverse Determination of Material Model Parameters for FE-Machining Simulations.
- Author
-
Hardt, M., Schraknepper, D., and Bergs, T.
- Subjects
- *
SIMULATION methods & models , *ALGORITHMS , *MATERIALS - Abstract
• The Downhill-Simplex-Algorithm is applicable to the inverse material parameter determination
• Selection of the optimization parameters influences the determined results
• The adaptive Downhill-Simplex-Algorithm reduces the computational effort
• Uniqueness of the Johnson-Cook material model parameters is not given for machining simulations

Modeling the machining process by means of simulation techniques has become more and more popular within the last decades, a trend that can be attributed to increasing computational performance. To accurately model the machining process, different input models are required. Among them, the material model has a crucial impact on the quality of the simulated results. To determine the underlying material model parameters, an inverse procedure has been established, in which the material model parameters are adjusted within a chip formation simulation until a reasonable match between simulations and experiments is achieved. However, this inverse determination requires high computational effort and is not robust. This paper presents a novel approach to determine material model parameters inversely from FE-machining simulations by means of the Downhill-Simplex-Algorithm. An effect study was conducted to investigate the influence of the initial simplex, the boundary conditions, and the underlying parameters of the Downhill-Simplex-Algorithm. Further, the validated procedure was extended to consider multiple cutting conditions, and an adaptive step size was implemented into the algorithm. The results of the procedure were validated by re-identifying process observables from the target simulations, with the aim of re-identifying the material model parameters of the target simulations. Based on the algorithm, a robust and systematic method has been developed to determine material model parameters inversely from the machining process. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
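The core of the approach is the classic Downhill-Simplex (Nelder-Mead) search. The compact textbook version below implements reflection, expansion, and contraction (the shrink step is omitted for brevity); in the paper's setting, `f` would wrap a chip formation simulation and return the mismatch to target observables, whereas here any callable works. All names and values are illustrative.

```python
# Compact textbook Downhill-Simplex (Nelder-Mead) sketch.
def nelder_mead(f, x0, step=0.5, iters=400):
    n = len(x0)
    # initial simplex: x0 plus one point perturbed along each axis
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)]
        for i in range(n)
    ]
    for _ in range(iters):
        simplex.sort(key=f)
        worst = simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [2.0 * centroid[j] - worst[j] for j in range(n)]
        if f(refl) < f(simplex[0]):
            # reflection beat the best point: try expanding further
            exp = [3.0 * centroid[j] - 2.0 * worst[j] for j in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            # contract the worst point toward the centroid
            simplex[-1] = [(centroid[j] + worst[j]) / 2.0 for j in range(n)]
    simplex.sort(key=f)
    return simplex[0]
```

With a stand-in quadratic "mismatch" such as `lambda p: (p[0] - 1.0)**2 + (p[1] + 2.0)**2`, the simplex converges near (1, -2); the paper's effect study corresponds to varying `x0`, `step`, and the accept/contract coefficients.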
126. Frequency-adaptive control of a three-phase single-stage grid-connected photovoltaic system under grid voltage sags.
- Author
-
Rey-Boué, Alexis B., Guerrero-Rodríguez, N.F., Stöckl, Johannes, and Strasser, Thomas I.
- Subjects
- *
ADAPTIVE control systems , *PHOTOVOLTAIC power generation , *REACTIVE power , *VECTOR control , *ALGORITHMS , *SIMULATION methods & models - Abstract
• Frequency-adaptive vector control of a grid-connected PV system.
• MSOGI-FLL synchronization algorithm.
• Low-Voltage Ride-Through (LVRT) capability with improved limitation of the amplitude of the three-phase inverter currents.
• Controller Hardware-in-the-Loop (CHIL) simulation technique.

The low-voltage ride-through service is carried out in this paper according to the voltage profile described by the IEC 61400-21 European standard when short-duration voltage sags occur, and some instantaneous reactive power is delivered to the grid in accordance with the Spanish grid code; the mandatory limitation of the amplitude of the three-phase inverter currents to their nominal value is carried out with a novel control strategy, in which a certain amount of instantaneous constant active power can also be delivered to the grid during small or moderate voltage sags. A multiple second-order generalized integrator frequency-locked loop (MSOGI-FLL) synchronization algorithm is employed to estimate the system frequency without harmonic distortions, as well as to output the positive- and negative-sequence αβ quantities of the three-phase grid voltages under balanced and unbalanced voltage sags in a frequency-adaptive scheme. The current control is carried out in the stationary reference frame, which guarantees the cancellation of harmonic distortions in the utility grid currents using a harmonic compensation structure, and a constant active power control is implemented to protect the DC-link capacitor from thermal stresses by avoiding large harmonic distortions at twice the fundamental frequency in the DC-link voltage.
A case study of a three-phase single-stage grid-connected PV system with a maximum apparent power of about 500 kVA is tested, first with several MATLAB/SIMULINK simulations and then with experiments using the Controller hardware-in-the-loop (CHIL) simulation technique, for several types of voltage sags, in order to perform the final validation of the control algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
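The building block of the MSOGI-FLL is a single second-order generalized integrator (SOGI). The sketch below omits the FLL's frequency adaptation and the multiple harmonic modules, so the grid frequency `w` is assumed known and fixed; it uses a forward-Euler discretization, and the gain `k` and all numeric values are illustrative.

```python
import math

# Single SOGI core (illustrative; the full MSOGI-FLL adapts w online
# and runs one such module per tracked harmonic).
def sogi(samples, w, dt, k=math.sqrt(2.0)):
    x1, x2, out = 0.0, 0.0, []
    for v in samples:
        dx1 = w * (k * (v - x1) - x2)  # in-phase state: tracks v at w
        dx2 = w * x1                   # quadrature state: lags x1 by 90 deg
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
        out.append((x1, x2))
    return out
```

Fed a 50 Hz sinusoid, `x1` settles onto the input while `x2` supplies the 90°-lagged quadrature signal that positive-/negative-sequence separation of the αβ voltages relies on.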