Search Results (765 results)
2. Application of Computer Simulation to the Anonymization of Personal Data: Synthesis-Based Anonymization Model and Algorithm.
- Author
-
Borisov, A. V., Bosov, A. V., and Ivanov, A. V.
- Subjects
PERSONALLY identifiable information, APPLICATION software, COMPUTER simulation, ALGORITHMS, ELECTRONIC data processing, STATISTICAL sampling - Abstract
This paper describes the second part of our study devoted to automated anonymization of personal data. The overview and analysis of research prospects are supplemented with a practical result. An anonymization model is proposed, which reduces anonymization of personal data to manipulation of samples of random elements of different types. The key idea of our approach to anonymization of personal data with preservation of their usefulness is the use of the synthesis method, i.e., the complete replacement of all non-anonymized data with synthetic values. In the proposed model, a set of element types is selected, for which corresponding synthesis templates are proposed. The set of templates constitutes a synthesis-based anonymization algorithm. Technically, each template is based on a well-known statistical tool: frequency estimates of probabilities, Parzen–Rosenblatt kernel density estimates, statistical means, and covariances. The proposed approach is illustrated by a simple example from civil aviation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
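The Parzen–Rosenblatt template mentioned in this abstract can be sketched as a smoothed bootstrap: sampling from a Gaussian kernel density estimate amounts to picking a real observation at random and perturbing it with kernel-width noise. The function below is an illustrative stand-in, not the authors' template; the Silverman bandwidth rule and the toy `ages` column are assumptions.

```python
import random
import statistics

def synthesize_numeric(values, n, bandwidth=None, rng=None):
    """Draw n synthetic values from a Parzen-Rosenblatt (Gaussian KDE)
    estimate of the empirical distribution: pick a real observation at
    random, then perturb it with Gaussian noise of width `bandwidth`.
    This is exactly sampling from the KDE (a smoothed bootstrap)."""
    rng = rng or random.Random()
    if bandwidth is None:
        # Silverman's rule-of-thumb bandwidth for a Gaussian kernel.
        sd = statistics.stdev(values)
        bandwidth = 1.06 * sd * len(values) ** (-1 / 5)
    return [rng.choice(values) + rng.gauss(0.0, bandwidth) for _ in range(n)]

# Hypothetical non-anonymized column to be replaced by synthetic values.
ages = [23, 25, 31, 34, 35, 41, 44, 52, 58, 63]
synthetic = synthesize_numeric(ages, 1000, rng=random.Random(0))
```

The synthetic sample preserves distributional statistics (mean, spread) while containing no original record.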
3. Deviation quantification of the intersecting curve weld seam based on non-ideal models.
- Author
-
Liu, Yan, Shi, Lei, and Tian, Xincheng
- Subjects
PIPELINE welding, ALGORITHMS, COMPUTER simulation, WELDING - Abstract
When the deviation between actual pipelines and their ideal models cannot be neglected, the intersecting-curve weld seam must be studied on the basis of non-ideal models. This paper introduces a novel method to quantify the deviation of the intersecting-curve weld seam based on non-ideal models. Weld location is a technique for obtaining the actual position of the weld seam, and it can be used to obtain the locations of key points on the intersecting curve. Combining the weld-location information with the inherent characteristics of intersecting curves, this paper analyzes experimental results of actual intersecting-curve welding; this work lays the foundation for the proposed algorithm. First, building on previous studies, the paper introduces a model for intersecting pipelines that covers most ways of joining. Second, based on this model, the factors that may cause the theoretical intersecting curve to deviate from the actual intersecting curve are analyzed. In general, the sources of deviation may be connected or coupled. To address this, the paper models each source of deviation, discusses the relationships among them, and then uses the weld-location information to quantify the main sources of deviation one by one, in particular the ovality of the main pipe. Finally, the correctness and flexibility of the algorithm are verified by MATLAB simulation. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
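For the ideal (pre-deviation) case this abstract builds on, the theoretical intersecting curve has a simple closed form. The sketch below is a hedged illustration for a right-angle branch-to-main joint only (an assumption; the paper's model covers more general joints and then quantifies deviations such as ovality from this ideal curve).

```python
import math

def ideal_intersection_curve(R, r, n=360):
    """Ideal weld-seam curve where a branch pipe (radius r, axis along z)
    meets a main pipe (radius R, axis along x) at a right angle,
    parametrized by the angle theta around the branch axis."""
    assert r <= R, "branch must not be wider than the main pipe"
    points = []
    for k in range(n):
        theta = 2 * math.pi * k / n
        x = r * math.cos(theta)
        y = r * math.sin(theta)
        # The point must also lie on the main cylinder y^2 + z^2 = R^2.
        z = math.sqrt(R * R - y * y)
        points.append((x, y, z))
    return points

curve = ideal_intersection_curve(R=100.0, r=40.0)
```

Every sampled point lies on both cylinder surfaces, which is the defining property of the intersecting curve.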
4. Sample complexity of variance-reduced policy gradient: weaker assumptions and lower bounds.
- Author
-
Paczolay, Gabor, Papini, Matteo, Metelli, Alberto Maria, Harmati, Istvan, and Restelli, Marcello
- Subjects
REINFORCEMENT learning, COMPUTER simulation, ALGORITHMS - Abstract
Several variance-reduced versions of REINFORCE based on importance sampling achieve an improved O(ϵ⁻³) sample complexity to find an ϵ-stationary point, under an unrealistic assumption on the variance of the importance weights. In this paper, we propose the Defensive Policy Gradient (DEF-PG) algorithm, based on defensive importance sampling, achieving the same result without any assumption on the variance of the importance weights. We also show that this is not improvable by establishing a matching Ω(ϵ⁻³) lower bound, and that REINFORCE with its O(ϵ⁻⁴) sample complexity is actually optimal under weaker assumptions on the policy class. Numerical simulations show promising results for the proposed technique compared to similar algorithms based on vanilla importance sampling. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
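The defensive importance sampling idea behind DEF-PG can be seen in one line: mixing a fraction α of the target density into the denominator bounds the weight by 1/α, so no assumption on the weight variance is needed. The Gaussian densities below are illustrative assumptions, not the paper's policies.

```python
import math

def is_weight(p, q, x):
    """Vanilla importance weight p(x)/q(x): can blow up wherever the
    behavior density q is small relative to the target density p."""
    return p(x) / q(x)

def defensive_is_weight(p, q, x, alpha=0.2):
    """Defensive importance weight p(x) / (alpha*p(x) + (1-alpha)*q(x)):
    bounded above by 1/alpha no matter how mismatched p and q are."""
    return p(x) / (alpha * p(x) + (1 - alpha) * q(x))

# Two unit-variance Gaussian densities with distant means (illustrative).
def p(x):
    return math.exp(-(x - 4.0) ** 2 / 2) / math.sqrt(2 * math.pi)

def q(x):
    return math.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)

vanilla = is_weight(p, q, 4.0)            # e^8, already huge
defensive = defensive_is_weight(p, q, 4.0)  # capped below 1/alpha = 5
```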
5. Study on centroid type-reduction of general type-2 fuzzy logic systems with sensible beginning weighted enhanced Karnik–Mendel algorithms.
- Author
-
Chen, Yang
- Subjects
FUZZY logic, FUZZY systems, ALGORITHMS, NUMERICAL integration, CENTROID, COMPUTER simulation - Abstract
General type-2 fuzzy logic systems have received wide attention in current research, and type-reduction is their kernel module. This paper interprets the initialization of the Karnik–Mendel (KM) algorithms. Based on a well-known numerical integration technique, weighting approaches for the enhanced Karnik–Mendel (EKM) algorithms are put forward. Then, the sensible beginning weighted enhanced Karnik–Mendel (SBWEKM) algorithms are put forward to perform the centroid type-reduction. Compared with the EKM, WEKM, and SBEKM algorithms, this approach improves both the absolute errors and the convergence speeds, as shown in four computer simulation experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
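The centroid type-reduction that KM/EKM-style algorithms compute can be sketched by brute force: the left and right centroid endpoints of an interval type-2 set are attained at a switch point, which the KM iterations locate efficiently and the code below simply enumerates. The membership grades are made-up illustrative values, and this is the interval (not general) type-2 case.

```python
def centroid_endpoints(x, lower, upper):
    """Brute-force the switch points that KM/EKM iterations converge to.
    x: sorted domain points; lower/upper: lower and upper membership
    grades. The left endpoint uses upper grades up to the switch and
    lower grades after it; the right endpoint is the mirror image."""
    n = len(x)

    def weighted(k, left):
        num = den = 0.0
        for i in range(n):
            if left:
                w = upper[i] if i <= k else lower[i]
            else:
                w = lower[i] if i <= k else upper[i]
            num += x[i] * w
            den += w
        return num / den

    c_l = min(weighted(k, True) for k in range(n))
    c_r = max(weighted(k, False) for k in range(n))
    return c_l, c_r

x = [1.0, 2.0, 3.0, 4.0, 5.0]
lower = [0.2, 0.4, 0.9, 0.4, 0.2]   # lower membership function
upper = [0.6, 0.8, 1.0, 0.8, 0.6]   # upper membership function
c_l, c_r = centroid_endpoints(x, lower, upper)
```

For this symmetric set the two endpoints straddle the center x = 3 symmetrically.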
6. System simulation of computer image recognition technology application by using improved neural network algorithm.
- Author
-
Wang, Xin
- Subjects
COMPUTER simulation, OPTIMIZATION algorithms, COMPUTER systems, ALGORITHMS, SIMULATION methods & models, IMAGE recognition (Computer vision) - Abstract
Digital image technology is penetrating various fields of everyday life; it is mature and can effectively store and transmit data. Research continues on image recognition, the core of this technology: algorithms based on computer technology obtain target images for different scene categories, completely replacing traditional classification. Traditional identification technology has limitations that cause problems in practical use, whereas the neural network approach does not depend on prior knowledge and can carry out complex feature-space division. In this paper, an image recognition computer system is established by introducing an improved neural network algorithm. The algorithm is designed and tested, and the results show it has a lower image recognition error rate. This result is then applied to a real scene for testing. The test results show that the improved neural network optimization algorithm expresses the extracted features more accurately in image processing and is more effective than the traditional algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
7. Majority networks and local consensus algorithm.
- Author
-
Goles, Eric, Medina, Pablo, and Santiváñez, Julio
- Subjects
DISTRIBUTED algorithms, ALGORITHMS, COMPUTER simulation - Abstract
In this paper, we study consensus behavior based on the local application of the majority consensus algorithm (a generalization of the majority rule) over four-connected bi-dimensional networks. In this context, we characterize theoretically every four-vicinity network in its capacity to reach consensus (every individual holding the same opinion) for any initial configuration of binary opinions. Theoretically, we determine all regular grids with four neighbors in which consensus is reached and in which it is not. In addition, in those instances in which consensus is not reached, we characterize statistically the proportion of configurations that reach spurious fixed points from an ensemble of random initial configurations. Using numerical simulations, we also analyze two observables of the system to characterize the algorithm: (1) the quality of the achieved consensus, that is, whether it respects the initial majority of the network; and (2) the consensus time, measured as the average number of steps to reach convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
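The local majority rule this abstract generalizes is easy to sketch on a periodic four-neighbor grid; the second example shows a "spurious fixed point" of the kind the authors count, where a cluster survives without global consensus. (This is the plain majority rule, not the paper's generalized majority consensus algorithm.)

```python
def majority_step(grid):
    """One synchronous update of the majority rule on a periodic
    four-neighbor (von Neumann) grid: each cell adopts the majority
    opinion among itself and its four neighbors (5 voters, no ties)."""
    n, m = len(grid), len(grid[0])
    out = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = (grid[i][j]
                 + grid[(i - 1) % n][j] + grid[(i + 1) % n][j]
                 + grid[i][(j - 1) % m] + grid[i][(j + 1) % m])
            out[i][j] = 1 if s >= 3 else 0
    return out

def run_to_fixed_point(grid, max_steps=100):
    """Iterate until the configuration stops changing."""
    for _ in range(max_steps):
        nxt = majority_step(grid)
        if nxt == grid:
            return grid
        grid = nxt
    return grid

lone = [[0] * 4 for _ in range(4)]
lone[1][1] = 1                          # one dissenting cell: erased
consensus = run_to_fixed_point(lone)

block = [[0] * 4 for _ in range(4)]
for i in (1, 2):
    for j in (1, 2):
        block[i][j] = 1                 # a 2x2 cluster: spurious fixed point
spurious = run_to_fixed_point(block)
```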
8. Few-detail image encryption algorithm based on diffusion and confusion using Henon and Baker chaotic maps.
- Author
-
Naeem, Ensherah A., Joshi, Anand B., Kumar, Dhanesh, and El-Samie, Fathi E. Abd
- Subjects
IMAGE encryption, ALGORITHMS, STATISTICS, STATISTICAL correlation, COMPUTER simulation - Abstract
This paper presents a solution for the security of few-detail images in real-time applications over open or unsecured networks, based on diffusion and confusion operations. Both operations are performed using Henon and Baker chaotic maps; XOR and permutation operations provide the diffusion and confusion in the algorithm. In the proposed algorithm, a high-detail image is used as a key. Computer simulation results and security analysis are given to establish the validity and strength of the proposed algorithm. The security analysis comprises key-space, cropping-attack, noise-attack, and differential-attack analyses. Statistical analyses based on entropy, histogram, and correlation coefficient estimation are also given to check the strength of the presented algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
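The diffusion half of such a scheme can be sketched as XOR with a keystream derived from the Henon map. The quantization step and parameters below are illustrative assumptions; the paper additionally uses a Baker-map confusion (permutation) stage and a high-detail key image, neither of which is shown here.

```python
def henon_keystream(n, x=0.1, y=0.3, a=1.4, b=0.3):
    """Byte keystream from the Henon map x' = 1 - a*x^2 + y, y' = b*x.
    Each iterate is quantized to one byte (an illustrative key
    schedule, not the paper's exact construction)."""
    stream = []
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        stream.append(int(abs(x) * 10 ** 6) % 256)
    return bytes(stream)

def xor_cipher(data: bytes, key_x=0.1, key_y=0.3) -> bytes:
    """Diffusion by XOR with the chaotic keystream. Applying the same
    operation twice with the same initial conditions (the key)
    recovers the plaintext, since XOR is its own inverse."""
    ks = henon_keystream(len(data), key_x, key_y)
    return bytes(d ^ k for d, k in zip(data, ks))

plain = bytes(range(32))
cipher = xor_cipher(plain)
recovered = xor_cipher(cipher)
```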
9. Study on algorithms of low SNR inversion of T spectrum in NMR.
- Author
-
Lin, Feng, Wang, Zhu-Wen, Li, Jing-Ye, Zhang, Xue-Ang, and Jiang, Yu-Long
- Subjects
ALGORITHMS, SIGNAL-to-noise ratio, COMPUTER simulation, NUCLEAR magnetic resonance, NUMERICAL analysis - Abstract
The method of selecting the regularization factor determines the stability and accuracy of the regularization method. A formula for the regularization factor is proposed by analyzing the relationship between the improved SVD and the regularization method. Both the improved SVD algorithm and the regularization method can handle low SNR: the regularization method is better when the SNR is below 30, and the improved SVD is better when the SNR is above 30. The regularization method with the regularization factor proposed in this paper can be better applied to low SNR (5
- Published
- 2011
- Full Text
- View/download PDF
10. Coevolutionary dynamics of a variant of the cyclic Lotka–Volterra model with three-agent interactions.
- Author
-
Palombi, Filippo, Ferriani, Stefano, and Toti, Simona
- Subjects
HOPF bifurcations, ALGORITHMS, FORECASTING, STATISTICAL physics, COMPUTER simulation - Abstract
We study a variant of the cyclic Lotka–Volterra model with three-agent interactions. Inspired by a multiplayer variation of the Rock–Paper–Scissors game, the model describes an ideal ecosystem in which cyclic competition among three species develops through cooperative predation. Its rate equations in a well-mixed environment display a degenerate Hopf bifurcation, occurring as reactions involving two predators plus one prey have the same rate as reactions involving two prey plus one predator. We estimate the magnitude of the stochastic noise at the bifurcation point, where finite size effects turn neutrally stable orbits into erratically diverging trajectories. In particular, we compare analytic predictions for the extinction probability, derived in the Fokker–Planck approximation, with numerical simulations based on the Gillespie stochastic algorithm. We then extend the analysis of the phase portrait to heterogeneous rates. In a well-mixed environment, we observe a continuum of degenerate Hopf bifurcations, generalizing the above one. Neutral stability ensues from a complex equilibrium between different reactions. Remarkably, on a two-dimensional lattice, all bifurcations disappear as a consequence of the spatial locality of the interactions. In the second part of the paper, we investigate the effects of mobility in a lattice metapopulation model with patches hosting several agents. We find that strategies propagate along the arms of rotating spirals, as they usually do in models of cyclic dominance. We observe propagation instabilities in the regime of large wavelengths. We also examine three-agent interactions inducing nonlinear diffusion. "Three at play. That'll be the day!" (a child in Wings of Desire [W. Wenders, 1987]) [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
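The Gillespie stochastic algorithm the authors compare against Fokker–Planck predictions can be sketched on the plain pairwise rock–paper–scissors cycle. This is an illustrative baseline only: the paper's model adds three-agent cooperative predation reactions, which are not reproduced here.

```python
import random

def gillespie_rps(counts, rate=1.0, t_max=5.0, rng=None):
    """Gillespie stochastic simulation of the pairwise cyclic reactions
    A+B -> 2A, B+C -> 2B, C+A -> 2C. counts = [nA, nB, nC]; each
    reaction moves one individual from prey to predator, so the total
    population is conserved."""
    rng = rng or random.Random()
    t = 0.0
    while t < t_max:
        # Propensity of reaction i: species i preying on species i+1.
        props = [rate * counts[i] * counts[(i + 1) % 3] for i in range(3)]
        total = sum(props)
        if total == 0.0:                  # absorbing state: cycle broken
            break
        t += rng.expovariate(total)       # exponential waiting time
        r = rng.uniform(0.0, total)       # choose the firing reaction
        for i in range(3):
            if r < props[i]:
                counts[i] += 1            # predator i reproduces
                counts[(i + 1) % 3] -= 1  # its prey is consumed
                break
            r -= props[i]
    return counts

final = gillespie_rps([50, 50, 50], rng=random.Random(1))
```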
11. Linear regression for uplift modeling.
- Author
-
Rudaś, Krzysztof and Jaroszewicz, Szymon
- Subjects
REGRESSION analysis, STATISTICAL models, MACHINE learning, COMPUTER simulation, MARKETING, ALGORITHMS - Abstract
The purpose of statistical modeling is to select targets for some action, such as a medical treatment or a marketing campaign. Unfortunately, classical machine learning algorithms are not well suited to this task since they predict the results after the action, and not its causal impact. The answer to this problem is uplift modeling, which, in addition to the usual training set containing objects on which the action was taken, uses an additional control group of objects not subjected to it. The predicted true effect of the action on a given individual is modeled as the difference between responses in both groups. This paper analyzes two uplift modeling approaches to linear regression, one based on the use of two separate models and the other based on target variable transformation. Adapting the second estimator to the problem of regression is one of the contributions of the paper. We identify the situations when each model performs best and, contrary to several claims in the literature, show that the double model approach has favorable theoretical properties and often performs well in practice. Finally, based on our analysis we propose a third model which combines the benefits of both approaches and seems to be the model of choice for uplift linear regression. Experimental analysis confirms our theoretical results on both simulated and real data, clearly demonstrating good performance of the double model and the advantages of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
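The double-model approach the paper defends is the simplest uplift estimator to sketch: fit separate regressions on the treatment and control groups and subtract the predictions. A minimal one-dimensional version with made-up noiseless data:

```python
def ols_1d(xs, ys):
    """Closed-form ordinary least squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def uplift_two_models(x_t, y_t, x_c, y_c):
    """Double-model uplift estimator: one regression on the treatment
    group, one on the control group; the predicted uplift for x is
    the difference of the two predictions."""
    a_t, b_t = ols_1d(x_t, y_t)
    a_c, b_c = ols_1d(x_c, y_c)
    return lambda x: (a_t + b_t * x) - (a_c + b_c * x)

# Made-up data: treatment response 2 + x, control response 1 + 0.5*x,
# so the true uplift is 1 + 0.5*x.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
uplift = uplift_two_models(x, [2 + v for v in x], x, [1 + 0.5 * v for v in x])
```

With noiseless data the fits are exact, so the recovered uplift matches 1 + 0.5·x.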
12. Data-driven prediction model for adjusting burden distribution matrix of blast furnace based on improved multilayer extreme learning machine.
- Author
-
Su, Xiaoli, Zhang, Sen, Yin, Yixin, Liu, Yanan, and Xiao, Wendong
- Subjects
DISTRIBUTION (Probability theory), MACHINE learning, BLAST furnaces, ALGORITHMS, PREDICTION models, COMPUTER simulation - Abstract
A reasonable burden distribution matrix is one of the important requirements for realizing low consumption, high efficiency, high quality and a long campaign life of the blast furnace. This paper proposes a data-driven prediction model for adjusting the burden distribution matrix based on an improved multilayer extreme learning machine (ML-ELM) algorithm. The improved algorithm, named EPLS-ML-ELM, builds on our previously modified ML-ELM algorithm (PLS-ML-ELM) and an ensemble model. The PLS-ML-ELM algorithm uses the partial least squares (PLS) method to improve the algebraic properties of the last hidden-layer output matrix of the ML-ELM algorithm. However, PLS-ML-ELM may give different results in different simulation trials; the ensemble model overcomes this problem and also improves generalization performance. Hence, the EPLS-ML-ELM algorithm consists of several PLS-ML-ELMs. Real blast furnace data are used to validate the data-driven prediction model. Compared with prediction models based on the SVM, ELM, ML-ELM and PLS-ML-ELM algorithms, the simulation results demonstrate that the model based on the EPLS-ML-ELM algorithm has better prediction accuracy and generalization performance. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
13. An improved distributed compressed video sensing scheme in reconstruction algorithm.
- Author
-
Zheng, Shuai, Chen, Jian, and Kuo, Yonghong
- Subjects
VIDEO compression, ALGORITHMS, WIRELESS sensor networks, ENCODING, COMPUTER simulation - Abstract
In new video application scenarios with resource-constrained encoders, such as wireless sensor networks, compressed sensing offers a way to resolve the high complexity of the encoder thanks to its highly efficient compression-encoding performance. A distributed compressed video sensing system satisfies the requirements of low encoder complexity and high coding efficiency in this setting. This paper proposes a new distributed compressed video sensing scheme that effectively improves the reconstruction quality of non-key frames. An auxiliary iterative-termination decision algorithm is proposed to improve the initial reconstruction of key frames, and an adaptive weight-prediction algorithm is put forward to reduce the overall complexity. In addition, the paper proposes a position-based cross-reconstruction algorithm to improve the decoded quality of the middle non-key frames in the group of pictures. Simulation results show that the proposed scheme effectively improves the overall performance of the distributed compressed video sensing system, especially for high-motion sequences. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
14. A Computationally Efficient Received Signal Strength Based Localization Algorithm in Closed-Form for Wireless Sensor Network.
- Author
-
Zhang, Xinrong, Xiong, Weili, and Xu, Baoguo
- Subjects
WIRELESS communications, ERRORS, LEAST squares, ALGORITHMS, COMPUTER simulation - Abstract
Ranging error is known to significantly degrade target-node localization accuracy. This paper investigates a computationally efficient closed-form least squares (LS) positioning solution to reduce the localization accuracy loss caused by ranging error. For range-based node localization, the LS solution is known to provide optimum estimation, but at a very high computational cost; we consider how to obtain an LS solution with comparable estimation performance at low computational cost. We adopt a Gaussian noise model, use the weighted least squares criterion and an efficient calculation method to solve the linearized equations derived from the RSS measurements, and put forward a new approach to evaluating the performance of target-node location estimation. Based on the Fisher information matrix, the Cramér–Rao lower bound of the position estimate from received signal strength is derived. We show that the proposed algorithm approximately matches the estimation performance of the LS solution at markedly lower computational cost. Simulations are performed to show the improvement of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
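The closed-form step this abstract describes can be sketched as follows: invert a log-distance path-loss model to get ranges from RSS, then subtract one range equation from the others so the quadratic terms cancel and the position solves a linear system. The path-loss parameters and anchor layout are illustrative assumptions, and plain (unweighted) LS with three anchors is used here rather than the paper's weighted variant.

```python
import math

def rss_to_distance(rss_dbm, p0=-40.0, n=2.0):
    """Invert the log-distance path-loss model RSS = p0 - 10*n*log10(d):
    p0 is the RSS at 1 m and n the path-loss exponent (assumed known)."""
    return 10 ** ((p0 - rss_dbm) / (10 * n))

def localize(anchors, dists):
    """Closed-form localization: subtracting the first range equation
    (x-x1)^2 + (y-y1)^2 = d1^2 from the others cancels the quadratic
    terms, leaving a linear system -- exactly determined here with
    three anchors (a 2x2 solve)."""
    (x1, y1), d1 = anchors[0], dists[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        rows.append((2 * (xi - x1), 2 * (yi - y1)))
        rhs.append(d1 ** 2 - di ** 2 + xi ** 2 - x1 ** 2 + yi ** 2 - y1 ** 2)
    (a, b), (c, d) = rows
    det = a * d - b * c
    return ((rhs[0] * d - b * rhs[1]) / det,
            (a * rhs[1] - rhs[0] * c) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = (3.0, 4.0)
dists = [math.dist(target, a) for a in anchors]   # noiseless ranges
est = localize(anchors, dists)
```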
15. A coalition formation game based relay selection scheme for cooperative cognitive radio networks.
- Author
-
Huo, Yan, Liu, Lingling, Ma, Liran, Zhou, Wei, Cheng, Xiuzhen, Jing, Tao, and Jiang, Xiaobing
- Subjects
COGNITIVE radio, WIRELESS cooperative communication, NETWORK performance, COMPUTER simulation, UTILITY functions, ALGORITHMS - Abstract
In a cognitive radio network, cooperative communications between a primary user (PU) and a secondary user (SU) may significantly improve the spectrum utilization and, thus, the network performance. Specifically, the PU can select a number of SUs as its relays to cooperatively transmit its data; in turn, these relays are granted access to the licensed channel of the PU to transmit their own data. In this paper, an effective cooperation strategy for SUs is presented. We formulate the problem of cooperative relay selection as a coalition formation game and develop a utility function based on the game. The utility function considers factors such as transmission power and noise level. With this utility function, a distributed coalition formation algorithm is proposed, which SUs can use to decide whether to join or leave a coalition; the decision is based on whether it increases the maximal coalition utility value. We rigorously prove that the proposed coalition formation algorithm terminates and reaches a stable state. Finally, the paper demonstrates via a simulation study that the proposed scheme enhances the network throughput. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
16. A data transmission algorithm for distributed computing system based on maximum flow.
- Author
-
Zhang, Xiaolu, Jiang, Jiafu, Zhang, Xiaotong, and Wang, Xuan
- Subjects
DISTRIBUTED computing, BANDWIDTHS, DATA transmission systems, COMPUTER simulation, ALGORITHMS, MANAGEMENT - Abstract
Data skew can lead to load imbalance and longer computation time in a distributed computing system. To avoid data skew and reduce the data computation time, the data must be transmitted to appropriate machines; this, however, may consume too much network resource. Balancing the computational resources against the network resources is the problem. In this paper, we introduce a computation model called the distributed two-phase model, in which a task is divided into two independent phases: data transmission and data computation. Given an upper bound on the relative computation time, we show how to schedule data transmission with minimum resources, such as data transmission time and occupied bandwidth, to meet the demand. We present a novel algorithm that minimizes data transmission time and network bandwidth usage in the data transmission phase under this upper-bound condition. Moreover, the number of nodes participating in the data computation phase is also reduced, saving computational resources. The simulation results show that the occupied bandwidth can be reduced effectively (by about 70%) for large-scale data sets and large numbers of nodes. Our algorithm is also shown to work in the replication situation. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
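The maximum-flow primitive in the title can be sketched with the standard Edmonds–Karp algorithm. The topology and capacities below are hypothetical; the paper's scheduler adds the two-phase timing constraints on top of such a flow computation.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow: repeatedly push flow along the
    shortest (BFS) augmenting path in the residual graph. `cap` is a
    dict-of-dicts of residual capacities, updated in place."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:               # no augmenting path left
            return flow
        path, v = [], t
        while parent[v] is not None:      # walk back to the source
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:                 # update residual capacities
            cap[u][v] -= push
            cap[v][u] = cap[v].get(u, 0) + push
        flow += push

# Hypothetical transfer capacities (say, MB/s) between compute nodes.
cap = {'s': {'a': 10, 'b': 5}, 'a': {'t': 7, 'b': 3}, 'b': {'t': 8}, 't': {}}
f = max_flow(cap, 's', 't')
```

Here the flow routes 7 along s→a→t, 3 along s→a→b→t, and 5 along s→b→t.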
17. A robust and efficient estimation method for partially nonlinear models via a new MM algorithm.
- Author
-
Jiang, Yunlu, Tian, Guo-Liang, and Fei, Yu
- Subjects
ASYMPTOTIC normality, QUANTILE regression, STATISTICAL models, QUANTILES, ALGORITHMS, COMPUTER simulation, DATA analysis - Abstract
When the observed data set contains outliers, it is well known that the classical least squares method is not robust. To overcome this difficulty, Wang et al. (J Am Stat Assoc 108(502): 632–643, 2013) proposed a robust variable selection method using the exponential squared loss (ESL) function with a tuning parameter. Although many important statistical models have been investigated, to date no work has studied the partially nonlinear model with the ESL function in the presence of outliers. To fill this gap, we propose a robust and efficient estimation method for the partially nonlinear model based on the ESL function. Under certain conditions, we show that the proposed estimators achieve the best convergence rates, and we establish their asymptotic normality. In addition, we develop a new minorization–maximization algorithm to calculate the estimates for both the nonparametric and parametric parts, present a procedure for deriving initial values, and provide a data-driven approach to selecting the tuning parameters. Numerical simulations and a real data analysis illustrate that, in the presence of outliers, the proposed ESL method is more robust and efficient for partially nonlinear models than the existing linear approximation method and the composite quantile regression method. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
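The exponential squared loss and the MM flavor of reweighting can be sketched on the simplest possible case, robust location estimation (the paper treats the much richer partially nonlinear model): minimizing Σ(1 − exp(−r²/γ)) leads to iteratively reweighted averaging with weights exp(−r²/γ), so gross outliers get near-zero weight. The data and γ below are illustrative assumptions.

```python
import math

def esl_robust_mean(xs, gamma=1.0, iters=50):
    """MM-style iteratively reweighted location estimate under the
    exponential squared loss rho(r) = 1 - exp(-r**2 / gamma). Each
    pass weights every point by exp(-r**2 / gamma), so points with
    large residuals r barely influence the update, unlike in least
    squares where they dominate."""
    mu = sum(xs) / len(xs)                # least-squares start
    for _ in range(iters):
        w = [math.exp(-(x - mu) ** 2 / gamma) for x in xs]
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return mu

data = [1.0, 1.1, 0.9, 1.05, 0.95, 50.0]  # one gross outlier
mu = esl_robust_mean(data)                 # stays near 1; plain mean ~9.2
```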
18. Efficient algorithm for full-state quantum circuit simulation with DD compression while maintaining accuracy.
- Author
-
Song, Yuhong, Sha, Edwin Hsing-Mean, Zhuge, Qingfeng, Xu, Rui, and Wang, Han
- Subjects
QUANTUM communication, ALGORITHMS, SHORT-term memory, QUBITS, QUANTUM wells, COMPUTER simulation - Abstract
With the development of noisy intermediate-scale quantum machines, quantum processors show their supremacy in specific applications. To better understand quantum behavior and verify larger quantum bit (qubit) algorithms, simulation on classical computers becomes crucial. However, as the number of simulated qubits increases, full-state simulation suffers exponential memory growth for storing the state vector. To compress the state vector, some existing works reduce memory with data-encoding compressors, yet the memory requirement remains massive. Others use compact decision diagrams (DD) to represent the state vector, which demands only linear memory; however, the existing DD-based simulation algorithm contains many redundant calculations that require further exploration. Besides, the traditional normalization-based node-merging method of DD amplifies the side effects of approximation error. To tackle these challenges, in this paper we first fully explore the redundancies in the recursion-based DD simulation (RecurSim) algorithm. Inspired by the regularities of the quantum circuit model, a scale-based simulation (ScaleSim) algorithm is proposed, which removes many unnecessary computations. Furthermore, to eliminate the influence of approximation error, we propose a new pre-check DD building method, PCB, which maintains the accuracy of the DD representation and produces further memory savings. Comprehensive experiments show that our method achieves up to 24,124.2× acceleration and 3.2 × 10⁷× memory reduction over traditional DD-based methods on quantum algorithms while maintaining representation accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
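For contrast with the DD compression studied here, the uncompressed full-state simulation is easy to sketch: an n-qubit state is a vector of 2ⁿ amplitudes, and a single-qubit gate updates amplitude pairs in place. This is the exponential-memory baseline the paper improves on, not the paper's method.

```python
import math

def apply_single_qubit_gate(state, gate, q, n):
    """Apply a 2x2 gate to qubit q of an n-qubit state vector. The
    vector holds 2**n complex amplitudes -- the exponential memory
    cost that decision-diagram simulators compress away. Amplitudes
    are paired across bit q of their index."""
    (a, b), (c, d) = gate
    step = 1 << q
    for base in range(0, 1 << n, step << 1):
        for i in range(base, base + step):
            s0, s1 = state[i], state[i + step]
            state[i] = a * s0 + b * s1
            state[i + step] = c * s0 + d * s1
    return state

n = 3
state = [0j] * (1 << n)
state[0] = 1 + 0j                         # start in |000>
h = 1 / math.sqrt(2)
H = ((h, h), (h, -h))                     # Hadamard gate
for q in range(n):                        # H on every qubit
    apply_single_qubit_gate(state, H, q, n)
```

Applying Hadamard to every qubit yields the uniform superposition: all eight amplitudes equal 1/√8.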
19. Iterative QR Decomposition-Based Parallel Diversity Noncoherent Detection Algorithm.
- Author
-
Wang, Jieling, Zhou, Bin, and Zhao, Mao
- Subjects
PARALLEL algorithms, MULTIUSER computer systems, ALGORITHMS, SUPPLY & demand, COMPUTER simulation, PSYCHOLOGICAL feedback - Abstract
Non-orthogonal multipulse modulation (NMM) has been proven highly efficient in supplying diversity compared with the conventional direct-sequence spreading system, and the multiuser system constructed with NMM can exploit both capacity and diversity. However, as in conventional code division multiple access (CDMA) systems, multi-access interference (MAI) also appears in NMM-based multiuser systems, so MAI has to be mitigated to improve system performance. Aiming at the MAI in NMM multiuser systems, the QR decomposition-based noncoherent multiuser receiver has been regarded as an effective method for non-orthogonal multipulse modulation systems. Building on that, in this paper we put forward an iterative decision-feedback scheme to pursue diversity, in which two different interference cancellation algorithms are applied alternately according to the upper and lower triangular matrices obtained by QR decompositions, respectively. To optimize the detection, the Maximum Rule and Average Rule criteria are demonstrated and compared by numerical simulations. Finally, a parallel implementation structure is proposed that can halve the processing delay of the overall algorithm, and the approximate spectral efficiency of the proposed algorithm is presented. Computer simulations verify the proposed schemes, and the results show that SNR gains of 1 dB and 2 dB are obtained by our iterative decision-feedback schemes with the Maximum Rule and the Average Rule, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. Multidimensional Blind Separation Using Higher-Order Statistics: Application to Non-Cooperative STBC Systems.
- Author
-
Luo, Minggang, Li, Liping, Qian, Guobing, and Liao, Hongshu
- Subjects
SPACE-time block codes, INDEPENDENT component analysis, ALGORITHMS, STOCHASTIC convergence, COMPUTER simulation - Abstract
Blindly separating intercepted signals is a challenging problem in non-cooperative multiple input multiple output systems using space-time block codes (STBC) when the channel state information and coding matrix are unavailable. To our knowledge, there is no report dealing with this problem in the literature. In this paper, the STBC systems are represented with an independent component analysis (ICA) model by merging the channel and coding matrices into a virtual channel matrix. Analysis shows that the source signals exhibit group-wise independence and that the mutual-independence condition required by ordinary ICA algorithms cannot be satisfied when specific modulations are employed. A new multidimensional ICA algorithm is proposed to separate the intercepted signals in this case by jointly block-diagonalizing (JBD) the cumulant matrices. JBD is achieved by a two-step optimization algorithm, and a contrast function is derived from the JBD criterion to remove the additional permutation ambiguity, with explicit mathematical explanations. The convergence of the new method is guaranteed. Compared with ICA-based channel estimation methods, simulations show that the new algorithm, which introduces no additional ambiguities, achieves better performance with faster convergence in a non-cooperative scenario. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
21. Active learning of user's preferences estimation towards a personalized 3D navigation of geo-referenced scenes.
- Author
-
Yiakoumettis, Christos, Doulamis, Nikolaos, Miaoulis, Georgios, and Ghazanfarpour, Djamchid
- Subjects
ACTIVE learning, PARAMETER estimation, GEOGRAPHIC information systems, ALGORITHMS, COMPUTATIONAL complexity, VIRTUAL tourism, ROBOTIC path planning, COMPUTER simulation - Abstract
Current technological evolution is bringing 3D geo-informatics into its digital age, enabling new applications in virtual tourism, leisure, entertainment and cultural heritage. It is argued that 3D information provides the natural way of navigation. However, personalization is a key aspect of a navigation system, since a route that incorporates user preferences is ultimately more suitable than the route with the shortest distance or travel time. Usually, a user's preferences are expressed as a set of weights that regulate the degree of importance of the scene metadata in the route selection process. These weights, however, are defined by the users, placing the complexity on the user's side, which makes personalization an arduous task. In this paper, we propose an alternative approach in which metadata weights are estimated implicitly and transparently to the users, transferring the complexity to the system side. This is achieved by introducing a relevance-feedback on-line learning strategy that automatically adjusts metadata weights by exploiting information fed back to the system about the relevance of the user's preference judgments, given in the form of pair-wise comparisons. Practically implementing a relevance feedback algorithm has the limitation that several pair-wise comparisons (samples) are required to converge to a set of reliable metadata weights. For this reason, we propose a weight rectification strategy that improves weight estimation by exploiting metadata interrelations defined through an ontology. In the sequel, a genetic optimization algorithm is incorporated to select the most preferred routes based on a multi-criteria minimization approach.
To increase the degree of personalization in 3D navigation, we have also introduced an efficient algorithm for estimating 3D trajectories around objects of interest by merging the best selected 2D projected views that contain the faces most preferred by the users. We have conducted simulations and comparisons with other approaches, in both on-line learning and route selection, using objective metrics in terms of precision and recall values. The results indicate that our system yields on average a 13.76 % improvement in precision for the learning strategy and an improvement of 8.75 % for route selection. In addition, we conclude that the ontology-driven weight rectification strategy can reduce the number of samples (pair-wise comparisons) required by 76 % while achieving the same precision. Qualitative comparisons have also been performed using a use-case route scenario in the city of Athens. [ABSTRACT FROM AUTHOR]
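The relevance-feedback weight update described in the abstract can be sketched in a much-simplified, perceptron-style form. The feature names, learning rate, and update rule below are illustrative assumptions, not the paper's actual on-line learner or its ontology-based rectification:

```python
# Toy relevance-feedback sketch: after a pair-wise comparison, the
# preferred route's stronger features gain weight and the rejected
# route's stronger features lose weight.

def update_weights(w, preferred, rejected, lr=0.1):
    """w, preferred, rejected: equal-length feature-score lists."""
    return [wi + lr * (p - r) for wi, p, r in zip(w, preferred, rejected)]

w = [0.5, 0.5, 0.5]                 # hypothetical metadata weights
w = update_weights(w,
                   preferred=[0.9, 0.2, 0.4],   # route the user chose
                   rejected=[0.1, 0.8, 0.4])    # route the user passed over
# the first feature's weight grows, the second shrinks, the third is unchanged
```

In the paper, many such comparisons are accumulated and the weights are further rectified through the ontology before the genetic route-selection step.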
- Published
- 2014
- Full Text
- View/download PDF
22. A quantum image encryption algorithm based on the Feistel structure.
- Author
-
Guo, Limei, Du, Hongwei, and Huang, Duan
- Subjects
BLOCK ciphers ,IMAGE encryption ,QUANTUM computers ,ALGORITHMS ,IMAGE processing ,NUMERICAL analysis ,COMPUTER simulation - Abstract
Because many classical numerical methods do not yet have mature quantum counterparts, quantum circuit design is very important in quantum image processing. In this paper, using the novel enhanced quantum representation (NEQR) model, an image encryption algorithm based on the Feistel structure is implemented on a quantum computer by giving the encryption quantum circuits. First, the modified Feistel structure for image encryption is proposed. It is a 128-bit block cipher that requires 16-bit subkeys to encrypt the image, and it mixes the Feistel and substitution–permutation network structures. Then, the detailed quantum circuit design of the encryption algorithm is given. Numerical simulation and analysis verify that the proposed quantum image encryption algorithm is effective and can resist statistical attacks. [ABSTRACT FROM AUTHOR]
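For orientation, a balanced Feistel network of the kind the cipher builds on can be sketched classically. The round function, key values, and round count below are toy assumptions; the paper's contribution, the quantum circuit realization over NEQR images, is not reproduced here:

```python
# Classical Feistel sketch: a 128-bit block is split into two 64-bit
# halves; each round mixes one half with a 16-bit subkey.  Decryption
# runs the same structure with the subkeys in reverse order.
MASK64 = (1 << 64) - 1

def round_f(half, subkey):
    """Toy round function mixing a 64-bit half-block with a 16-bit subkey."""
    return ((half * 0x9E37 + subkey) ^ (half >> 7)) & MASK64

def feistel_encrypt(block, subkeys):
    left, right = block >> 64, block & MASK64
    for k in subkeys:
        left, right = right, left ^ round_f(right, k)
    return (left << 64) | right

def feistel_decrypt(block, subkeys):
    """The Feistel structure is inverted without inverting round_f."""
    left, right = block >> 64, block & MASK64
    for k in reversed(subkeys):
        left, right = right ^ round_f(left, k), left
    return (left << 64) | right

keys = [0x1234, 0xBEEF, 0x0F0F, 0xCAFE]     # toy 16-bit subkeys
pt = 0x0123456789ABCDEFFEDCBA9876543210     # one 128-bit block
ct = feistel_encrypt(pt, keys)
```

The key property exploited here is that a Feistel network is invertible even when the round function is not, which is what makes the structure attractive for cipher design.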
- Published
- 2022
- Full Text
- View/download PDF
23. Data-driven spatial branch-and-bound algorithms for box-constrained simulation-based optimization.
- Author
-
Zhai, Jianyuan and Boukouvala, Fani
- Subjects
MATHEMATICAL optimization ,BENCHMARK problems (Computer science) ,QUANTITATIVE research ,CONSTRAINED optimization ,ALGORITHMS ,COMPUTER simulation - Abstract
The ability to use complex computer simulations in quantitative analysis and decision-making is highly desired in science and engineering, as computational capabilities and first-principles knowledge advance. Due to the complexity of simulation models, direct embedding of equation-based optimization solvers may be impractical, and data-driven optimization techniques are often needed. In this work, we present a novel data-driven spatial branch-and-bound algorithm for simulation-based optimization problems with box constraints, aiming for consistent globally convergent solutions. The main contribution of this paper is the introduction of the concept of data-driven convex underestimators of data and surrogate functions, which are employed within a spatial branch-and-bound algorithm. The algorithm is showcased by an illustrative example and is then extensively studied via computational experiments on a large set of benchmark problems. [ABSTRACT FROM AUTHOR]
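A minimal sketch of spatial branch-and-bound on a box-constrained 1-D problem follows. As a stand-in for the paper's data-driven convex underestimators, it uses a Lipschitz lower bound (with an assumed constant L), which plays the same pruning role: boxes whose underestimator cannot beat the incumbent are discarded, the rest are split.

```python
# Spatial branch-and-bound sketch: sample the box midpoint, bound the
# box from below, prune or branch.  The test function and L are
# illustrative; a data-driven underestimator would replace `lower`.
import math

def f(x):                          # black-box "simulation" to minimize
    return math.sin(3 * x) + 0.5 * x * x

def branch_and_bound(lo, hi, L=5.0, tol=1e-3):
    best_x, best_f = lo, f(lo)
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        m = 0.5 * (a + b)
        fm = f(m)
        if fm < best_f:
            best_x, best_f = m, fm
        # valid underestimator over [a, b]: f(x) >= fm - L*(b-a)/2
        lower = fm - L * (b - a) / 2
        if lower < best_f - tol:        # box may still hold the optimum
            stack += [(a, m), (m, b)]   # branch: split the box
    return best_x, best_f

x_star, f_star = branch_and_bound(-2.0, 2.0)
```

Pruning guarantees the returned value is within `tol` of the global minimum over the box, because a box is only discarded when its lower bound proves it cannot improve the incumbent by more than `tol`.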
- Published
- 2022
- Full Text
- View/download PDF
24. Flow location (FlowLoc) problems: dynamic network flows and location models for evacuation planning.
- Author
-
Hamacher, Horst, Heller, Stephanie, and Rupp, Benjamin
- Subjects
COMPUTER simulation ,ALGORITHMS ,INTEGER programming ,PARAMETRIC modeling ,HEURISTIC programming ,MATROIDS - Abstract
In this paper we combine two modeling tools to predict and evaluate evacuation plans: (dynamic) network flows and locational analysis. We present three exact algorithms to solve the single facility version 1-FlowLoc of this problem and compare their running times. After proving the $\mathcal{NP}$-completeness of the multi facility q-FlowLoc problem, a mixed integer programming formulation and a heuristic for q-FlowLoc are proposed. The paper is concluded by discussing some generalizations of the FlowLoc problem, such as the multi-terminal problem, interdiction problem, the parametric problem and the generalization of the FlowLoc problem to matroids. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
25. Filling 2D domains with disks using templates for discrete element model generation.
- Author
-
Zsaki, Attila
- Subjects
FILLER materials ,DISCRETE systems ,COMPUTER simulation ,COMPUTATIONAL geometry ,ALGORITHMS - Abstract
The representation of discontinuum in discrete element simulations requires an assemblage of elements. Although in 2D the simplest and most often used element type is a disk, filling domains with disks of variable radii is not a trivial task. The initial assemblage of elements requires re-generation for different boundary geometries, thus discarding the effort and time required to generate it. This paper proposes the development of a reusable library of element assemblages that can be applied to any problem with minimum effort, drastically reducing model generation times. Although any qualifying element assemblage can be used in the interior field of a problem, the problem-specific regions close to the model boundaries need to be resolved. This paper, building on a previously developed and published algorithm and expanding it with new features, presents a fast and simple method to accomplish boundary conformance. The applicability of the proposed method is demonstrated on a draw point chute. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
26. On deriving test suites for nondeterministic finite state machines with time-outs.
- Author
-
Shabaldina, N. and Galimullin, R.
- Subjects
ALGORITHMS ,FINITE state machines ,COMPUTER simulation ,COMPUTER science ,SCIENCE ,ALGEBRA - Abstract
In the paper, the non-separability relation for finite state machines with time-outs is studied. A specific feature of such machines is integer-valued delays, or time-outs, which determine how long the finite state machine will stay in one or another state if there are no input actions. If the time-out is over and no input symbol has been applied, then the TFSM state is changed according to the transition under time delay. In the paper, an algorithm for constructing a separating sequence for such finite state machines is presented. Here, the separating sequence is a timed input sequence for which the sets of output sequences of the TFSMs do not intersect; hence, it is sufficient to apply the separating sequence once in order to distinguish the TFSMs by their output reactions. This algorithm underlies the algorithm for construction of test suites with respect to the non-separability relation in the case where the fault domain is specified by means of a mutation machine. Test suite derivation with respect to the non-separability relation by way of 'TFSM to FSM' transformation is discussed. [ABSTRACT FROM AUTHOR]
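In its simplest deterministic, untimed form, a separating (distinguishing) sequence can be found by breadth-first search over state pairs; the sketch below omits the paper's time-outs and nondeterminism entirely, and the two toy machines are illustrative:

```python
# BFS over pairs of states of two deterministic FSMs: the first input
# sequence on which the machines produce different outputs separates them.
from collections import deque

def separating_sequence(fsm_a, fsm_b, s0_a, s0_b, inputs):
    """fsm = dict mapping (state, input) -> (next_state, output)."""
    seen = {(s0_a, s0_b)}
    queue = deque([(s0_a, s0_b, [])])
    while queue:
        sa, sb, seq = queue.popleft()
        for x in inputs:
            na, oa = fsm_a[(sa, x)]
            nb, ob = fsm_b[(sb, x)]
            if oa != ob:
                return seq + [x]            # outputs differ: separated
            if (na, nb) not in seen:
                seen.add((na, nb))
                queue.append((na, nb, seq + [x]))
    return None                             # machines are equivalent

# Two machines that agree on the first transition but differ on the second.
fsm_a = {(0, 'a'): (1, '0'), (1, 'a'): (0, '1')}
fsm_b = {(0, 'a'): (1, '0'), (1, 'a'): (0, '0')}
seq = separating_sequence(fsm_a, fsm_b, 0, 0, ['a'])
```

Handling TFSMs additionally requires tracking timed inputs and, under nondeterminism, sets of reachable state pairs, which is where the paper's algorithm departs from this sketch.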
- Published
- 2012
- Full Text
- View/download PDF
27. Properties of the DGS-Auction Algorithm.
- Author
-
Andersson, Tommy and Andersson, Christer
- Subjects
AUCTIONS ,ALGORITHMS ,GRAPH theory ,MEASURE theory ,PERFORMANCE evaluation ,ECONOMIC convergence ,COMPUTER simulation - Abstract
This paper investigates algorithmic properties and overall performance of the exact auction algorithm in Demange, Gale and Sotomayor (J. Polit. Economy 94: 863-872, 1986) or DGS for short. This task is achieved by interpreting DGS as a graph and by conducting a large number of computer simulations. The crucial step in DGS is when the auctioneer selects a so-called minimal overdemanded set of items because the specific selection may affect a number of performance measures such as the number of iterations and the ratio of elicited preferences. The computational results show that (i) DGS graphs are typically large even for relatively small numbers of bidders and items, (ii) DGS converges slowly and (iii) DGS performs well in terms of preference elicitation. The paper also demonstrates that the modification to DGS based on the Ford-Fulkerson algorithm outperforms all investigated rules for selecting a minimal overdemanded set of items in DGS both in terms of termination speed and preference elicitation. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
28. Grouping and Selecting Singular Spectrum Analysis Components for Denoising Via Empirical Mode Decomposition Approach.
- Author
-
Lin, Peiru, Kuang, Weichao, Liu, Yuwei, and Ling, Bingo Wing-Kuen
- Subjects
HILBERT-Huang transform ,ALGORITHMS ,COMPUTER simulation ,SIGNAL-to-noise ratio ,SIGNAL denoising - Abstract
This paper proposes a threshold-free method for grouping and selecting the singular spectrum analysis (SSA) components for performing the signal denoising via the empirical mode decomposition (EMD) approach. First, the total number of the groups of the SSA components is selected to be the same as the total number of the intrinsic mode functions (IMFs) of the signal. The SSA components are assigned to the group where the absolute correlation coefficient between the IMF and the SSA component is the highest. This grouping method is implemented using the matching pursuit algorithm. Then, the groups of the SSA components are selected based on the selection criterion used in an existing EMD denoising method. As the EMD denoising approach is a time-domain approach and the SSA components are represented in the transformed domain, our proposed method exploits both the time-domain and the transformed-domain information for performing the denoising. Computer numerical simulation results show that the signal-to-noise ratios of common practical signals denoised by our proposed method are higher than those denoised by the existing methods. [ABSTRACT FROM AUTHOR]
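The grouping step can be sketched directly: each SSA component is assigned to the IMF with which its absolute correlation coefficient is highest. Toy sinusoids stand in for real SSA components and EMD IMFs, and the paper's matching-pursuit implementation and group-selection criterion are omitted:

```python
# Assign each SSA component to the IMF it correlates with most strongly.
from math import sqrt, sin, pi

def corr(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def group_components(ssa_components, imfs):
    """Return, for every SSA component, the index of its best-matching IMF."""
    return [max(range(len(imfs)), key=lambda k: abs(corr(c, imfs[k])))
            for c in ssa_components]

t = [i / 400 for i in range(400)]
imfs = [[sin(2 * pi * 50 * u) for u in t],       # fast oscillation
        [sin(2 * pi * 5 * u) for u in t]]        # slow oscillation
ssa = [[sin(2 * pi * 50 * u + 0.1) for u in t],  # fast-like component
       [sin(2 * pi * 5 * u + 0.2) for u in t]]   # slow-like component
groups = group_components(ssa, imfs)
```

Because the number of groups equals the number of IMFs, each group can then be kept or discarded using the EMD-based selection rule the abstract refers to.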
- Published
- 2019
- Full Text
- View/download PDF
29. A hierarchical learning approach to anti-jamming channel selection strategies.
- Author
-
Yao, Fuqiang, Jia, Luliang, Sun, Youming, Xu, Yuhua, Feng, Shuo, and Zhu, Yonggang
- Subjects
CELL phone jamming ,MACHINE learning ,FEATURE selection ,ALGORITHMS ,COMPUTER simulation ,NETWORK performance - Abstract
This paper investigates the channel selection problem for anti-jamming defense in an adversarial environment. In our work, we simultaneously consider malicious jamming and co-channel interference among users, and formulate this anti-jamming defense problem as a Stackelberg game with one leader and multiple followers. Specifically, the users and jammer independently and selfishly select their respective optimal strategies and obtain the optimal channels based on their own utilities. To derive the Stackelberg Equilibrium, a hierarchical learning framework is formulated, and a hierarchical learning algorithm (HLA) is proposed. In addition, the convergence performance of the proposed HLA algorithm is analyzed. Finally, we present simulation results to validate the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
30. Relative trajectory-driven virtual dynamic occlusal adjustment for dental restorations.
- Author
-
Tian, Sukun, Dai, Ning, Cheng, Xiaosheng, Li, Linlin, Sun, Yuchun, and Cui, Haihua
- Subjects
DENTAL fillings ,LAPLACIAN matrices ,VIRTUAL reality ,DEFORMATIONS (Mechanics) ,DENTISTS ,LAPLACIAN operator ,ALGORITHMS ,COMPUTER simulation ,CORRECTIVE orthodontics - Abstract
Abnormal occlusal contact can disrupt the coordination and health of the oral jaw system. Therefore, the dynamic adjustment of the occlusal surface is of great significance for assessing the status of occlusal contact and clarifying jaw factors of stomatognathic system diseases. To solve this problem, a trajectory subtraction algorithm based on screw theory is proposed in our paper to improve the accuracy of the occlusal movement trajectory. Driven by the relative trajectory, a virtual dynamic occlusal adjustment system is developed to realize 3D occlusal movement simulation, automatic occluding-relation detection, and automatic occlusal adjustment. Furthermore, we adopt an active occlusal adjustment method based on Laplacian deformation to increase the contact areas of the occlusal surface, which can help dentists automatically adjust the non-interference regions. The proposed subtraction algorithm is feasible, with a root-mean-square error of 0.097 mm, and the adjusted occlusal surface is more consistent with the natural occlusal morphology. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
31. Robust rendezvous for multi-robot system with random node failures: an optimization approach.
- Author
-
Park, Hyongju and Hutchinson, Seth
- Subjects
ROBOTS ,DYNAMICS ,ALGORITHMS ,COMPUTER simulation ,ROBUST control - Abstract
In this paper, we consider the problem of designing distributed control algorithms to solve the rendezvous problem for multi-robot systems with limited sensing, in situations where random nodes may fail during execution. We first formulate a distributed solution based upon averaging algorithms that have been reported in the consensus literature. In this case, at each stage of execution a one-step sequential optimal control (i.e., a naïve greedy algorithm) is used. We propose a distributed stochastic optimal control algorithm that minimizes a mean-variance cost function for each stage, given that the probability distribution of possible node failures is known a priori, as well as a minimax version of the problem for when the prior probability distribution is not known. We demonstrate via extensive numerical simulations that our proposed algorithm provides statistically better rendezvous task performance than contemporary algorithms in cases where failures occur. [ABSTRACT FROM AUTHOR]
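The averaging baseline (the one-step greedy control mentioned in the abstract) can be sketched as follows; the sensing radius, positions, and step count are illustrative, and node failures and the paper's mean-variance optimal control are omitted:

```python
# Consensus-style rendezvous sketch: each robot moves to the mean of the
# positions it can sense within a limited radius (itself included).

def rendezvous_step(positions, radius):
    nxt = []
    for xi, yi in positions:
        nbrs = [(xj, yj) for xj, yj in positions
                if (xj - xi) ** 2 + (yj - yi) ** 2 <= radius ** 2]
        nxt.append((sum(x for x, _ in nbrs) / len(nbrs),
                    sum(y for _, y in nbrs) / len(nbrs)))
    return nxt

pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
for _ in range(20):
    pts = rendezvous_step(pts, radius=2.0)   # robots contract to a point
```

With the sensing graph connected at every step, repeated averaging drives all positions to a common point, which is the rendezvous condition the paper's robust variants must preserve under failures.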
- Published
- 2018
- Full Text
- View/download PDF
32. Optimizing Adaptive Notifications in Mobile Health Interventions Systems: Reinforcement Learning from a Data-driven Behavioral Simulator.
- Author
-
Wang, Shihan, Zhang, Chao, Kröse, Ben, and van Hoof, Herke
- Subjects
COMPUTER simulation ,HEALTH care reminder systems ,REINFORCEMENT (Psychology) ,RESEARCH funding ,ARTIFICIAL neural networks ,TELEMEDICINE ,ALGORITHMS - Abstract
Mobile health (mHealth) intervention systems can employ adaptive strategies to interact with users. Instead of designing such complex strategies manually, reinforcement learning (RL) can be used to adaptively optimize intervention strategies concerning the user's context. In this paper, we focus on the issue of overwhelming interactions when learning a good adaptive strategy for the user in RL-based mHealth intervention agents. We present a data-driven approach integrating psychological insights and knowledge of historical data. It allows RL agents to optimize the strategy of delivering context-aware notifications from empirical data when counterfactual information (user responses when receiving notifications) is missing. Our approach also considers a constraint on the frequency of notifications, which reduces the interaction burden for users. We evaluated our approach in several simulation scenarios using real large-scale running data. The results indicate that our RL agent can deliver notifications in a manner that realizes a higher behavioral impact than context-blind strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
33. Characteristics Analysis of the Fractional-Order Chaotic Memristive Circuit Based on Chua's Circuit.
- Author
-
Yang, Feifei and Li, Peng
- Subjects
CHAOTIC communication ,DECOMPOSITION method ,ALGORITHMS ,BIFURCATION diagrams ,ENTROPY ,COMPUTER simulation - Abstract
In this paper, a new fractional-order memristive circuit is defined based on the canonical Chua's circuit and a voltage-controlled memristor model. The fractional-order chaotic system is solved by the conformable Adomian decomposition method (CADM), and its complexity characteristics are analyzed through the sample entropy (SampEn) algorithm. The complexity analysis results correspond to the bifurcation diagram and Lyapunov exponent spectrum, which shows that the SampEn algorithm can effectively reflect the complexity of a chaotic system. Moreover, the chaos diagrams of complexity under two-parameter and three-parameter variation are analyzed. The numerical simulation results indicate that the complexity under varying system parameters can effectively reflect the randomness of the fractional-order chaotic system, and that the system has rich dynamical behavior. This provides theoretical guidance and experimental evidence for the application of fractional-order memristive chaotic circuits in cryptography and secure communication. [ABSTRACT FROM AUTHOR]
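Sample entropy itself is easy to sketch. Below is a common simplified form of SampEn (template lengths m and m+1, Chebyshev tolerance r), applied to a periodic series and a logistic-map series, which stand in here for regular and chaotic circuit trajectories:

```python
# Simplified sample entropy: SampEn = -ln(A/B), where B and A count
# template pairs of length m and m+1 within Chebyshev tolerance r
# (self-matches excluded).  Low SampEn = regular, high SampEn = complex.
from math import log

def sample_entropy(x, m=2, r=0.2):
    def pairs(length):
        tpl = [x[i:i + length] for i in range(len(x) - length + 1)]
        return sum(1 for i in range(len(tpl)) for j in range(i + 1, len(tpl))
                   if max(abs(a - b) for a, b in zip(tpl[i], tpl[j])) <= r)
    return -log(pairs(m + 1) / pairs(m))

regular = [float(i % 4) for i in range(200)]   # period-4 series
chaotic, v = [], 0.1
for _ in range(200):                           # logistic map, r = 4 (chaotic)
    v = 4.0 * v * (1.0 - v)
    chaotic.append(v)

se_reg = sample_entropy(regular)
se_cha = sample_entropy(chaotic)
```

In the paper this statistic is evaluated over the circuit's simulated trajectories as system parameters vary, yielding the complexity chaos diagrams.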
- Published
- 2021
- Full Text
- View/download PDF
34. Anti-interference distributed energy-efficient for multi-carrier millimeter-wave ultra-dense networks.
- Author
-
He, Yun, Shen, Min, Zhang, Meng, Pang, Yucai, and Zeng, Fanhui
- Subjects
NASH equilibrium ,GAME theory ,ALGORITHMS ,NUMERICAL analysis ,COMPUTER simulation - Abstract
This paper investigates an anti-interference energy-efficient power allocation scheme in the multi-carrier millimeter-wave (mmWave) ultra-dense networks. To suppress the severe intercell-interference, this work proposes a novel interference minimization scheme based on the non-cooperative game theory for energy-efficiency maximization. In each best response, the non-convex problem is transformed into some convex subproblems, and each is solved by a low-complexity stair water-filling (SWF) algorithm over some subintervals. The interference minimization scheme, together with the SWF algorithm, has been proven to converge to a unique Nash equilibrium point. Simulation results and numerical analysis show that the scheme displays significant energy-efficiency performance advantages over other iterative water-filling methods. [ABSTRACT FROM AUTHOR]
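For reference, the textbook water-filling allocation that the paper's stair water-filling (SWF) variant refines can be sketched with a bisection on the water level; the channel gains and power budget below are illustrative, and the game-theoretic interference layer is omitted:

```python
# Classic water-filling: allocate a power budget across subcarriers with
# gains g_k so that p_k = max(0, mu - 1/g_k), where the water level mu
# is found by bisection so that the powers sum to the budget.

def water_filling(gains, budget, iters=60):
    lo, hi = 0.0, budget + max(1.0 / g for g in gains)
    for _ in range(iters):                 # bisect on the water level mu
        mu = 0.5 * (lo + hi)
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > budget:
            hi = mu
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - 1.0 / g) for g in gains]

p = water_filling([2.0, 1.0, 0.5], budget=3.0)   # stronger gain, more power
```

The "stair" refinement in the paper applies this idea piecewise over subintervals inside each best response of the non-cooperative game.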
- Published
- 2021
- Full Text
- View/download PDF
35. Comparison Between Simultaneous and Sequential Utilization of Safety and Efficacy for Optimal Dose Determination in Bayesian Model-Assisted Designs.
- Author
-
Li, Ran, Takeda, Kentaro, and Rong, Alan
- Subjects
SAFETY ,DRUG efficacy ,COMPUTER simulation ,CLINICAL drug trials ,DRUG dosage ,DRUG tolerance ,ANTINEOPLASTIC agents ,DRUG design ,PHARMACEUTICAL arithmetic ,COMPARATIVE studies ,DOSE-effect relationship in pharmacology ,TUMORS ,STATISTICAL models ,ONCOLOGY ,ALGORITHMS ,DRUG toxicity ,PHARMACODYNAMICS - Abstract
It has become quite common in recent early oncology trials to include both the dose-finding and the dose-expansion parts within the same study. This shift can be viewed as a seamless way of conducting the trials to obtain information on safety and efficacy, hence identifying an optimal dose (OD) rather than just the maximum tolerated dose (MTD). One approach is to conduct a dose-finding part based solely on toxicity outcomes, followed by a dose-expansion part to evaluate efficacy outcomes. Another approach employs only the dose-finding part, where the dose-finding decisions are made utilizing both the efficacy and toxicity outcomes of the enrolled patients. In this paper, we compared the two approaches through simulation studies under various realistic settings. The percentage of correct OD selections, the average number of patients allocated to the OD, and the average trial duration are reported to guide the choice of an appropriate design for early-stage dose-finding trials that include expansion cohorts. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
36. DOA estimation based on sum–difference coarray with virtual array interpolation concept.
- Author
-
Ding, Yarong, Ren, Shiwei, Wang, Weijiang, and Xue, Chengbo
- Subjects
INTERPOLATION ,ALGORITHMS ,DEGREES of freedom ,TOEPLITZ matrices ,COMPUTER simulation - Abstract
The sum–difference coarray is the union of the difference coarray and the sum coarray, and it can obtain a higher number of degrees of freedom (DOF) than the difference coarray alone. However, this method fails to use all the information provided by the coprime array because of the existence of holes. In this paper, we introduce virtual array interpolation into the sum–difference coarray domain. After interpolating the virtual array, we estimate the DOA by reconstructing the covariance matrix so as to resolve an atomic norm minimization problem in a gridless way. The proposed method is gridless and can effectively utilize the DOF of a larger virtual array. Numerical simulation results verify the effectiveness and the superior performance of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
37. Reliable Mark-Embedded Algorithm for Verifying Archived/Encrypted Image Contents in Presence Different Attacks with FEC Utilizing Consideration.
- Author
-
Nassar, Sabry S., Faragallah, Osama S., and El-Bendary, Mohsen A. M.
- Subjects
WIRELESS channels ,ADDITIVE white Gaussian noise channels ,PROBLEM solving ,ALGORITHMS ,IMAGE reconstruction algorithms ,COMPUTER simulation - Abstract
Given the wide spread of fake news based on image manipulation and its harmful effects, this paper investigates the performance of an efficient image-content verification and manipulation-detection algorithm in the presence of different attacks and a noisy wireless channel. The presented algorithm horizontally scans image blocks after segmenting the image into upper and lower partitions, each divided into an equal number of small blocks. Each block marks the opposite block in the other partition using one of several transforms: DFT, DCT, WHT, and DWT. The approach is evaluated under different attacks and over a noisy wireless channel to measure its reliability and robustness. The WHT-, DCT- and DWT-based variants performed well, and computer simulations revealed that the DWT-based approach yields a slight improvement in image quality. The algorithm's performance over the AWGN channel is poor at low SNR; to solve this problem, simple, low-complexity error-control schemes are utilized to decrease the SNR required for acceptable received-image quality. A merging algorithm is also proposed, combining the mark-encrypted-image and encrypt-marked-image verification approaches. Finally, computer simulations confirmed the reliability and robustness of the DWT-based approach and its high sensitivity to any image manipulation across the different test scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
38. Compressive RCS Measurements.
- Author
-
Zhao, Guoqiang, Li, Shiyong, Li, Zhangfeng, Liu, Fang, and Sun, Houjun
- Subjects
RADAR cross sections ,COMPRESSED sensing ,ACQUISITION of data ,COMPUTER simulation ,ALGORITHMS - Abstract
A compressive radar cross section (RCS) measurement method is presented in this paper. This method relies on the theory of compressive sensing (CS). We first show that the RCS data have sparse expansions in some proper basis. According to the theory of CS, the full RCS data can be recovered from the partial measured data by convex optimization algorithms. Comparisons of the compressive measurement method and the traditional measurement method are demonstrated by means of numerical simulations as well as by real data measured in the outdoor range. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
39. Turbo decoding of simple product codes in a two user binary adder channel employing the Bahl-Cocke-Jelinek-Raviv algorithm.
- Author
-
de Souza, I. M. M., Alcoforado, M. L. M. G., and da Rocha, V. C.
- Subjects
ALGORITHMS ,DECODERS & decoding ,CODING theory ,RANDOM noise theory ,COMPUTER simulation - Abstract
The main goal in this paper is an investigation of the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm applied in a turbo decoding scheme. Binary product codes are employed in a turbo coding scheme and the channel model considered is the two user binary adder channel (2-BAC) with additive white Gaussian noise. A trellis for two users is constructed for a pair of product codes tailored for use in the 2-BAC in order to employ the BCJR decoding algorithm. Computer simulation is employed to show that product codes on the 2-BAC, employing low-complexity component codes, produces considerable gain with few iterations under iterative BCJR decoding. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
40. SpringBoard: game-agnostic tool for scenario editing with meta-programming support.
- Author
-
Petrovic, Gajo and Fujita, Hamido
- Subjects
COMPUTER software ,EDUCATIONAL games ,COMPUTER simulation ,META-analysis ,ALGORITHMS - Abstract
Although we have recently seen an increase in good, free game-engine editors, general-purpose scenario (level) editors are still lagging behind in terms of functionality and ease of use. Using them to create game scenarios can be difficult, as they often expose general engine capabilities instead of limiting the toolset to fit game-specific requirements. They often require programming skills to use, which introduces additional user-skill requirements, and configuring them for a specific game can be equally difficult. In this paper we present SpringBoard, an open-source scenario editor for games using the SpringRTS engine. Extending it to support game and level requirements is achieved with multi-level meta-programming, while still providing a system that is integrated with the GUI editor and therefore intuitive to use. Our meta-programming system supports trigger elements (events, functions and actions), custom (composite) data types, scoped data access, higher-order functions and actions, and data synchronization mechanics. This novel approach allows us to have the full expressiveness of the underlying programming language, while exposing a user-friendly GUI that consists of terminology familiar to the domain expert. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
41. Input-to-state stability for a class of discrete-time nonlinear input-saturated switched descriptor systems with unstable subsystems.
- Author
-
Liu, Yunlong, Wang, Juan, Gao, Cunchen, Tang, Shuhong, and Gao, Zairui
- Subjects
COMPUTER simulation ,NONLINEAR systems ,DISCRETE-time systems ,CLOSED loop systems ,ALGORITHMS - Abstract
This paper concerns the input-to-state stability (ISS) problem for a class of discrete-time nonlinear input-saturated switched descriptor systems (SDSs). An ISS criterion requiring only that some of the subsystems be exponentially stable is provided, based on the average dwell time method and a discrete-time iterative algorithm. The difficulty of the proof is greatly decreased, and the switching controllers for the subsystems of the closed-loop SDSs are much simpler and more viable. Furthermore, the cost of the controllers is also greatly reduced. Finally, extensive simulation results are presented to illustrate the effectiveness of the developed method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
42. An improved multi-objective optimization-based CICA method with data-driver temporal reference for group fMRI data analysis.
- Author
-
Shi, Yuhu, Zeng, Weiming, Tang, Xiaoyan, Kong, Wei, and Yin, Jun
- Subjects
FUNCTIONAL magnetic resonance imaging ,INDEPENDENT component analysis ,DIAGNOSTIC imaging ,DATA analysis ,DATA extraction ,ALGORITHMS ,BRAIN ,COMPUTER simulation ,FACTOR analysis ,DIGITAL image processing ,MAGNETIC resonance imaging ,RESEARCH funding - Abstract
Group independent component analysis (GICA) has been successfully applied to study multi-subject functional magnetic resonance imaging (fMRI) data, and the group independent component (GIC) represents the commonality of all subjects in the group. However, some studies show that the performance of GICA can be improved by incorporating a priori information, which is not always considered when looking for GICs in existing GICA methods. In this paper, we propose an improved multi-objective optimization-based constrained independent component analysis (CICA) method to take advantage of the temporal a priori information extracted from all subjects in the group by incorporating it into the computational process of GICA for group fMRI data analysis. The experimental results of simulated and real data show that the activated regions and the time course detected by the improved CICA method are more accurate in some sense. Moreover, the GIC computed by the improved CICA method has a higher correlation with the corresponding independent component of each subject in the group, which means that the improved CICA method with the temporal a priori information extracted from the group can better reflect the commonality of the subjects. These results demonstrate that the improved CICA method has its own advantages in fMRI data analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
43. Model reference self-learning fuzzy control method for automated mechanical clutch.
- Author
-
Yong Chen, Xiangyu Wang, Kai He, and Cai Yang
- Subjects
FUZZY control systems ,AUTOMOBILE clutches ,ALGORITHMS ,COMPUTER simulation ,MECHANICAL wear - Abstract
The automated mechanical clutch is a critical component in the vehicle powertrain, whose operation has a great influence on fuel economy, comfort, and drivability. However, the control of the clutch is a challenging problem due to nonlinearities and uncertainties, and it has attracted extensive attention. This paper proposes a model-reference self-learning fuzzy control method that accounts for clutch wear. Fuzzy control is a simple but effective method for dealing with nonlinear systems, but fixed fuzzy rules cannot accommodate the system variations caused by clutch wear. Therefore, a self-learning algorithm based on a reference model is introduced into the fuzzy controller. Simulations carried out in MATLAB/SIMULINK show that the method handles clutch wear effectively compared with fuzzy control without self-learning ability. Finally, a test bench is designed and experiments are carried out to verify the validity of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
44. Modeling of semi-competing risks by means of first passage times of a stochastic process.
- Author
-
Sildnes, Beate and Lindqvist, Bo Henry
- Subjects
STOCHASTIC processes ,DISEASE relapse ,COPULA functions ,INFERENTIAL statistics ,COMPETING risks ,ALGORITHMS ,BIOMETRY ,BONE marrow transplantation ,COMPUTER simulation ,PROBABILITY theory ,RISK assessment ,STATISTICS ,RELATIVE medical risk ,STATISTICAL models - Abstract
In semi-competing risks one considers a terminal event, such as death of a person, and a non-terminal event, such as disease recurrence. We present a model where the time to the terminal event is the first passage time to a fixed level c in a stochastic process, while the time to the non-terminal event is represented by the first passage time of the same process to a stochastic threshold S, assumed to be independent of the stochastic process. In order to be explicit, we let the stochastic process be a gamma process, but other processes with independent increments may alternatively be used. For semi-competing risks this appears to be a new modeling approach, being an alternative to traditional approaches based on illness-death models and copula models. In this paper we consider a fully parametric approach. The likelihood function is derived and statistical inference in the model is illustrated on both simulated and real data. [ABSTRACT FROM AUTHOR]
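The model can be sketched by simulating a discretized gamma process and recording the two first-passage times; the threshold value, shape/scale parameters, and time grid below are illustrative assumptions, and the paper's likelihood-based inference is not shown:

```python
# Semi-competing risks via first passage of a gamma process: the
# terminal event is the first crossing of the fixed level c, the
# non-terminal event the first crossing of a threshold s < c.
import random

def first_passage_times(c, s, shape=1.0, scale=1.0, dt=0.01,
                        t_max=200.0, rng=random):
    """Simulate one subject; return (non-terminal time, terminal time),
    either of which is None if the level is not crossed before t_max."""
    level, t = 0.0, 0.0
    t_s = t_c = None
    while t < t_max and t_c is None:
        t += dt
        level += rng.gammavariate(shape * dt, scale)  # independent increment
        if t_s is None and level >= s:
            t_s = t
        if level >= c:
            t_c = t
    return t_s, t_c

rng = random.Random(42)
t_recur, t_death = first_passage_times(c=5.0, s=2.0, rng=rng)
```

Because the process is nondecreasing, the non-terminal crossing never occurs after the terminal one, which reproduces the ordering structure of semi-competing risks; in the paper the threshold S is itself random and independent of the process.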
- Published
- 2018
- Full Text
- View/download PDF
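The model can be sketched in a few lines by discretizing the gamma process: the terminal event is the first passage to a fixed level c, the non-terminal event the first passage to an independent random threshold S. The parameter values and the exponential threshold distribution below are illustrative assumptions, not the paper's.

```python
import numpy as np

def simulate_passage_times(c=5.0, a=1.0, b=1.0, dt=0.01, t_max=100.0, seed=0):
    """Simulate one path of a gamma process and return the first passage
    time to an independent random threshold S (non-terminal event), the
    first passage time to the fixed level c (terminal event), and S."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    # Gamma process: independent Gamma(a*dt, scale 1/b) increments
    x = np.cumsum(rng.gamma(shape=a * dt, scale=1.0 / b, size=n))
    t = dt * np.arange(1, n + 1)
    s = rng.exponential(scale=c / 2.0)  # threshold S, independent of the path
    i_s = np.searchsorted(x, s)
    i_c = np.searchsorted(x, c)
    t_nonterminal = t[i_s] if i_s < n else np.inf
    t_terminal = t[i_c] if i_c < n else np.inf
    return t_nonterminal, t_terminal, s

# The path is non-decreasing, so whenever S < c the non-terminal event
# occurs no later than the terminal one; if S >= c it cannot precede it.
tn, tt, s = simulate_passage_times(seed=1)
assert (s >= 5.0) or (tn <= tt)
```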
45. Computational Intelligence for Medical Imaging Simulations.
- Author
-
Chang, Victor
- Subjects
BRAIN physiology ,TUMOR genetics ,ALGORITHMS ,ARTIFICIAL intelligence ,COMPUTER simulation ,DIAGNOSTIC imaging - Abstract
This paper describes how to simulate medical imaging with computational intelligence in order to explore areas that cannot easily be reached by traditional methods, including gene and protein simulations related to cancer development and immunity. It presents simulations and virtual inspections of BIRC3, BIRC6, CCL4, KLKB1 and CYP2A6, with their outputs and explanations, as well as brain segment intensity due to dancing. The proposed MapReduce framework with the fusion algorithm can simulate medical imaging. The concept is similar to digital surface theories, simulating how biological units combine into larger units until the entire biological subject is formed. The M-Fusion and M-Update functions of the fusion algorithm achieve good performance, processing and visualizing up to 40 GB of data within 600 s. We conclude that computational intelligence can provide effective and efficient healthcare research through simulation and visualization. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
46. Unequal modulus decomposition and modified Gerchberg Saxton algorithm based asymmetric cryptosystem in Chirp-Z transform domain.
- Author
-
Sachin, Sachin, Kumar, Ravi, and Singh, Phool
- Subjects
IMAGE encryption ,ALGORITHMS ,COMPUTER simulation ,CAMERA operators ,COMPARATIVE studies - Abstract
In this paper, we present an image encryption method in the Chirp-Z transform domain using unequal modulus decomposition (UMD) and a modified Gerchberg–Saxton (GS) algorithm. The proposed encryption scheme is highly sensitive to the encryption keys, and the modified GS algorithm introduces an additional layer of security. The validity of the proposed method is tested on various grayscale and binary images, and numerical simulation results are demonstrated for the 'Cameraman', 'Medical' and binary 'CUH' images. The presented results confirm the robustness of the proposed method against various existing attacks, such as the noise attack, special attack, statistical attack, and brute-force attack. A comparative analysis with similar existing methods is also performed, and the enhanced security and efficiency of the proposed method are verified. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
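The decomposition idea behind UMD can be illustrated with a toy split of a complex field into two masks of (generally) unequal moduli whose sum reconstructs the field exactly. This sketch uses a random complex mask in the spatial domain rather than the paper's Chirp-Z-domain construction; in schemes of this family, one mask then serves as the private decryption key.

```python
import numpy as np

def umd_split(field, seed=0):
    """Toy unequal modulus decomposition: split a complex field E into
    two complex masks P1, P2 with E = P1 + P2 and, in general,
    |P1| != |P2| elementwise."""
    rng = np.random.default_rng(seed)
    p1 = rng.random(field.shape) * np.exp(2j * np.pi * rng.random(field.shape))
    p2 = field - p1
    return p1, p2

# Round trip: an 8x8 complex "image" is split and exactly recombined
rng = np.random.default_rng(42)
e = rng.random((8, 8)) * np.exp(2j * np.pi * rng.random((8, 8)))
p1, p2 = umd_split(e)
assert np.allclose(p1 + p2, e)
```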
47. Single-pixel imaging of dynamic objects using multi-frame motion estimation.
- Author
-
Monin, Sagi, Hahamovich, Evgeny, and Rosenthal, Amir
- Subjects
DETECTORS ,DATA acquisition systems ,PIXELS ,ALGORITHMS ,COMPUTER simulation - Abstract
Single-pixel imaging (SPI) enables the visualization of objects with a single detector by using a sequence of spatially modulated illumination patterns. For natural images, the number of illumination patterns may be smaller than the number of pixels when compressed-sensing algorithms are used. Nonetheless, the sequential nature of the SPI measurement requires that the object remains static until the signals from all the required patterns have been collected. In this paper, we present a new approach to SPI that enables imaging scenarios in which the imaged object, or parts thereof, moves within the imaging plane during data acquisition. Our algorithms estimate the motion direction from inter-frame cross-correlations and incorporate it in the reconstruction model. Moreover, when the illumination pattern is cyclic, the motion may be estimated directly from the raw data, further increasing the numerical efficiency of the algorithm. A demonstration of our approach is presented for both numerically simulated and measured data. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
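The inter-frame motion estimation step can be sketched with FFT-based cross-correlation of two reconstructed frames: the correlation peak location gives the in-plane shift. The frame contents and sizes below are illustrative, not the paper's data.

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the circular (dy, dx) shift mapping frame_a onto frame_b
    from the peak of their FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(frame_a)) * np.fft.fft2(frame_b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the halfway point wrap around to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Recover a known in-plane motion between two low-resolution frames
rng = np.random.default_rng(0)
frame = rng.random((32, 32))
moved = np.roll(frame, shift=(3, -5), axis=(0, 1))
assert estimate_shift(frame, moved) == (3, -5)
```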
48. A Robust Quantum Watermark Algorithm Based on Quantum Log-polar Images.
- Author
-
Qu, Zhiguo, Cheng, Zhenwen, Luo, Mingxing, and Liu, Wenjie
- Subjects
QUANTUM information science ,COMPUTER simulation ,QUBITS ,DIGITAL image watermarking ,ALGORITHMS - Abstract
Copyright protection for quantum images is an important research branch of quantum information technology. In this paper, a new quantum watermark algorithm based on the quantum log-polar image (QUALPI) is proposed to better protect the copyright of quantum images. To embed the watermark, the least significant qubit (LSQb) of the quantum carrier image is replaced by the quantum watermark image. The new algorithm is practical for designing the quantum circuits that embed and extract the watermark image. Compared with previous quantum watermark algorithms, the new algorithm effectively exploits two important properties of log-polar sampling, namely rotation and scale invariance. These invariances give the extracted watermark good robustness when the stego image is subjected to various geometric attacks, such as rotation, flipping, scaling and translation. Experimental simulation in MATLAB shows that the new algorithm performs well in terms of robustness, transparency and capacity. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
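LSQb embedding is the quantum analogue of classical least-significant-bit watermarking. A purely classical sketch (no quantum circuits, no log-polar sampling) illustrates the embed/extract round trip the abstract describes.

```python
import numpy as np

def embed_lsb(carrier, watermark_bits):
    """Replace the least significant bit of each 8-bit carrier pixel with
    a binary watermark bit (classical analogue of LSQb embedding)."""
    return (carrier & 0xFE) | watermark_bits

def extract_lsb(stego):
    """Recover the watermark from the least significant bits."""
    return stego & 0x01

rng = np.random.default_rng(0)
carrier = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
watermark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)
stego = embed_lsb(carrier, watermark)
# Extraction is exact, and each pixel changes by at most one gray level
assert np.array_equal(extract_lsb(stego), watermark)
assert np.max(np.abs(stego.astype(int) - carrier.astype(int))) <= 1
```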
49. A new joint channel equalization and estimation algorithm for underwater acoustic channels.
- Author
-
Li, Bo, Yang, Hongjuan, Liu, Gongliang, and Peng, Xiyuan
- Subjects
UNDERWATER acoustics ,ADAPTIVE equalization ,ESTIMATION theory ,COMPUTER simulation ,LEAST squares ,ALGORITHMS - Abstract
The underwater acoustic channel (UAC) is one of the most challenging communication channels, owing to its complex multipath propagation and absorption as well as variable ambient noise. Although adaptive equalization can effectively eliminate inter-symbol interference (ISI) with the help of training sequences, the convergence rate of equalization decreases markedly in sparse UACs. Channel estimation algorithms, on the other hand, can roughly determine the channel impulse response and other channel parameters through specific mathematical criteria. In this paper, a typical channel estimation method, the least squares (LS) algorithm, is applied in adaptive equalization to obtain the initial tap weights of the least mean square (LMS) algorithm. Simulation results show that the proposed method significantly enhances the convergence rate of the LMS algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
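The LS estimation step can be sketched as a least-squares fit of the channel impulse response to a known training sequence. The channel taps, training length, and noise level below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def ls_channel_estimate(train, received, order):
    """Least-squares estimate of a length-`order` channel impulse
    response from a known training sequence and the received signal."""
    n = len(received)
    # Convolution matrix: X[i, k] = train[i - k] (zero before the start)
    X = np.zeros((n, order))
    for k in range(order):
        X[k:, k] = train[:n - k]
    h_hat, *_ = np.linalg.lstsq(X, received, rcond=None)
    return h_hat

# Known sparse-ish channel, BPSK training sequence, mild additive noise
rng = np.random.default_rng(0)
h = np.array([1.0, 0.0, 0.45, 0.0, 0.2])
train = rng.choice([-1.0, 1.0], size=200)
received = np.convolve(train, h)[:len(train)] + 0.01 * rng.normal(size=len(train))
h_hat = ls_channel_estimate(train, received, order=len(h))
assert np.allclose(h_hat, h, atol=0.05)
```

In the paper's scheme, an estimate like `h_hat` is then used to initialize the LMS equalizer's tap weights, accelerating convergence relative to a zero initialization.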
50. A new fuzzy K-EVD orthogonal complement space clustering method.
- Author
-
Wen, Jiechang, Liu, Hailin, Zhang, Suxian, and Xiao, Mingqing
- Subjects
FUZZY clustering technique ,BLIND source separation ,PROTOTYPES ,COMPUTER simulation ,DECOMPOSITION method ,ALGORITHMS ,MATRICES (Mathematics) ,ORTHOGONAL systems - Abstract
This paper studies the problem of underdetermined blind source separation under a non-strictly sparse condition. Unlike current approaches in the literature, we propose a new and more effective algorithm for estimating the mixing matrices from noisy output data sets. After introducing a clustering prototype in the orthogonal complement space and extending the normal-vector clustering prototype, we present a new method that combines fuzzy clustering with the eigenvalue decomposition technique to estimate the mixing matrix under the non-strictly sparse condition. A convergent algorithm for estimating the mixing matrices is established, and numerical simulations demonstrate the effectiveness of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
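The eigenvalue-decomposition step can be illustrated in the strictly sparse special case, where mixture samples lie exactly on the lines spanned by the mixing matrix's columns and the principal eigenvector of each cluster's scatter matrix recovers a column direction. The hard cluster labels are assumed known here, so the paper's fuzzy clustering and orthogonal-complement machinery are deliberately omitted; this isolates only the EVD idea.

```python
import numpy as np

rng = np.random.default_rng(0)
# Mixing matrix: 2 sensors, 3 sources (underdetermined), unit-norm columns
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
A /= np.linalg.norm(A, axis=0)

# Strictly sparse sources: exactly one source active per sample
n = 300
active = rng.integers(0, 3, size=n)
S = np.zeros((3, n))
S[active, np.arange(n)] = rng.normal(size=n)
X = A @ S  # mixtures lie on the lines spanned by A's columns

est = []
for j in range(3):
    cluster = X[:, active == j]      # hard clustering (labels known here)
    scatter = cluster @ cluster.T    # 2x2 scatter matrix
    vals, vecs = np.linalg.eigh(scatter)
    est.append(vecs[:, -1])          # principal eigenvector = column direction

# Each estimate matches the corresponding column of A up to sign
for j in range(3):
    assert abs(abs(est[j] @ A[:, j]) - 1.0) < 1e-8
```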