2,369 results on '"Randomness"'
Search Results
2. Perfecting the Operative Management of the Performance Efficiency of Electric Power System's Unique Facilities.
- Author
-
Farhadzadeh, E. M., Muradaliyev, A. Z., Rafiyeva, T. K., and Ashurova, U. K.
- Abstract
Maintaining the efficient operation of facilities that have been in service beyond their standard service life is among the most significant and complex problems faced by modern electric power systems. The urgency of this problem stems from the steadily growing share of such facilities, which has now reached 60%. The difficulty of solving it lies in the need to develop methods and algorithms for quantitatively evaluating operative performance efficiency (OPE). A method and an algorithm are proposed for quantitatively estimating the integral OPE indicator for unique facilities, i.e., facilities that have no analogs for the specified combination of attribute varieties. The proposed approach yields not only error-free estimates of technical and economic indicators (TEIs) but also, most importantly, a physical interpretation of the integral indicator. The multidimensional nature of the monthly average TEI values and the nonrandom nature of samples from the totality of TEIs seriously limit the application of classic hypothesis tests. A new criterion that overcomes these obstacles is developed; the critical values of the integral indicators appearing in this criterion are determined by simulating possible realizations of those indicators. The result is a smaller risk of an erroneous decision on the maintenance of unique facilities, which ensures reliable methodical support for enterprise management staff. The sequence of calculations is illustrated for a gas- and oil-fired 2400-MW condensing thermal power plant (CTPP). To keep the data transformations compact while retaining the possibility of comparing them with the results of a quantitative OPE estimation for facilities of the same type, the calculations are carried out only for certain levelized monthly average TEI values of the CTPP power-unit boiler plants. Using the monthly average values of unique old power facilities, so-called oldtechs (UPOTs), makes it possible to monitor their variation over time, and the changeover from actual to normalized values makes it possible to estimate the change in a UPOT's technical state from its average wear and degree of maladjustment. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
3. Flight parameter calculation method of multi-projectiles using temporal and spatial information constraint
- Author
-
Xiao-qian Zhang and Han-shan Li
- Subjects
Computer science ,Projectile ,Mechanical Engineering ,Nuclear Theory ,Metals and Alloys ,Computational Mechanics ,Projectile motion ,Wavelet transform ,Function (mathematics) ,Constraint (information theory) ,Position (vector) ,Ceramics and Composites ,Nuclear Experiment ,Algorithm ,Blossom algorithm ,Randomness - Abstract
The dynamic parameters of multiple projectiles fired from multi-barrel weapons in high-frequency continuous firing modes are important indicators of weapon performance. Because many projectiles are launched with high randomness in a short period of time, it is very difficult to obtain their real dispersion parameters due to occlusion or coincidence among projectiles. Using a six-intersecting-screen testing system, we propose in this paper an association recognition and matching algorithm for multiple projectiles based on a temporal and spatial information constraint mechanism. We extract the output signal of each detection screen and process it with the wavelet transform. We present a method that identifies and extracts the time values at which the projectiles pass through the detection screens using wavelet transform modulus maximum theory. We then use the correlation among the output signals of three parallel detection screens to establish a correlation-coefficient recognition constraint function for the multiple projectiles. Under the assumption of linear projectile motion, we establish a temporal and spatial constraint matching model using each projectile's position coordinates in each detection screen and its time constraints within the multiple intersecting-screen geometry. We then determine the time values of the multiple projectiles in each detection screen by an iterative search-cycle registration and finally obtain the flight parameters of the multiple projectiles in the presence of uncertainty. The proposed method and algorithm were verified experimentally and can solve the problem of uncertainty in projectile flight parameters under different multiple-projectile firing states.
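The modulus-maximum step described above can be illustrated with a minimal sketch: a Haar-style wavelet response is computed for a synthetic screen signal, and local maxima of its modulus above a threshold are taken as crossing times. All parameters here (pulse positions, scale, threshold) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def haar_response(signal, scale):
    """Haar-wavelet response at one scale: right-window minus left-window sum."""
    w = np.concatenate([np.ones(scale), -np.ones(scale)]) / np.sqrt(2.0 * scale)
    return np.convolve(signal, w, mode="same")

def modulus_maxima_times(signal, scale=8, rel_threshold=0.5):
    """Sample indices where |W(t)| is a local maximum above a relative threshold."""
    c = np.abs(haar_response(signal, scale))
    thr = rel_threshold * c.max()
    return np.array([t for t in range(1, len(c) - 1)
                     if c[t] > thr and c[t] > c[t - 1] and c[t] >= c[t + 1]])

# Synthetic detection-screen output: two projectile pulses plus noise.
rng = np.random.default_rng(0)
t = np.arange(1000)
sig = rng.normal(0.0, 0.05, t.size)
for t0 in (200, 640):
    sig += np.exp(-0.5 * ((t - t0) / 4.0) ** 2)
crossing_times = modulus_maxima_times(sig)
```

In the real system the extracted indices would then feed the correlation-coefficient constraint across the three parallel screens.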
- Published
- 2023
4. Advanced Tailored Randomness: A Novel Approach for Improving the Efficacy of Biological Systems.
- Author
-
Ilan, Yaron
- Subjects
- *
BIOLOGICAL systems , *BIOPHYSICS , *SECOND law of thermodynamics , *QUANTUM theory , *DEFINITIONS , *QUANTUM thermodynamics - Abstract
Improving the function of biological systems is a significant task in the effort to control health conditions and slow the aging process. Applying the laws of physics to a biological system is difficult because of the many parameters that must be considered at the cellular and whole-organ levels. The second law of thermodynamics states that entropy, a measure of randomness in an isolated system, increases over time. Based on this concept, randomness has been suggested as a means by which the efficacy of isolated biological systems may be improved. While classical and quantum physics use different definitions of randomness and complexity, biological randomness is an essential component of the intrinsic unpredictability of life. The manifestation of biological randomness may differ between individuals, leading to differences in patient outcomes. In this work, an approach for enhancing the effectiveness of biological systems based on a novel concept of advanced tailored randomness in patient care is presented. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
5. Synchronization of Complex Dynamical Networks Subject to Noisy Sampling Interval and Packet Loss
- Author
-
Zhipei Hu, Peng Shi, and Hongru Ren
- Subjects
Computer Networks and Communications ,Computer science ,Stochastic process ,Sampling (statistics) ,Sampling error ,Computer Science Applications ,Artificial Intelligence ,Packet loss ,Control theory ,Synchronization (computer science) ,Categorical distribution ,Probability distribution ,Algorithm ,Software ,Randomness - Abstract
This article addresses the sampled-data synchronization problem for a class of complex dynamical networks (CDNs) subject to noisy sampling intervals and successive packet losses. The sampling intervals are subject to noisy perturbations, and a categorical distribution is used to characterize the resulting sampling errors. By means of the input delay approach, the CDN under consideration is first converted into a delay system whose delayed input exhibits dual randomness and a probability distribution characteristic. To verify this probability distribution characteristic of the delayed input, a novel characterization method is proposed that differs from those in the existing literature, and a unified framework is then established on this basis. Using techniques of stochastic analysis, a probability-distribution-dependent controller is designed to guarantee mean-square exponential synchronization of the error dynamical network. Subsequently, a special model is considered in which only the lower and upper bounds of the delayed input are utilized. Finally, a numerical example and an example based on Chua's circuit are given to verify the analysis and to demonstrate the effectiveness and superiority of the designed synchronization algorithm.
- Published
- 2022
6. STATER: Slit-Based Trajectory Reconstruction for Dense Urban Network With Overlapping Bluetooth Scanning Zones
- Author
-
Md. Mazharul Haque, Michael E. Cholette, Chintan Advani, and Ashish Bhaskar
- Subjects
Scanner ,Computer simulation ,Computer science ,Mechanical Engineering ,Computer Science Applications ,Bluetooth ,Proof of concept ,Automotive Engineering ,Shortest path problem ,Path (graph theory) ,Trajectory ,Algorithm ,Randomness - Abstract
The availability of big data from Bluetooth MAC Scanners (BMS) across a network makes it possible to trace the movement of individual Bluetooth-equipped vehicles. However, a BMS might not perfectly detect all devices within its detection zone, and in dense urban networks the scanning zones can overlap significantly, which complicates detailed reconstruction of vehicle trajectories. To address this need, this paper proposes a Slit-based Trajectory Reconstruction (STATER) algorithm in which a slit is defined for each BMS that accounts for its overlap and connectivity with other BMSs, and the trajectory is then reconstructed as the shortest path over the observed sequence of slits. A numerical simulation framework is proposed to thoroughly test the STATER algorithm at various levels of ambiguity and randomness in the input dataset. The testing results indicate that the reconstructed trajectories capture more than 90% of the actual path (true positives) with an average error (false positives) of 11.3% across the randomness levels considered in the experiments. As a proof of concept, STATER is applied to one day of data from the entire Brisbane network comprising 0.56 million trips, and its computational performance supports its practical applicability.
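The slit-sequence stitching described above reduces to shortest-path searches between consecutive observations. The following stand-alone sketch (toy graph with hypothetical node names, not the STATER implementation) illustrates that step with Dijkstra's algorithm:

```python
import heapq

def dijkstra(graph, src, dst):
    """graph: {node: [(neighbor, cost), ...]} -> (cost, path) via a min-heap."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

def reconstruct(graph, slit_sequence):
    """Stitch shortest paths between consecutive slit observations."""
    full = [slit_sequence[0]]
    for a, b in zip(slit_sequence, slit_sequence[1:]):
        _, seg = dijkstra(graph, a, b)
        full.extend(seg[1:])
    return full

# Hypothetical toy road network; letters stand for slit nodes.
G = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
     "C": [("D", 1)], "D": []}
path = reconstruct(G, ["A", "C", "D"])
```

Here a vehicle observed at slits A, C, D is inferred to have traversed the cheaper detour through B between A and C.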
- Published
- 2022
7. Joint Activity Detection and Channel Estimation in Massive MIMO Systems With Angular Domain Enhancement
- Author
-
Lei Sun, Wei Chen, Bo Ai, and Han Xiao
- Subjects
Computer science ,Applied Mathematics ,MIMO ,Bayesian inference ,Computer Science Applications ,Compressed sensing ,Code (cryptography) ,Overhead (computing) ,Wireless ,Electrical and Electronic Engineering ,Algorithm ,Randomness ,Communication channel - Abstract
Supporting massive connectivity for sporadically active devices is a challenging task, as channel randomness and the large number of users lead to an enormous increase in communication overhead. In contrast to existing methods that differentiate users in resources such as time, frequency, and code, we propose a new joint activity detection and channel estimation framework for massive multiple-input multiple-output (MIMO) systems, in which angular-domain information about active users is exploited to enhance both tasks. By exploiting the sporadic activity of users and the angular spread of the wireless signals, activity detection and channel estimation are formulated as a compressive sensing problem with multiple measurement vectors, which has a simultaneously row-sparse and clustered sparse structure. The sizes and positions of the nonzero clusters are arbitrary, which poses new challenges for algorithm derivation. To this end, we develop new algorithms based on sparse Bayesian learning, in which novel hyper-priors capture the structural signal characteristics and appropriate approximations facilitate the derivations. Numerical experiments demonstrate the improved activity detection and channel estimation performance of the proposed approach in comparison with existing methods.
- Published
- 2022
8. Toward Refined Nash Equilibria for the SET K-COVER Problem via a Memorial Mixed-Response Algorithm
- Author
-
Wei Sun, Huaxin Qiu, Qingrui Zhou, Changhao Sun, and Xiaochu Wang
- Subjects
Computer science ,Computation ,Partition (database) ,Computer Science Applications ,Human-Computer Interaction ,Cover (topology) ,Control and Systems Engineering ,Nash equilibrium ,Sensor node ,Convergence (routing) ,Electrical and Electronic Engineering ,Algorithm ,Wireless sensor network ,Software ,Randomness - Abstract
Area coverage and network lifetime are two conflicting objectives in the design of a wireless sensor network (WSN). A satisfactory balance can be achieved by deploying abundant sensor nodes randomly and dividing them into k exclusive cover sets. Toward more efficient self-organized partition, we address the problem from the perspective of networked potential games and propose a memorial mixed-response algorithm (MMRA), which is implemented in a distributed and synchronous manner. Viewed as a game player, each sensor node first updates its memory with a temporary action generated by a mixed-response rule; the coordination then evolves into the next iteration with each player drawing an action from its memory uniformly at random. We prove that our algorithm converges with probability 1 to a convention of Nash equilibria, with a worst-case approximation ratio strictly larger than 0.5. Moreover, a tradeoff between solution efficiency and computation time can be achieved by adjusting the amount of randomness introduced via the memory length m and the probability pₘ: better partition results are more likely with a larger m and a smaller pₘ. Comparisons with existing distributed methods demonstrate the superiority of our method in terms of solution refinement and convergence speed.
- Published
- 2022
9. Application of heuristic algorithms for design optimization of industrial heat pump
- Author
-
Hong Wone Choi, Bongsu Choi, Gilbong Lee, Junhyun Cho, Min Soo Kim, and Bong Seong Oh
- Subjects
Computer science ,Heuristic (computer science) ,Mechanical Engineering ,Particle swarm optimization ,Building and Construction ,Refrigerant ,Consistency (statistics) ,Simulated annealing ,Genetic algorithm ,Algorithm ,Randomness ,Heat pump - Abstract
The design parameters of heat pumps are related to one another nonlinearly or in otherwise complicated ways; it is therefore difficult to determine the optimal combination of design parameters, such as superheat, subcooling, and refrigerant type, analytically. To address this limitation, three representative heuristic algorithms, namely the genetic algorithm (GA), particle swarm optimization (PSO), and simulated annealing (SA), are applied to optimize a heat pump under the given process conditions. Because heuristic algorithms are driven by randomness, the consistency of the calculation results and the computational time serve as the decision criteria for selecting an appropriate optimizer. The GA is unsuitable as a heat pump optimizer because it requires an excessive number of iterations. In contrast, PSO and SA offer similar consistency and calculation time with a rational number of iterations. PSO exhibits slightly better consistency and use of computational resources and is therefore selected as the heat pump design optimization algorithm in this study. The novelty of this work is that the related design parameters of the heat pump are simultaneously and globally optimized with minimal physical background, and the heuristic algorithm most applicable to heat pump design optimization is identified.
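As a rough illustration of the kind of randomness-driven search being compared, here is a minimal simulated-annealing sketch on a stand-in objective with a known minimum at (superheat, subcooling) = (5, 3). The objective, bounds, and cooling schedule are illustrative assumptions, not the paper's heat-pump model.

```python
import math
import random

def simulated_annealing(f, x0, bounds, t0=1.0, cooling=0.95, iters=400, seed=1):
    """Minimize f over a box; accept worse moves with Boltzmann probability."""
    rng = random.Random(seed)
    x, fx, temp = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        # Gaussian perturbation, clamped to the box.
        cand = tuple(min(hi, max(lo, xi + rng.gauss(0.0, 0.2 * (hi - lo))))
                     for xi, (lo, hi) in zip(x, bounds))
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(temp, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling
    return best, fbest

# Hypothetical stand-in objective, not a real heat-pump performance model.
obj = lambda v: (v[0] - 5.0) ** 2 + (v[1] - 3.0) ** 2
sol, val = simulated_annealing(obj, (0.0, 0.0), [(0.0, 15.0), (0.0, 10.0)])
```

The paper's point about consistency corresponds to running such a search repeatedly and checking the spread of `val` across seeds.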
- Published
- 2022
10. A More Accurate and Stable Batch Authentication Protocol for Large-Scale RFID Systems
- Author
-
Zhenguo Gao, Weidong Xiang, Haijun Wang, Scott Chih-Hao Huang, Liling Fan, and Yan Chen
- Subjects
Set (abstract data type) ,Identification (information) ,Authentication ,Control and Systems Engineering ,Computer science ,Authentication protocol ,Scale (descriptive set theory) ,Bloom filter ,Electrical and Electronic Engineering ,Protocol (object-oriented programming) ,Algorithm ,Randomness - Abstract
In this letter, a batch authentication protocol called accurate bloom-filter-based batch authentication (ABBA) is presented for large-scale radio-frequency identification (RFID) systems. Like its predecessor BBA, ABBA can estimate the number of invalid tags contained in an unknown tag set in one communication round by exploiting a bloom filter (BF) vector constructed distributively by the tags. Unlike BBA, however, ABBA adopts a novel BF vector construction method in which the randomness caused by valid tags is removed, so the estimate is guaranteed to be more accurate and stable than that of BBA. Both theoretical analysis and extensive simulations confirm the superiority of ABBA over BBA.
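The flavor of BF-based invalid-tag estimation can be sketched as follows. This is a loose illustration of the idea, not the ABBA protocol: the reader precomputes the filter bits that valid tags could set, attributes observed bits outside that set to invalid tags, and applies a crude collision correction. Filter size, hash construction, and tag counts are all assumptions.

```python
import hashlib

M, K = 512, 3  # filter size and hash count (illustrative values)

def tag_bits(tag_id):
    """K bucket indices for one tag, derived from slices of SHA-256."""
    h = hashlib.sha256(tag_id.encode()).digest()
    return {int.from_bytes(h[4 * i:4 * i + 4], "big") % M for i in range(K)}

def build_filter(tags):
    bits = set()
    for t in tags:
        bits |= tag_bits(t)
    return bits

valid = [f"tag{i}" for i in range(50)]
present = valid[:40] + [f"rogue{i}" for i in range(10)]  # 10 invalid tags

observed = build_filter(present)   # what the tags report over the air
expected = build_filter(valid)     # the reader can precompute this offline
unexpected = observed - expected   # bits only invalid tags can set
# Correct for invalid-tag bits that collide with expected positions:
est_invalid = len(unexpected) / (K * (1 - len(expected) / M))
```

Removing the valid-tag contribution before estimating is what makes the count stable against the randomness of valid tags, which is the intuition the letter builds on.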
- Published
- 2022
11. Reconfigurable Intelligent Surface Assisted Secret Key Generation in Quasi-Static Environments
- Author
-
Kailin Cao, Liquan Chen, Tianyu Lu, Junqing Zhang, and Aiqun Hu
- Subjects
Key generation ,Correlation coefficient ,Computer science ,Modeling and Simulation ,Monte Carlo method ,Key (cryptography) ,Electrical and Electronic Engineering ,Algorithm ,Protocol (object-oriented programming) ,Expression (mathematics) ,Randomness ,Computer Science Applications ,Communication channel - Abstract
We propose a key generation protocol aided by a reconfigurable intelligent surface (RIS) to boost the secret key rate (SKR) in quasi-static environments. Considering a passive eavesdropper, we derive closed-form expressions for the lower and upper bounds of the SKR. Our findings indicate that the SKR is determined by the number of RIS elements, the correlation coefficient, the pilot length, and the quality of the reflecting channel. Our protocol fully exploits the randomness from both the direct and the reflecting channels. Monte Carlo simulations validate the analytical expressions for the SKR and demonstrate that our protocol outperforms existing work.
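The role of the correlation coefficient in key agreement can be illustrated with a small Monte Carlo sketch: Alice and Bob quantize the sign of correlated Gaussian channel estimates into key bits, and the disagreement rate falls as the correlation rises. This is a generic illustration, not the paper's RIS model or its SKR bounds.

```python
import numpy as np

def key_disagreement_rate(rho, n=20000, seed=2):
    """Fraction of key bits on which Alice and Bob disagree when each
    quantizes the sign of a noisy estimate of a common channel gain.
    `sigma` is chosen so that corr(h_a, h_b) = rho."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(1.0 / rho - 1.0)
    h = rng.normal(size=n)                  # common reciprocal channel
    h_a = h + sigma * rng.normal(size=n)    # Alice's noisy estimate
    h_b = h + sigma * rng.normal(size=n)    # Bob's noisy estimate
    return float(np.mean(np.sign(h_a) != np.sign(h_b)))

rate_good = key_disagreement_rate(0.99)  # strong correlation
rate_poor = key_disagreement_rate(0.5)   # weak correlation
```

For sign quantization of jointly Gaussian estimates the theoretical disagreement probability is arccos(rho)/pi, so the two rates should land near 0.045 and 0.333.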
- Published
- 2022
12. Stochastic Collocation Introduction Into Correlation Functions Method Applied for Underground Objects Detection
- Author
-
Motti Haridim and Reuven Zemach
- Subjects
Matching (graph theory) ,Computer science ,Computation ,Collocation (remote sensing) ,Simulation software ,Correlation function (statistical mechanics) ,Ground-penetrating radar ,General Earth and Planetary Sciences ,Sensitivity (control systems) ,Electrical and Electronic Engineering ,Algorithm ,Randomness - Abstract
A space-ensemble (SE) and time-ensemble (TE) correlation function technique applied to raw B-scan data from ground-penetrating radar (GPR) is shown to yield detailed information for tracking buried objects. These computations support site allocation via time-ensemble correlation functions and provide time stamps of object scattering via space-ensemble correlation functions, in precise agreement with simulation results. While these results apply to a given data stack, randomness in the physical parameters of the tested ground is always present. Introducing randomness into the physical parameters of the tested models by creating stochastic collocation (SC) ensembles, which incorporate randomness into the B-scan via successive A-scans (SC-SE and SC-TE ensembles), enables a sensitivity analysis (SA) of GPR raw data and thus provides a tool for varying the assumed physical properties of the ground in search of a better ground match. This method can be added to GPR machines or simulation software to enhance raw data analysis. A field experiment on locating small non-metallic objects was carried out as a platform for assessing the realistic performance of the method.
- Published
- 2022
13. A Novel Ship Detection Method via Generalized Polarization Relative Entropy for PolSAR Images
- Author
-
Junjun Yin, Jian Yang, Jing Wang, Huiping Lin, and Hongmiao Wang
- Subjects
Kullback–Leibler divergence ,Computer science ,Scattering ,Polarimetric synthetic aperture radar ,Kernel density estimation ,Clutter ,Electrical and Electronic Engineering ,Geotechnical Engineering and Engineering Geology ,Polarization (waves) ,Algorithm ,Randomness ,Constant false alarm rate - Abstract
In this letter, we present a novel ship detection method for polarimetric synthetic aperture radar (PolSAR) images. Generalized polarization relative entropy (GPRE) is proposed to measure the differences between target and clutter in scattering mechanism, randomness, and intensity. Since it is difficult to derive a theoretical closed form for the distribution of the GPRE, we employ kernel density estimation to model its distribution in ocean regions. A constant false alarm rate (CFAR) ship detection method is then proposed based on the estimated distribution. Experiments on both synthetic and real scene images demonstrate the effectiveness of the proposed method.
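The KDE-plus-CFAR step can be sketched as follows: fit a Gaussian kernel density to clutter samples of the detection statistic, then pick the threshold whose estimated tail probability equals the desired false alarm rate. The Gaussian kernel, Silverman bandwidth, and standard-normal stand-in clutter are illustrative assumptions, not the GPRE statistics from the letter.

```python
import numpy as np

def kde_cfar_threshold(clutter, pfa=0.01, grid=512):
    """Gaussian-KDE fit to clutter samples; returns the threshold T with
    estimated P(x > T | clutter) = pfa."""
    x = np.asarray(clutter, dtype=float)
    h = 1.06 * x.std() * len(x) ** -0.2          # Silverman's rule of thumb
    t = np.linspace(x.min() - 4 * h, x.max() + 4 * h, grid)
    pdf = np.exp(-0.5 * ((t[:, None] - x[None, :]) / h) ** 2).sum(axis=1)
    pdf /= pdf.sum()                              # normalize on the grid
    sf = 1.0 - np.cumsum(pdf)                     # survival function
    return t[np.searchsorted(-sf, -pfa)]          # first t with sf <= pfa

rng = np.random.default_rng(1)
clutter = rng.normal(0.0, 1.0, size=2000)        # stand-in clutter statistics
T = kde_cfar_threshold(clutter, pfa=0.01)
```

For standard-normal clutter the 1% CFAR threshold should land near the theoretical quantile of about 2.33, slightly inflated by the kernel smoothing.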
- Published
- 2022
14. High-Security Sequence Design for Differential Frequency Hopping Systems
- Author
-
Zan Li, Jia Shi, Rui Chen, Lei Guan, and Lie-Liang Yang
- Subjects
Sequence ,Computer Networks and Communications ,Computer science ,Hash function ,Encryption ,Computer Science Applications ,Public-key cryptography ,Control and Systems Engineering ,Frequency-hopping spread spectrum ,Standard algorithms ,Affine transformation ,Electrical and Electronic Engineering ,Algorithm ,Randomness ,Information Systems - Abstract
The differential frequency hopping (DFH) technique is widely used in wireless communications for its ability to mitigate tracking interference and provide confidentiality. However, electronic attacks on wireless systems are becoming ever more severe, which poses many challenges for DFH sequences designed on the basis of linear congruence theory, fuzzy and chaotic theory, and the like. In this article, we investigate sequence design in DFH systems by exploiting the equivalence between the G-function algorithm and encryption algorithms, in order to achieve high security. In more detail, a novel G-function is first proposed with the aid of the Government Standard algorithm and the Rivest–Shamir–Adleman algorithm. Two sequence design algorithms are then proposed: the G-function-assisted sequence generation algorithm, which takes full advantage of symmetric and asymmetric encryption algorithms, and the high-order G-function-aided sequence generation algorithm, which is capable of enhancing the correlation of the elements in a DFH sequence. Moreover, the security and ergodicity performance of the proposed algorithms is analyzed. Our studies and results show that the DFH sequences generated by the proposed algorithms significantly outperform sequences generated by the reversible hash algorithm and affine transformation in terms of uniformity, randomness, complexity, and security.
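A DFH G-function maps (key, previous frequency, data) to the next hop frequency. The sketch below uses a SHA-256 keyed hash purely as a stand-in for the paper's Government Standard/RSA constructions, to illustrate the interface: both ends holding the key can regenerate the hop sequence, while an observer without it sees quasi-random hops. Channel count and key are hypothetical.

```python
import hashlib

N_CHANNELS = 64  # illustrative hop-set size

def g_function(key: bytes, prev_freq: int, data_bits: int) -> int:
    """Illustrative G-function: next hop frequency from a keyed hash
    (a stand-in for the paper's encryption-based constructions)."""
    msg = key + prev_freq.to_bytes(2, "big") + data_bits.to_bytes(1, "big")
    return int.from_bytes(hashlib.sha256(msg).digest()[:4], "big") % N_CHANNELS

def hop_sequence(key, f0, data):
    """Generate the hop sequence carrying `data`, starting from frequency f0."""
    seq, f = [], f0
    for bits in data:
        f = g_function(key, f, bits)
        seq.append(f)
    return seq

seq = hop_sequence(b"shared-secret", 0, [1, 0, 1, 1, 0, 1, 0, 0])
```

Because each hop depends on the previous frequency and the data, the receiver demodulates by testing which data value explains the observed frequency transition.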
- Published
- 2021
15. Householder Dice: A Matrix-Free Algorithm for Simulating Dynamics on Gaussian and Random Orthogonal Ensembles
- Author
-
Yue Lu
- Subjects
Principle of deferred decision ,Gaussian ,Library and Information Sciences ,Computer Science Applications ,Matrix decomposition ,Matrix (mathematics) ,Rotational invariance ,Algorithm ,Random matrix ,Randomness ,Information Systems ,Sparse matrix ,Mathematics - Abstract
This paper proposes a new algorithm, named Householder Dice (HD), for simulating dynamics on dense random matrix ensembles with rotational invariance. Examples include the Gaussian ensemble, the Haar-distributed random orthogonal ensemble, and their complex-valued counterparts. A “direct” approach to the simulation, where one first generates a dense $n \times n$ matrix from the ensemble, requires at least $\mathcal{O}(n^{2})$ resources in space and time. The HD algorithm overcomes this $\mathcal{O}(n^{2})$ bottleneck by using the principle of deferred decisions: rather than fixing the entire random matrix in advance, it lets the randomness unfold with the dynamics. At the heart of this matrix-free algorithm is an adaptive and recursive construction of (random) Householder reflectors. These orthogonal transformations exploit the group symmetry of the matrix ensembles while maintaining the statistical correlations induced by the dynamics. The memory and computation costs of the HD algorithm are $\mathcal{O}(nT)$ and $\mathcal{O}(nT^{2})$, respectively, with $T$ being the number of iterations. When $T \ll n$, which is nearly always the case in practice, the new algorithm leads to significant reductions in runtime and memory footprint. Numerical results demonstrate the promise of the HD algorithm as a new computational tool in the study of high-dimensional random systems.
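HD itself cannot be reproduced in a few lines, but its building block, applying a Householder reflector to a vector in O(n) without ever materializing the n-by-n matrix, can be sketched. Here the classic reflector that sends a vector x onto the first coordinate axis is constructed and applied matrix-free:

```python
import numpy as np

def householder_apply(v, x):
    """Apply H = I - 2 v v^T / (v^T v) to x in O(n), never forming H."""
    return x - (2.0 * (v @ x) / (v @ v)) * v

rng = np.random.default_rng(0)
n = 6
x = rng.normal(size=n)

# Classic construction: a reflector sending x to ||x|| * e1.
e1 = np.zeros(n)
e1[0] = 1.0
v = x - np.linalg.norm(x) * e1
y = householder_apply(v, x)   # equals ||x|| * e1, up to rounding
```

HD chains such reflectors adaptively so that each new matrix-vector product remains consistent with the randomness already revealed, which is where the O(nT) memory cost comes from.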
- Published
- 2021
16. Improving the stability of label propagation algorithm by propagating from low-significance nodes for community detection in social networks
- Author
-
Saeid Taghavi Afshord, Saeid Aghaalizadeh, Babak Anari, and Asgarali Bouyer
- Subjects
Numerical Analysis ,Rank (linear algebra) ,Computer science ,Node (networking) ,Social network analysis (criminology) ,Process (computing) ,Stability (learning theory) ,Joins ,Computer Science Applications ,Theoretical Computer Science ,Computational Mathematics ,Computational Theory and Mathematics ,Ranking ,Algorithm ,Software ,Randomness - Abstract
Community detection is a significant research domain in social network analysis. Thanks to its merits, including linear-time complexity, performance, and simplicity, the label propagation algorithm (LPA) has attracted much research interest in recent years. However, the label propagation process suffers from drawbacks such as uncertainty and random behavior, which can negatively affect the stability and accuracy of community detection. In this paper, a simple and fast method is proposed to overcome these drawbacks: a novel algorithm, called CSLPR, that improves the certainty and stability of label propagation based on node ranking in a social network. After a proposed local node-ranking method is performed, label propagation starts from the nodes with the lowest rank. In the first and second iterations $(t \le 2)$, a new criterion, known as label strength, is applied to select the label with the highest strength for a node. In real-world social networks, communities form around important nodes; therefore, when a low-significance node joins the network, it receives the label of the community $C_i$ to which it has the most connections. The proposed method is evaluated on several real-world and artificial networks. The results reveal that it detects communities with higher accuracy, certainty, and stability than other methods.
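For orientation, here is a minimal label propagation sketch that updates low-degree nodes first and breaks ties deterministically. This is a generic LPA illustration of the ordering idea, not the CSLPR algorithm or its label-strength criterion:

```python
from collections import Counter

def label_propagation(adj, iters=20):
    """Label propagation that updates low-degree (low-significance) nodes
    first; ties are broken by the smallest label for determinism."""
    labels = {v: v for v in adj}
    order = sorted(adj, key=lambda v: len(adj[v]))   # low degree first
    for _ in range(iters):
        changed = False
        for v in order:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(sorted(counts), key=lambda lab: counts[lab])
            if labels[v] != best:
                labels[v], changed = best, True
        if not changed:
            break
    return labels

# Two triangles bridged by a single edge (2-3): two expected communities.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
labels = label_propagation(adj)
```

Replacing the random update order and random tie-breaking of classic LPA with a fixed rank-based order is exactly what makes repeated runs return the same partition.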
- Published
- 2021
17. Tb/s Fast Random Bit Generation Based on a Broadband Random Optoelectronic Oscillator
- Author
-
Ming Li, Ye Xiao, Tengfei Hao, Zengting Ge, and Wei Li
- Subjects
Physics ,Broadband ,Entropy (information theory) ,NIST ,Waveform ,Sample (statistics) ,Electrical and Electronic Engineering ,Wideband ,Algorithm ,Bitwise operation ,Atomic and Molecular Physics, and Optics ,Randomness ,Electronic, Optical and Magnetic Materials - Abstract
We experimentally demonstrate Tb/s fast random bit generation from random signals produced by a broadband random optoelectronic oscillator, which serves as the photoelectronic entropy source for generating a wideband random signal. We sample the random waveform with a resolution of 10 bits at a sampling rate of 128 GS/s. All 10 bits can be preserved, and a generation rate of 2.5 Tb/s (128 GS/s $\times$ 10 bits $\times$ 2 data) is achieved by using a time-shift, bit-order-reverse, bitwise exclusive-or operation as a more elaborate post-processing method. The randomness of the generated bit sequences is verified using the NIST Special Publication 800-22 statistical tests.
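The named post-processing chain (time shift, per-sample bit-order reversal, bitwise XOR with the original) can be sketched on stand-in ADC codes. The shift value and the simulated samples are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def postprocess(samples, shift=37, bits=10):
    """Time-shift + bit-order-reverse + bitwise-XOR post-processing (sketch)."""
    x = np.asarray(samples, dtype=np.uint16) & ((1 << bits) - 1)
    shifted = np.roll(x, shift)                 # time-shifted copy
    rev = np.zeros_like(shifted)
    for b in range(bits):                       # reverse each sample's bit order
        rev |= ((shifted >> b) & 1) << (bits - 1 - b)
    return x ^ rev                              # XOR original with the copy

rng = np.random.default_rng(0)
raw = rng.integers(0, 1 << 10, size=4096)       # stand-in 10-bit ADC codes
out = postprocess(raw)
```

XORing each sample with a bit-reversed, time-shifted copy whitens residual bias and correlation, which is why all 10 bits per sample can be kept.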
- Published
- 2021
18. Particle Swarm Optimization Algorithm With Self-Organizing Mapping for Nash Equilibrium Strategy in Application of Multiobjective Optimization
- Author
-
Chenhui Zhao and Donghui Guo
- Subjects
Artificial neural network ,Computer Networks and Communications ,Computer science ,Particle swarm optimization ,Multi-objective optimization ,Computer Science Applications ,Nonlinear system ,Artificial Intelligence ,Nash equilibrium ,Algorithm ,Software ,Randomness - Abstract
In this article, the Nash equilibrium strategy is used to solve multiobjective optimization problems (MOPs) with the aid of an integrated algorithm combining particle swarm optimization (PSO) and a self-organizing mapping (SOM) neural network. The Nash equilibrium strategy addresses MOPs by comparing decision variables one by one under different objectives. The randomness of the PSO algorithm exploits the advantages of parallel computing and improves the rate of the comparison calculation. To avoid falling into local optima and to increase particle diversity, a nonlinear recursive function is introduced to adjust the inertia weight; the resulting method is called adaptive particle swarm optimization (APSO). In addition, the neighborhood relations of the current particles are constructed by the SOM, and leading particles are selected from the neighborhood to guide the local and global search and achieve convergence. Compared with several advanced algorithms on eight standard multiobjective test functions with different Pareto solution sets and Pareto front characteristics, the proposed algorithm shows better performance.
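A minimal sketch of PSO with a nonlinearly decaying inertia weight, the core of the APSO idea, follows; it runs on a single-objective stand-in, and the quadratic weight schedule and coefficients are illustrative assumptions, not the article's recursive function or its SOM neighborhood step.

```python
import random

def apso(f, dim, bounds, n=20, iters=100, seed=3):
    """PSO with a nonlinear (quadratically decaying) inertia weight."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                    # personal bests
    Pf = [f(x) for x in X]
    g = min(range(n), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                   # global best
    for t in range(iters):
        w = 0.9 - 0.5 * (t / iters) ** 2     # nonlinear inertia schedule
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d] + 2.0 * r1 * (P[i][d] - X[i][d])
                           + 2.0 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:
                    G, Gf = X[i][:], fx
    return G, Gf

sphere = lambda x: sum(v * v for v in x)
best, val = apso(sphere, dim=3, bounds=(-5.0, 5.0))
```

The large early inertia favors global exploration and the small late inertia favors local refinement, which is the trade-off the nonlinear schedule tunes.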
- Published
- 2021
19. BPSL: a new rumor source location algorithm based on the time-stamp back propagation in social networks
- Author
-
Shiqi Sai, Moji Wei, and Liqing Qiu
- Subjects
Ranking ,Observer (quantum physics) ,Artificial Intelligence ,Computer science ,Node (networking) ,Maximization ,Timestamp ,Rumor ,Algorithm ,Backpropagation ,Randomness - Abstract
Finding a rumor source is a major issue in the analysis of social networks. The rumor source is usually estimated from a given diffusion snapshot, and doing so accurately is challenging. The rumor source location problem is usually treated as a node ranking problem; however, most existing algorithms ignore either the structure of the infected subgraph or the randomness of the rumor spread, and therefore have shortcomings in applicability and accuracy. To solve this problem, this paper takes both aspects into account simultaneously and proposes a new source location algorithm called Back Propagation Source Location (BPSL). The algorithm contains an estimation method based on time-stamp back propagation, which gives it higher accuracy than previous algorithms; the susceptible-infected model is used to simulate the information spread over the networks. The algorithm proceeds as follows. First, a new method based on influence maximization determines the observer set, which greatly reduces the number of observer nodes. Second, a new estimation method based on time-stamp back propagation locates the source, making the algorithm more accurate without changing the structure of the infected subgraph. Finally, experimental results on two artificial networks and four real-world networks show the superiority of the proposed algorithm.
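The susceptible-infected simulation used to generate diffusion snapshots and time stamps can be sketched as follows; the toy path graph and infection probability are illustrative assumptions:

```python
import random

def si_spread(adj, source, beta=0.4, steps=10, seed=7):
    """Susceptible-Infected spread from `source`; returns a dict mapping
    each infected node to the time step at which it was infected."""
    rng = random.Random(seed)
    t_inf = {source: 0}
    for t in range(1, steps + 1):
        newly = []
        for u in list(t_inf):
            for v in adj[u]:
                # Each infected node infects a susceptible neighbor w.p. beta.
                if v not in t_inf and rng.random() < beta:
                    newly.append(v)
        for v in newly:
            t_inf.setdefault(v, t)
    return t_inf

# Toy path graph 0-1-2-3-4; the rumor starts at node 0.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
stamps = si_spread(adj, source=0)
```

Observer nodes record exactly such time stamps, and BPSL's back propagation works backward from them toward candidate sources.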
- Published
- 2021
20. Research on Path Planning Algorithm of Autonomous Vehicles Based on Improved RRT Algorithm
- Author
-
Guanghao Huang and Qinglu Ma
- Subjects
Computer science ,General Neuroscience ,Applied Mathematics ,Aerospace Engineering ,Sampling (statistics) ,Function (mathematics) ,Computer Science Applications ,Path length ,Control and Systems Engineering ,Automotive Engineering ,Random tree ,Path (graph theory) ,Range (statistics) ,Motion planning ,Algorithm ,Software ,Randomness ,Information Systems - Abstract
Recently, path planning has become one of the key research issues in the field of autonomous vehicles and has attracted the attention of more and more researchers. When the RRT (Rapidly-exploring Random Tree) algorithm is used for path planning in a complex environment with a large number of random obstacles, the obtained path is tortuous and the algorithm converges slowly, which cannot meet the requirements of autonomous vehicles' path planning. This paper presents an improved path planning algorithm based on the RRT algorithm. Firstly, random points are generated using a circular sampling strategy, which preserves the randomness of the original RRT algorithm while improving the sampling efficiency. Secondly, an extended random-point rule based on a cost function is designed to filter the random points. Then, the vehicle's steering-angle range is taken into account when choosing adjacent points, so that appropriate adjacent points are selected. Finally, a B-spline curve is used to simplify and smooth the path. The experimental results show that the quality of the path planned by the improved RRT algorithm is significantly better than that of the RRT algorithm and the B-RRT (Bidirectional RRT) algorithm, as seen in four aspects: the time required to plan the path, the mean curvature, the mean square deviation of the curvature and the path length. Compared with the RRT algorithm, these are reduced by 55.3%, 68.78%, 55.41% and 19.5%; compared with the B-RRT algorithm, they are reduced by 29.5%, 64.02%, 39.51% and 11.25%. The algorithm thus makes the planned paths more suitable for autonomous vehicles to follow.
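One plausible reading of the circular sampling strategy is uniform sampling inside a disc around a centre point; the details below (centre choice, radius, the square-root radius trick for area uniformity) are assumptions for illustration, not the paper's exact rule:

```python
import math
import random

def circular_sample(center, r_max, rng):
    """Draw a point uniformly inside a disc of radius r_max around
    `center`. Taking sqrt of the uniform variate makes the density
    uniform over the disc's area rather than over the radius."""
    r = r_max * math.sqrt(rng.random())
    theta = 2 * math.pi * rng.random()
    return (center[0] + r * math.cos(theta),
            center[1] + r * math.sin(theta))

rng = random.Random(42)
pts = [circular_sample((0.0, 0.0), 5.0, rng) for _ in range(1000)]
```

In an RRT loop, such samples would replace the usual uniform draw over the whole workspace, concentrating exploration near a region of interest.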
- Published
- 2021
21. Cascade-sine chaotification model for producing chaos
- Author
-
Qiujie Wu
- Subjects
Pseudorandom number generator ,Computer science ,Applied Mathematics ,Mechanical Engineering ,Chaotic ,Aerospace Engineering ,Ocean Engineering ,Nonlinear Sciences::Chaotic Dynamics ,Nonlinear system ,Transformation (function) ,Control and Systems Engineering ,Cascade ,Sine ,Electrical and Electronic Engineering ,Algorithm ,Randomness ,Curse of dimensionality - Abstract
Motivated by the concepts of the cascade operator and the sine enhanced model, this paper proposes a universal framework termed the cascade-sine chaotification model (CSCM) for producing chaotic maps. The principle of the proposed method is to perform a sine transformation on the outputs of the seed maps, and then cascade the results to construct new systems. Compared with one-dimensional (1D) nonlinear models, the CSCM can produce diverse chaotic maps with arbitrary dimensionality by combining existing seed maps. Moreover, the chaotic maps generated by CSCM possess much higher complexity and larger chaotic ranges than the seed maps. To verify the effectiveness of CSCM, four chaotic maps generated by CSCM are presented as examples, and their dynamical properties are systematically analyzed. Simulation results indicate that the generated systems exhibit robust hyperchaotic properties over a large parameter range. Finally, a pseudo-random number generator (PRNG) is introduced to investigate the practicability of CSCM, and the test results demonstrate the randomness of the obtained chaotic sequences.
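The sine-transform-then-cascade construction can be sketched in one dimension as follows; the seed maps (logistic and tent) and the exact composition order are assumptions for illustration, and the paper's higher-dimensional constructions are not reproduced:

```python
import math

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def tent(x, mu=2.0):
    return mu * min(x, 1.0 - x)

def cscm(x):
    """Cascade-sine chaotification sketch: sine-transform the output of
    each seed map, feeding one stage into the next. abs(sin(pi*.)) keeps
    the state inside [0, 1]."""
    y = abs(math.sin(math.pi * logistic(x)))
    return abs(math.sin(math.pi * tent(y)))

# Iterate to obtain a chaotic orbit in [0, 1].
orbit = [0.3]
for _ in range(200):
    orbit.append(cscm(orbit[-1]))
```

The same pattern generalizes to any pair of bounded seed maps, which is the sense in which the framework is "universal".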
- Published
- 2021
22. Neutralizing the impact of atmospheric turbulence on complex scene imaging via deep learning
- Author
-
Zichao Liu, Yi Lu, Xiangzhi Bai, Ying Chen, Peng Wang, Sheng Guo, Junzhang Chen, and Darui Jin
- Subjects
Computer Networks and Communications ,business.industry ,Computer science ,Deep learning ,Process (computing) ,Iterative reconstruction ,Human-Computer Interaction ,Optical path ,Artificial Intelligence ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Representation (mathematics) ,business ,Spatial analysis ,Algorithm ,Software ,Randomness ,Free-space optical communication - Abstract
A turbulent medium with eddies of different scales gives rise to fluctuations in the index of refraction during the process of wave propagation, which interferes with the original spatial relationship, phase relationship and optical path. The outputs of two-dimensional imaging systems suffer from anamorphosis brought about by this effect. Randomness, along with multiple types of degradation, makes it a challenging task to analyse the reciprocal physical process. Here, we present a generative adversarial network (TSR-WGAN), which integrates temporal and spatial information embedded in the three-dimensional input to learn the representation of the residual between the observed and latent ideal data. Vision-friendly and credible sequences are produced without extra assumptions on the scale and strength of turbulence. The capability of TSR-WGAN is demonstrated through tests on our dataset, which contains 27,458 sequences with 411,870 frames of algorithm-simulated data, physically simulated data and real data. TSR-WGAN exhibits promising visual quality and a deep understanding of the disparity between random perturbations and object movements. These preliminary results also shed light on the potential of deep learning to parse stochastic physical processes from particular perspectives and to solve complicated image reconstruction problems given limited data. Turbulent optical distortions in the atmosphere limit the ability of optical technologies such as laser communication and long-distance environmental monitoring. A new method using adversarial networks learns to counter the physical processes underlying the turbulence so that complex optical scenes can be reconstructed.
- Published
- 2021
23. Affine invariance of meta-heuristic algorithms
- Author
-
GuangYu Zhu and ZhongQuan Jian
- Subjects
education.field_of_study ,Information Systems and Management ,Property (programming) ,Computer science ,Orientation (computer vision) ,Coordinate system ,Population ,Particle swarm optimization ,Affine invariance ,Computer Science Applications ,Theoretical Computer Science ,Artificial Intelligence ,Control and Systems Engineering ,Differential evolution ,education ,Algorithm ,Software ,Randomness - Abstract
An algorithm whose performance depends on the objective function being aligned with a privileged coordinate system is a poor choice in general, because it is unlikely that the optimal orientation will be known in advance. In this paper, a property of meta-heuristic algorithms, named affine invariance, is introduced to verify whether an algorithm depends on a privileged coordinate system. The concept of affine invariance is described in detail, and some classical algorithms that are efficient on most test and practical problems are proved to be affine invariant, while some recent algorithms in the literature are proved not to be. In conclusion, particle swarm optimization (PSO), differential evolution (DE) and the optimal foraging algorithm (OFA) are affine invariant, while the grey wolf optimizer (GWO), the sine cosine algorithm (SCA) and the butterfly optimization algorithm (BOA) are not. Furthermore, comparison tests are designed to support the theoretical analysis. In these tests, the same random numbers and initial populations are used to avoid the influence of randomness, so the conclusions are reliable.
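The affine-invariance property can be made concrete with Newton's method, a classical iteration known to be affine invariant; the paper studies meta-heuristics, so this is only a minimal illustration of the definition, not any of the algorithms it analyzes. Running the same step on f and on g(y) = f(Ay + b) produces trajectories that coincide under the affine map:

```python
import numpy as np

def newton_step(grad, hess, x):
    """One Newton step: x - H(x)^{-1} grad(x)."""
    return x - np.linalg.solve(hess(x), grad(x))

Q = np.array([[3.0, 1.0], [1.0, 2.0]])
f_grad = lambda x: Q @ x                  # f(x) = 0.5 x^T Q x
f_hess = lambda x: Q

A = np.array([[2.0, 1.0], [0.0, 1.0]])    # an invertible affine change
b = np.array([1.0, -1.0])
g_grad = lambda y: A.T @ Q @ (A @ y + b)  # g(y) = f(A y + b)
g_hess = lambda y: A.T @ Q @ A

x0 = np.array([4.0, -3.0])
y0 = np.linalg.solve(A, x0 - b)           # y0 = A^{-1}(x0 - b)

x1 = newton_step(f_grad, f_hess, x0)
y1 = newton_step(g_grad, g_hess, y0)
# Affine invariance: the transformed iterate matches, x1 == A @ y1 + b.
```

An algorithm lacking this property (e.g. one with coordinate-wise perturbation terms) would produce trajectories that diverge under the same change of coordinates.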
- Published
- 2021
24. Learning ability of iterative learning control system with a randomly varying trial length
- Author
-
Yamiao Zhang, Jian Liu, and Xiaoe Ruan
- Subjects
Control and Systems Engineering ,Computer science ,Iterative learning control ,Algorithm ,Trial Length ,Randomness ,Computer Science Applications ,Theoretical Computer Science - Abstract
This paper investigates the learning ability of an iterative learning control (ILC) system with a randomly varying trial length (RVTL). The randomness of the trial length is modelled as a discrete stochas...
- Published
- 2021
25. Attack-Resistant and Efficient Cancelable Codeword Generation Using Random walk-Based Methods
- Author
-
Divyanshi Sinha, Fagul Pandey, and Priyabrata Dash
- Subjects
Scheme (programming language) ,Multidisciplinary ,Template ,Similarity (geometry) ,Biometrics ,Computer science ,Fingerprint (computing) ,Code word ,Random walk ,Algorithm ,computer ,Randomness ,computer.programming_language - Abstract
Securely handling the biometric information of an individual is still a major challenge in many applications. For this reason, many cancelable techniques have been proposed with the aim of providing security, but adversarial attacks, such as similarity-based attacks on popular methods, have recently been reported in the literature. In this paper, we propose random walk-based methods for cancelable template generation (a 2-d random walk and a circular random walk). The novelty of the proposed method is to generate secure, distinct cancelable templates from fingerprint data. Further, the proposed scheme is immune to different security attacks and ensures the randomness of the generated cancelable templates. Our proposed scheme achieves comparable performance, with an average genuine acceptance rate of 97.02% and an average false acceptance rate of 0.13% on different fingerprint data. Moreover, our proposed scheme has a FAR (attack) of 11.11%, which is very low compared to the state-of-the-art method (such as BioHashing, at 62.5%).
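A key-driven 2-d random walk over a feature grid gives the flavor of cancelable template generation; this is an illustrative sketch under assumed details (grid layout, thresholding, walk length), not the paper's exact construction. Revoking a template only requires issuing a new user key:

```python
import random

def random_walk_codeword(features, user_key, length=32):
    """Produce a binary codeword by walking a key-seeded 2-D random walk
    over a square grid of fingerprint features and thresholding the
    visited cells. Different keys trace different walks, so the same
    biometric yields revocable, distinct templates."""
    n = int(len(features) ** 0.5)
    grid = [features[i * n:(i + 1) * n] for i in range(n)]
    rng = random.Random(user_key)
    r = c = 0
    code = []
    for _ in range(length):
        dr, dc = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        r, c = (r + dr) % n, (c + dc) % n   # wrap around the grid
        code.append(1 if grid[r][c] > 0.5 else 0)
    return code

feats = [((i * 37) % 101) / 101 for i in range(64)]   # toy feature vector
t1 = random_walk_codeword(feats, user_key=1234)
t2 = random_walk_codeword(feats, user_key=9999)       # re-issued template
```

Matching would then compare codewords (e.g. by Hamming distance) rather than raw biometric features.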
- Published
- 2021
26. Calibration of Agent-Based Models by Means of Meta-Modeling and Nonparametric Regression
- Author
-
Siyan Chen and Saul Desiderio
- Subjects
Polynomial regression ,Computer science ,Economics, Econometrics and Finance (miscellaneous) ,Monte Carlo method ,Feature (machine learning) ,Sampling (statistics) ,Indirect Inference ,Parameter space ,Algorithm ,Randomness ,Computer Science Applications ,Nonparametric regression - Abstract
Taking agent-based models to the data is still very challenging for researchers. In this paper we propose a new method to calibrate the model parameters based on indirect inference, which consists in minimizing the distance between real and artificial data. Basically, we first introduce a nonparametric regression meta-model to approximate the relationship between model parameters and distance. The meta-model is then estimated by local polynomial regression on a small sample of parameter vectors drawn from the parameter space of the ABM. Finally, once the distance has been estimated, we pick the parameter vector that minimizes it. One innovative feature of the method is the sampling scheme: both the parameter vectors and the seed of the random number generator are sampled at the same time in a random fashion, which permits averaging out the effect of randomness without resorting to Monte Carlo simulations. A battery of simple calibration exercises performed on an agent-based macro model shows that the method minimizes the distance with good precision using relatively few simulations of the model.
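The joint parameter-and-seed sampling scheme can be sketched as below; the "ABM" is a toy noisy quadratic, and the meta-model is a Nadaraya-Watson (local constant) smoother, the simplest case of the local polynomial regression the paper uses:

```python
import math
import random

def simulate_abm(theta, seed):
    """Stand-in for an expensive ABM run: returns the 'distance' between
    real and artificial data (toy quadratic plus simulation noise)."""
    rng = random.Random(seed)
    return (theta - 0.7) ** 2 + 0.05 * rng.gauss(0, 1)

# Sample the parameter AND the seed jointly, so randomness averages out
# in the regression without Monte Carlo replication per parameter.
rng = random.Random(0)
sample = [(rng.uniform(0, 1), rng.randrange(10**6)) for _ in range(400)]
data = [(th, simulate_abm(th, sd)) for th, sd in sample]

def meta_model(theta, h=0.1):
    """Nadaraya-Watson regression of distance on theta with a Gaussian
    kernel of bandwidth h."""
    num = den = 0.0
    for th, d in data:
        w = math.exp(-0.5 * ((theta - th) / h) ** 2)
        num += w * d
        den += w
    return num / den

# Calibrated parameter: minimizer of the estimated distance on a grid.
best = min((meta_model(t / 100), t / 100) for t in range(101))[1]
```

With the true minimum at 0.7, the smoothed estimate lands close to it despite the per-run noise.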
- Published
- 2021
27. Improve concentration of frequency and time (ConceFT) by novel complex spherical designs
- Author
-
Robert S. Womersley, Hau-Tieng Wu, Matt Sourisseau, Yu Guang Wang, and Wei-Hsuan Yu
- Subjects
Series (mathematics) ,Computer science ,Applied Mathematics ,010102 general mathematics ,010103 numerical & computational mathematics ,01 natural sciences ,Signal ,Time–frequency analysis ,Multitaper ,Sleep (system call) ,Stage (hydrology) ,0101 mathematics ,Spherical design ,Algorithm ,Randomness ,Mathematics - Abstract
Concentration of frequency and time (ConceFT) is a generalized multitaper algorithm introduced to analyze complicated non-stationary time series. To avoid the randomness in the original ConceFT algorithm, we apply the novel complex spherical design technique to standardize ConceFT, which we coin CQU-ConceFT. The proposed CQU-ConceFT is applied to visualize the spindle structure in the electroencephalogram signal during the N2 sleep stage and other physiological time series.
- Published
- 2021
28. A Novel Approach Based on Average Swarm Intelligence to Improve the Whale Optimization Algorithm
- Author
-
Serkan Dereli
- Subjects
Euclidean distance ,Multidisciplinary ,Local optimum ,Fitness function ,Computer science ,Position (vector) ,Convergence (routing) ,Swarm behaviour ,Algorithm ,Swarm intelligence ,Randomness - Abstract
In this study, a new technique is introduced by changing the convergence behavior of the whale optimization algorithm, whose principle is to approach the prey by strictly following the pack leader. For this, the average position of the swarm is first obtained in each iteration. Then, when the "p" parameter, which adds randomness to the progress of the swarm members, is below a certain value, the swarm average is used to move each individual to its new position. Thus, slow convergence and frequent falls into local optima, considered the biggest shortcomings of the algorithm, are eliminated. The distance of the whales from each other and from the prey is modeled as a fitness function using the Euclidean distance formula. A complex engineering problem was chosen to reveal the power of both the classical whale optimization algorithm and the algorithm that includes the proposed new technique. As a result, the new technique provided a 10-million-fold improvement in solving this complex engineering problem used in the control of serial robot manipulators.
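The average-based update can be sketched as a single simplified step; the step coefficient, the threshold on p and the omission of the WOA spiral term are all assumptions made to keep the illustration short:

```python
import random

def woa_average_step(positions, best, p_threshold=0.5, a=0.5, rng=None):
    """One simplified update of the modified whale optimization: when
    the random parameter p falls below p_threshold, a whale moves
    toward the swarm's average position instead of strictly following
    the leader (best). The full WOA encircling/spiral terms are omitted."""
    rng = rng or random.Random(0)
    dim = len(positions[0])
    mean = [sum(x[d] for x in positions) / len(positions)
            for d in range(dim)]
    new_positions = []
    for x in positions:
        p = rng.random()
        target = mean if p < p_threshold else best
        # Move a fraction `a` of the way toward the chosen target.
        new_positions.append([x[d] + a * (target[d] - x[d])
                              for d in range(dim)])
    return new_positions

swarm = [[5.0, -3.0], [1.0, 2.0], [-2.0, 4.0], [0.0, 0.0]]
swarm = woa_average_step(swarm, best=[0.0, 0.0])
```

Mixing the swarm mean into the update spreads the attraction away from a single leader, which is the mechanism claimed to reduce premature convergence.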
- Published
- 2021
29. An Approach to 1/f Noise Detection Based on Adaptive T-ATFPF Algorithm
- Author
-
Xiaojuan Chen, Jie Wu, and Zhaohua Zhang
- Subjects
Quality (physics) ,Computer science ,General Chemical Engineering ,Shot noise ,Process (computing) ,General Materials Science ,White noise ,Signal ,Noise (electronics) ,Chebyshev filter ,Algorithm ,Industrial and Manufacturing Engineering ,Randomness - Abstract
The generation of 1/f noise is closely related to quality defects in IGBT devices. When detecting IGBT single-tube noise, thermal noise and shot noise show obvious white-noise characteristics in the low-frequency band, so the 1/f noise that characterizes the performance of IGBT devices must be detected against a background of strong white noise. Therefore, on the basis of the Time-Frequency Peak Filtering (TFPF) algorithm, a two-dimensional time-domain adaptive T-ATFPF algorithm is proposed, in which adaptive segmentation is realized by a confidence-interval crossing criterion based on Chebyshev's inequality. The window length is variable: a small window length is used to process the signal section, retaining more of the detailed information of the effective signal; a larger window length is used to process the buffer section, ensuring a smooth transition; and a large window length is used to process the noise section, which more effectively suppresses the randomness of the noise. The T-ATFPF is applied to both a synthetic model and an actual model. Experimental results indicate that, compared with the conventional algorithm, the improved method recovers 1/f noise better, and the signal-to-noise ratio is improved by about 1.3 dB.
- Published
- 2021
30. Analyze the Effectiveness of the Algorithm for Agricultural Product Delivery Vehicle Routing Problem Based on Mathematical Model
- Author
-
Kairong Yu, Yang Liu, and Ashutosh Sharma
- Subjects
Computer science ,Heuristic (computer science) ,media_common.quotation_subject ,Quality (business) ,Motion planning ,Routing (electronic design automation) ,Service provider ,Algorithm ,Time complexity ,Hybrid algorithm ,Randomness ,Information Systems ,media_common - Abstract
With the recent development of the economic system, the requirement for logistics services has also increased gradually, raising the demand for efficient and cost-effective delivery services that do not compromise quality and timeliness. It has become a challenge for logistics service providers to maintain high quality standards along with reliable delivery services. A mathematical model is proposed in this work to solve the collection/distribution path planning problem for vehicles handling random quantities of agricultural products. This article proposes a hybrid algorithm that combines taboo search with a taboo hybrid algorithm to solve the problem. In the proposed algorithm, a large-scale problem is decomposed into several small-scale problems to reduce the time complexity. Since random problems are much more complicated than deterministic ones, exact algorithms can only be applied to a small range of problem types; the heuristic calculations involved in the development of the algorithm make it a convenient, simplified tool for the collection and distribution of random quantities of agricultural products. An average validation accuracy of 94% was obtained for the proposed algorithm after 200 iterations, with precision, recall and F-score values of 94.37%, 94.57% and 94.56%, respectively.
- Published
- 2021
31. Comparison of Pseudorandom Number Generators and Their Application for Uncertainty Estimation Using Monte Carlo Simulation
- Author
-
Karan Malik, Jiji Pulikkotil, and Anjali Sharma
- Subjects
Pseudorandom number generator ,Physics and Astronomy (miscellaneous) ,Random number generation ,Computer science ,Monte Carlo method ,Degrees of freedom (statistics) ,Estimator ,Probability distribution ,Measurement uncertainty ,Algorithm ,Randomness - Abstract
Generating random numbers is a prerequisite to any Monte Carlo method implemented in a computer program. Therefore, identifying a good random number generator is important to guarantee the quality of the output of the Monte Carlo method. However, sequences of numbers generated by algorithms are not truly random; rather, having a certain control over their randomness makes them pseudo-random. What matters most is that the simulation of a physical variable with a given probability distribution actually follows the distribution generated by the algorithm. In this perspective, considering the example of gauge block calibration given in "Evaluation of measurement data—Supplement 1 to the 'Guide to the expression of uncertainty in measurement'—Propagation of distributions using a Monte Carlo method", we explore the properties and output of three commonly used random number generators, namely the linear congruential (LC) generator, the Wichmann-Hill (WH) generator and the Mersenne Twister (MT) generator. Extensive testing shows that the performance of the MT algorithm transcends that of the LC and WH generators, particularly in its execution time. Further, these generators were used to estimate the uncertainty in the measurement of length, with input variables having different probability distributions. While in the conventional GUM approach the output distribution appears to be Gaussian-like, our Monte Carlo calculations find it to be a Student's t-distribution. Applying the Welch-Satterthwaite equation to the result of the Monte Carlo simulation, we find the effective degrees of freedom to be 16. On the other hand, using a trial-and-error fitting method to determine the nature of the output PDF, we find that the resulting distribution is a t-distribution with 46 degrees of freedom.
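The simplest of the three generators compared, the linear congruential generator, can be written in a few lines; the multiplier and increment below are the well-known Numerical Recipes constants, chosen here only as a representative parameterization:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Minimal linear congruential generator: x_{n+1} = (a x_n + c) mod m.
    Yields uniform variates in [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(seed=1)
u = [next(gen) for _ in range(10000)]
mean = sum(u) / len(u)   # should be close to 0.5 for a uniform stream
```

The Wichmann-Hill generator combines three such recurrences with coprime moduli, and the Mersenne Twister replaces the recurrence with a twisted linear feedback over a 19937-bit state, which is why it dominates in both period and quality.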
Extending these results to calculate the expanded uncertainty, we find that the Monte-Carlo results are consistent with the recently proposed mean/median-based unbiased estimators which takes into account the artifact of transformation distortion.
- Published
- 2021
32. A new pseudorandom bits generator based on a 2D-chaotic system and diffusion property
- Author
-
Omar Akif, Rasha Ali, Rasha S. Ali, Rasha Subhi, and Prof.Dr. Alaa Farhan
- Subjects
Pseudorandom number generator ,Sequence ,Control and Optimization ,Computer Networks and Communications ,Computer science ,business.industry ,Autocorrelation ,Chaotic ,Cryptography ,Permutation ,Hardware and Architecture ,Control and Systems Engineering ,Computer Science (miscellaneous) ,Electrical and Electronic Engineering ,business ,Instrumentation ,Algorithm ,Stream cipher ,Randomness ,Information Systems - Abstract
A remarkable correlation between chaotic systems and cryptography has been established, based on sensitivity to initial states, unpredictability and complex behavior. In one line of development, the stages of a chaotic stream cipher are applied to a discrete chaotic dynamic system for the generation of pseudorandom bits. Some of these generators are based on 1D chaotic maps and others on 2D ones. In the current study, a pseudorandom bit generator (PRBG) based on a new 2D chaotic logistic map is proposed that runs side-by-side and commences from random independent initial states. The proposed model consists of three components: a mouse input device, the proposed 2D chaotic system and an initial permutation (IP) table. The statistical quality of the generated bit sequence is investigated by applying five standard evaluations as well as the ACF and NIST tests. The sequence passes all five standard randomness tests: the frequency test yields a value of 0.160, the run test yields passing values t0 = 4.769 and t1 = 2.929, the poker test yields 3.520 and the serial test yields 4.720. Finally, the autocorrelation test is passed for all shifts from 1 to 10.
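A much-simplified stand-in for the generator runs two logistic maps side-by-side from independent seeds and emits a bit by comparing their states; the paper's actual 2D coupled map, mouse-based seeding and IP-table stage are not reproduced here:

```python
def prbg_bits(x, y, n):
    """Generate n pseudorandom bits from two logistic maps (r = 4)
    iterated side-by-side: emit 1 when the first state exceeds the
    second. Both states stay in [0, 1] for all iterations."""
    bits = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        y = 4.0 * y * (1.0 - y)
        bits.append(1 if x > y else 0)
    return bits

bits = prbg_bits(0.31, 0.47, 5000)
# A crude monobit-style balance check: fraction of ones vs. 0.5.
balance = abs(sum(bits) / len(bits) - 0.5)
```

A real evaluation would run the full battery cited in the abstract (frequency, run, poker, serial, autocorrelation, NIST) rather than this single balance statistic.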
- Published
- 2021
33. Novel Deterministic Angular Sampling Methods for 3D Channel Models
- Author
-
Weimin Wang, Heng Wang, Yuanan Liu, and Yongle Wu
- Subjects
Emulation ,Spatial correlation ,Property (programming) ,Computer science ,Computation ,020206 networking & telecommunications ,02 engineering and technology ,Computer Science Applications ,Power iteration ,Modeling and Simulation ,0202 electrical engineering, electronic engineering, information engineering ,Ergodic theory ,Electrical and Electronic Engineering ,Algorithm ,Randomness ,Communication channel - Abstract
For the purpose of emulating realistic ray-based channels more exactly, two novel sum-of-cisoids (SOC) channel simulator parameter computation strategies, i.e., the bidirectional allocation (BA) and the simplified forward allocation (FA) methods, are proposed in this letter. Compared with the classical equal power method, the asymmetric computing property of these two methods further improves the spatial correlation emulation accuracy and significantly mitigates the intra-cluster correlation of sub-paths. As shown in the simulation results, both the BA and FA methods enable the statistical spatial-temporal characteristics of the emulated channel to match extremely well with those of the target channel. More importantly, the proposed processes are deterministic and do not introduce randomness across different realizations; hence the proposed methods are always ergodic with arbitrary angle spreads.
- Published
- 2021
34. Learning Tracking Control Over Unknown Fading Channels Without System Information
- Author
-
Xinghuo Yu and Dong Shen
- Subjects
Computer Networks and Communications ,Computer science ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,Kalman filter ,Computer Science Applications ,Transmission (telecommunications) ,Artificial Intelligence ,Control system ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Fading ,Random variable ,Algorithm ,Software ,Randomness ,Computer Science::Information Theory ,Communication channel - Abstract
A novel data-driven learning control scheme is proposed for unknown systems with unknown fading sensor channels. The fading randomness is modeled by multiplicative and additive random variables subject to certain unknown distributions. In this scheme, we propose an error transmission mode and an iterative gradient estimation method. Unlike the conventional transmission mode where the output is directly transmitted back to the controller, in the error transmission mode, we send the desired reference to the plant such that tracking errors can be calculated locally and then transmitted back through the fading channel. Using the faded tracking error data only, the gradient for updating input is iteratively estimated by a random difference technique along the iteration axis. This gradient acts as the updating term of the control signal; therefore, information on the system and the fading channel is no longer required. The proposed scheme is proved effective in tracking the desired reference under random fading communication environments. Theoretical results are verified by simulations.
- Published
- 2021
35. Real-time implementation of a chaos based cryptosystem on low-cost hardware
- Author
-
Asma Adnane, Lahcene Merah, Adda Ali-Pacha, Saadi Ramdani, and Naima Hadj-Said
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Chaotic ,Energy Engineering and Power Technology ,Plaintext ,Cryptography ,Encryption ,Set (abstract data type) ,Hénon map ,Signal Processing ,Cryptosystem ,Computer Vision and Pattern Recognition ,Electrical and Electronic Engineering ,business ,Algorithm ,Randomness ,Computer Science::Cryptography and Security - Abstract
To overcome the poor statistical properties of digital chaotic maps, this paper proposes a modified version of the Henon chaotic map using a cosine function. What is new in this proposal is that the cosine function replaces only the nonlinear part of the chaotic map, which increases the chaotic properties compared to other proposals. On the other hand, adding a constant with a high value (F > 10^4) to the new control parameter eliminates the small zones of stability that cannot be observed easily. The improved Henon map has been evaluated in terms of randomness quality using a set of mathematical tools, with good results. The improved map is used to design a cryptosystem; we propose an efficient method for ensuring the diffusion property by feeding the encrypted message back into the modified map, so that a very small change in the plaintext leads to a different chaotic state. The proposed encryption scheme has been evaluated in terms of security by performing some known cryptographic attacks on it; the results showed that it meets present-day security requirements. The scheme is evaluated in real time on low-cost ARM hardware (Cortex-M3 32-bit RISC core) under two different scenarios, emitter and receiver, connected wirelessly. A comparison with some existing solutions showed that our proposal provides a good trade-off between randomness, implementation cost and performance.
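A Henon-type map with its quadratic nonlinearity swapped for a cosine can be sketched as below; the exact placement of the cosine, the folding of the large constant F into the argument and the modulo used to keep the state bounded are all guesses, since the abstract does not give the equations:

```python
import math

def henon_cos(x, y, a=1.4, b=0.3, F=10**4 + 1):
    """Henon-style iteration where cos(F*x) replaces the x^2 term
    (hypothetical form). The large F scrambles the phase strongly,
    and the modulo keeps both state variables in [0, 1)."""
    x_new = (1.0 - a * math.cos(F * x) + y) % 1.0
    y_new = (b * x) % 1.0
    return x_new, y_new

x, y = 0.1, 0.2
orbit = []
for _ in range(1000):
    x, y = henon_cos(x, y)
    orbit.append(x)
```

A keystream for the cryptosystem would then be derived from such orbits, e.g. by quantizing the state values.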
- Published
- 2021
36. A novel image encryption algorithm based on least squares generative adversarial network random number generator
- Author
-
Jinqing Li, Zhenlong Man, Xiaoqiang Di, Xingxu Zhang, Xu Liu, Jia Wang, and Jian Zhou
- Subjects
Keyspace ,Computer Networks and Communications ,Random number generation ,Computer science ,business.industry ,Chaotic ,020206 networking & telecommunications ,02 engineering and technology ,Encryption ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Key (cryptography) ,Cryptosystem ,020201 artificial intelligence & image processing ,Randomness tests ,business ,Algorithm ,Software ,Randomness - Abstract
In cryptosystems, the generation of random keys is crucial. The random number generator must be fast enough to ensure the size of the keyspace, and at the same time the randomness of the key is an important indicator of the security of the encryption system. Chaotic random number generators have been widely used in cryptosystems due to the uncertainty, non-repeatability and unpredictability of chaotic systems. However, chaotic systems, especially high-dimensional ones, have slow calculation speeds and long iteration times, which causes a conflict between the number of random keys and the speed of generation. In this paper, we introduce Least Squares Generative Adversarial Networks (LSGAN) into random number generation. Using LSGAN's powerful learning ability, a novel learning random number generator is constructed. Six chaotic systems with different structures and dimensions are used as training sets to realize the rapid and efficient generation of random numbers. Experimental results prove that the encryption keys generated by this scheme pass all randomness tests of the National Institute of Standards and Technology (NIST). Hence, our results show that LSGAN has the potential to improve the quality of random number generators. Finally, the results are successfully applied to an image encryption scheme based on selective scrambling and overlay diffusion, with good results.
- Published
- 2021
37. A novel color image encryption method based on an evolved dynamic parameter-control chaotic system
- Author
-
Jie Zhang, Xiangyu Deng, and Baoquan Yin
- Subjects
Sequence ,Computer Networks and Communications ,Computer science ,business.industry ,Chaotic ,Process (computing) ,020207 software engineering ,02 engineering and technology ,Encryption ,Scrambling ,Image (mathematics) ,Nonlinear Sciences::Chaotic Dynamics ,Hardware and Architecture ,ComputerSystemsOrganization_MISCELLANEOUS ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,Logistic map ,business ,Algorithm ,Software ,Randomness - Abstract
The majority of existing image encryption algorithms use a single chaotic system, such as the Logistic map or the Lorenz system, as the pseudo-random sequence generator. In fact, neither low- nor high-dimensional chaotic systems can avoid the deterioration of the pseudo-randomness of chaotic sequences computed on limited-precision computers. Additionally, most of them use only one chaotic pseudo-random sequence throughout the encryption process. These are the most obvious deficiencies. In the current paper, a novel compound chaotic system is applied to the color image domain to solve these problems. A dynamic parameter-control chaotic system can enhance the randomness of the sequence after digitalization. Assigning the different sequences generated by the novel chaotic system to each color channel of the image is a helpful way to reduce the image's statistical characteristics in the scrambling process. Finally, the effectiveness and security of the proposed encryption are illustrated experimentally, and excellent performance is also demonstrated.
- Published
- 2021
38. Using known nonself samples to improve negative selection algorithm
- Author
-
Zhiyong Li and Tao Li
- Subjects
Negative selection algorithm ,Blindness ,Physics::Instrumentation and Detectors ,Artificial immune system ,Computer science ,Feature vector ,Detector ,02 engineering and technology ,medicine.disease ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,High Energy Physics::Experiment ,020201 artificial intelligence & image processing ,Detection rate ,Algorithm ,Randomness - Abstract
The negative selection algorithm is the core algorithm of artificial immune systems. It uses only self samples for training and generates detectors to detect abnormalities. Holes are feature-space areas that the detectors fail to cover; they are the root cause of the performance degradation of the negative selection algorithm. The conventional method generates a large number of detectors randomly to repair the holes, which is time-consuming and not effective. To alleviate this problem, we propose a V-Detector-KN algorithm in this paper. V-Detector is the abbreviation of the real-valued negative selection algorithm with Variable-sized Detectors; KN stands for Known Nonself. The V-Detector-KN algorithm uses the known nonself samples as candidate detectors, in addition to the detectors randomly generated by V-Detector, so as to repair the holes. Compared with the conventional method of randomly generating detectors to repair holes, our proposed V-Detector-KN method uses known nonself samples, reducing the randomness and blindness of hole repair. Theoretical analysis shows that the detection rate of our algorithm is not lower than that of the conventional V-Detector algorithm. Experimental comparisons with six other algorithms on seven UCI data sets show the superiority of the proposed algorithm.
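The idea of seeding detector generation with known nonself samples can be sketched in 2-D as follows; the self radius, candidate counts and acceptance rule are simplified assumptions in the spirit of V-Detector, not the paper's exact procedure:

```python
import math
import random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def generate_detectors(self_samples, known_nonself, r_self=0.1,
                       n_random=50, seed=0):
    """V-Detector-style generation seeded first with known nonself
    samples as candidate detector centres (the V-Detector-KN idea),
    then topped up with random candidates. A candidate is kept if it
    lies outside the self radius of every self sample; its detection
    radius is the distance to the nearest self sample (variable-sized)."""
    rng = random.Random(seed)
    candidates = list(known_nonself) + \
        [(rng.random(), rng.random()) for _ in range(n_random)]
    detectors = []
    for c in candidates:
        d = min(dist(c, s) for s in self_samples)
        if d > r_self:                  # candidate not inside self region
            detectors.append((c, d))    # (centre, variable radius)
    return detectors

self_samples = [(0.5, 0.5), (0.55, 0.45)]
known_nonself = [(0.1, 0.9), (0.9, 0.1)]
dets = generate_detectors(self_samples, known_nonself)
```

Because the known nonself points are guaranteed to sit in uncovered anomalous regions, they repair holes directly instead of relying on random candidates happening to land there.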
- Published
- 2021
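The hole-repair step in this entry can be sketched as follows; the geometry (Euclidean balls, a fixed self radius) follows the usual real-valued V-Detector setting, but the function below is an illustrative reading of the scheme, not the authors' exact algorithm:

```python
import math

def covered(x, detectors):
    # a point is covered if it falls inside any detector hypersphere
    return any(math.dist(x, c) <= r for c, r in detectors)

def add_known_nonself(detectors, self_set, known_nonself, self_radius=0.1):
    # Each known nonself sample becomes a candidate detector whose radius
    # extends to the nearest self region, patching holes that the randomly
    # generated detectors failed to cover.
    for x in known_nonself:
        if covered(x, detectors):
            continue                      # hole already repaired
        r = min(math.dist(x, s) for s in self_set) - self_radius
        if r > 0:
            detectors.append((x, r))      # variable-sized detector
    return detectors

dets = add_known_nonself([], [(0.0, 0.0)], [(1.0, 0.0), (0.9, 0.0)])
```

In the toy call, the first nonself sample spawns a detector large enough that the second sample needs no detector of its own, which is exactly the redundancy the coverage check avoids.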
39. Simulation of a Random Variable and its Application to Game Theory
- Author
-
Mehrdad Valizadeh and Amin Gohari
- Subjects
General Mathematics, Management Science and Operations Research, Measure (mathematics), Computer Science Applications, Total variation, Rate of convergence, Repeated game, Side information, Algorithm, Random variable, Game theory, Randomness, Mathematics
We provide a new tool for simulation of a random variable (target source) from a randomness source with side information. Taking the total variation distance as the measure of precision, this tool offers an upper bound on the precision of simulation that vanishes exponentially in the difference of the Rényi entropies of the randomness and target sources. This tool finds application in games in which the players wish to generate their actions (target source) as a function of a randomness source such that they are almost independent of the observations of the opponent (side information). In particular, we study zero-sum repeated games in which the players are restricted to strategies that require only a limited amount of randomness. Let v_n be the max-min value of the n-stage game. Previous works have characterized the limit of v_n, that is, the long-run max-min value, but they have not provided any result on the value of v_n for a given finite n-stage game. Here, we utilize our new tool to study how v_n converges to the long-run max-min value.
- Published
- 2021
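The precision measure and entropy quantities named in this entry have standard definitions; as a minimal illustration (this is not the paper's simulation construction, just the textbook formulas for finite distributions):

```python
import math

def total_variation(p, q):
    # TV distance between two finite distributions given as {outcome: prob} dicts
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in set(p) | set(q))

def renyi_entropy(p, alpha):
    # Rényi entropy of order alpha (alpha != 1), in bits
    return math.log2(sum(v ** alpha for v in p.values())) / (1 - alpha)

uniform = {i: 0.25 for i in range(4)}
biased = {0: 0.5, 1: 0.5}
gap = total_variation(uniform, biased)   # 0.5
```

The paper's bound relates a TV-distance gap like `gap` to the difference of Rényi entropies of source and target, e.g. 2 bits for `uniform` versus 1 bit for `biased` at order 2.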
40. Blue Noise Plots
- Author
-
Tobias Ritschel, Christian van Onzenoodt, Timo Ropinski, and Gurprit Singh
- Subjects
Univariate, Computer Graphics and Computer-Aided Design, Plot (graphics), Data point, Dimension (vector space), Colors of noise, Point (geometry), Algorithm, Randomness, Mathematics, Jitter
We propose Blue Noise Plots, two-dimensional dot plots that depict data points of univariate data sets. While one-dimensional strip plots are often used to depict such data, one of their main problems is visual clutter resulting from overlap. To reduce this overlap, jitter plots were introduced, whereby an additional, non-encoding plot dimension is added, along which the dots representing data points are randomly perturbed. Unfortunately, this randomness can suggest non-existent clusters and often leads to visually unappealing plots in which overlap may still occur. To overcome these shortcomings, we introduce Blue Noise Plots, in which random jitter along the non-encoding plot dimension is replaced by optimizing all dots to keep a minimum distance in 2D, i.e., blue noise. We evaluate the effectiveness as well as the aesthetics of Blue Noise Plots through both a quantitative and a qualitative user study. A Python implementation of Blue Noise Plots is publicly available.
- Published
- 2021
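The core idea in this entry, replacing random jitter by a minimum 2D distance between dots, can be sketched with a naive pairwise-repulsion pass; the authors optimize dot positions properly, so the relaxation loop, distance and iteration count below are illustrative assumptions:

```python
import random

def blue_noise_plot(values, d_min=0.05, iters=200, seed=1):
    # Start from a jitter plot (random y), then repeatedly push dots apart
    # along the non-encoding y axis whenever a pair is closer than d_min
    # in 2D; the data-encoding x coordinates are never changed.
    rng = random.Random(seed)
    pts = [(x, rng.uniform(0.0, 1.0)) for x in values]
    for _ in range(iters):
        for i in range(len(pts)):
            xi, yi = pts[i]
            for j in range(len(pts)):
                if i == j:
                    continue
                xj, yj = pts[j]
                if (xi - xj) ** 2 + (yi - yj) ** 2 < d_min ** 2:
                    step = 0.5 * d_min if yi >= yj else -0.5 * d_min
                    yi = min(1.0, max(0.0, yi + step))
            pts[i] = (xi, yi)
    return pts

pts = blue_noise_plot([0.2, 0.2, 0.8])
```

Because only y is adjusted, the plot keeps its data encoding exact while the dot spacing approaches blue-noise quality.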
41. Neural network aided approximation and parameter inference of non-Markovian models of gene expression
- Author
-
Ramon Grima, Xiaoming Fu, Runlai Li, Feng Qian, Zhixing Cao, Shifu Yan, Wenli Du, and Qingchao Jiang
- Subjects
Transcription, Genetic, Computer science, Science, General Physics and Astronomy, Markov process, Inference, Parameter space, General Biochemistry, Genetics and Molecular Biology, Computer Simulation, Randomness, Feedback, Physiological, Stochastic Processes, Multidisciplinary, Mathematical model, Artificial neural network, Models, Genetic, Stochastic process, General Chemistry, Kinetics, Gene Expression Regulation, Neural Networks, Computer, Algorithm, Curse of dimensionality
Non-Markovian models of stochastic biochemical kinetics often incorporate explicit time delays to effectively model large numbers of intermediate biochemical processes. Analysis and simulation of these models, as well as the inference of their parameters from data, are fraught with difficulties because the dynamics depends on the system’s history. Here we use an artificial neural network to approximate the time-dependent distributions of non-Markovian models by the solutions of much simpler time-inhomogeneous Markovian models; the approximation does not increase the dimensionality of the model and simultaneously leads to inference of the kinetic parameters. The training of the neural network uses a relatively small set of noisy measurements generated by experimental data or stochastic simulations of the non-Markovian model. We show using a variety of models, where the delays stem from transcriptional processes and feedback control, that the Markovian models learnt by the neural network accurately reflect the stochastic dynamics across parameter space. Cells are complex systems that make decisions biologists struggle to understand. Here, the authors use neural networks to approximate the solution of mathematical models that capture the history and randomness of biochemical processes in order to understand the principles of transcription control.
- Published
- 2021
42. A systematic identification approach for biaxial piezoelectric stage with coupled Duhem-type hysteresis
- Author
-
Zhumu Fu, Qun Chen, and Zong Xiao Yang
- Subjects
Coupling, Correctness, Computer science, Applied Mathematics, Piezoelectricity, Computer Science Applications, Identification (information), Range (mathematics), Hysteresis, Computational Theory and Mathematics, Differential evolution, Electrical and Electronic Engineering, Algorithm, Randomness
Purpose The problem of parameter identification for biaxial piezoelectric stages remains a challenging task because of the existing hysteresis, dynamics and cross-axis coupling. This study aims to find an accurate and systematic approach to tackle this problem. Design/methodology/approach First, a dual-input and dual-output (DIDO) model with Duhem-type hysteresis is proposed to depict the dynamic behavior of the biaxial piezoelectric stage. Then, a systematic identification approach based on a modified differential evolution (DE) algorithm is proposed to identify the unknown parameters of the Duhem-type DIDO model for a biaxial piezostage. The randomness and parallelism of the modified DE algorithm guarantee its high efficiency. Findings The experimental results show that the characteristics of the biaxial piezoelectric stage can be identified with adequate accuracy from the input–output data; the peak-valley errors account for 2.8% of the full range in the X direction and 1.5% in the Y direction. The attained results validate the correctness and effectiveness of the presented identification method. Originality/value The classical DE algorithm has many adjustment parameters, which makes it inconvenient and difficult to use in practice. The parameter identification of the Duhem-type DIDO piezoelectric model is rarely studied in detail, and its DE-based application to a biaxial piezostage is hitherto unexplored. To close this gap, this work proposes a modified DE-based systematic identification approach. It can not only identify this complicated model with more parameters, but also has few tuning parameters and is thus easy to use.
- Published
- 2021
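The identification step in this entry can be illustrated with a textbook DE/rand/1/bin loop fitting model parameters to data; the paper's modifications to DE and its Duhem-type DIDO model are not reproduced here, and the toy linear model in the usage lines is purely an assumption:

```python
import random

def differential_evolution(f, bounds, pop=20, gens=120, F=0.6, CR=0.9, seed=3):
    # Minimise f over a box: rand/1 mutation, binomial crossover, greedy
    # selection; the population-wide search supplies the randomness and
    # parallelism the abstract credits for DE's efficiency.
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([p for k, p in enumerate(P) if k != i], 3)
            mutant = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
            trial = [mutant[d] if rng.random() < CR else P[i][d] for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(P[i]):
                P[i] = trial
    return min(P, key=f)

# hypothetical identification task: recover k, c of y = k*x + c from samples
data = [(x / 10, 2.0 * (x / 10) + 1.0) for x in range(20)]
sse = lambda p: sum((p[0] * x + p[1] - y) ** 2 for x, y in data)
best = differential_evolution(sse, [(-5.0, 5.0), (-5.0, 5.0)])
```

Replacing `sse` with the simulation error of a Duhem-type hysteresis model against measured stage trajectories gives the shape of the paper's identification problem.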
43. Skewed normal cloud modified whale optimization algorithm for degree reduction of S-λ curves
- Author
-
Peng Xu, Fengqun Zhao, Fang Dai, Wenyan Guo, and Ting Liu
- Subjects
Skew normal distribution, Computer science, Skew, Reduction (complexity), Artificial Intelligence, Position (vector), Skewness, Test set, Algorithm, Randomness, Membership function
The whale optimization algorithm (WOA) is a recent meta-heuristic algorithm that mathematically describes the foraging behavior of whales. The cloud model (CM) is an effective model for describing cognitive uncertainty; at the same time, it can comprehensively describe the randomness and fuzziness in uncertain phenomena. The skew normal distribution (SND) can better describe biological behavior when the environment changes. In this paper, a new skew normal cloud model (SNCM) is first established by combining the CM with the SND and the skew normal membership function to describe the fuzziness and randomness of environmental change. Secondly, because of the randomness and fuzziness of the whales’ foraging behavior, and in order to improve the exploration and exploitation ability of WOA, a modified WOA based on the skew normal cloud (SNC) is proposed: the SNCM is used to modify the shrinking encircling and spiral update strategies of WOA, and the adaptive position and skewness parameters in the SNC are designed to increase the exploration capability in the early phase and the exploitation capability in the late phase. Lastly, experimental results on the complex CEC2017 test set verify the effectiveness of the modified WOA under different strategies and dimensions, and against other improved WOA variants and representative heuristic algorithms. Degree reduction of three S − λ curves verifies the practicability of the modified WOA in the field of curve design.
- Published
- 2021
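For reference, the standard WOA position updates that this entry modifies (shrinking encircling and the logarithmic spiral) look roughly like this; the skew normal cloud draws are replaced here by plain uniform random numbers, so this is only a baseline sketch, not the paper's method:

```python
import math
import random

def woa(f, bounds, pop=15, iters=60, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    best = min(X, key=f)
    for t in range(iters):
        a = 2.0 * (1 - t / iters)                 # shrinking factor: 2 -> 0
        for i in range(pop):
            if rng.random() < 0.5:                # shrinking encircling / search
                A = 2 * a * rng.random() - a
                C = 2 * rng.random()
                ref = best if abs(A) < 1 else X[rng.randrange(pop)]
                X[i] = [ref[d] - A * abs(C * ref[d] - X[i][d]) for d in range(dim)]
            else:                                 # logarithmic spiral update
                l = rng.uniform(-1, 1)
                X[i] = [abs(best[d] - X[i][d]) * math.exp(l) * math.cos(2 * math.pi * l)
                        + best[d] for d in range(dim)]
            X[i] = [min(max(v, lo), hi) for v, (lo, hi) in zip(X[i], bounds)]
        best = min([best] + X, key=f)
    return best

best = woa(lambda p: sum(v * v for v in p), [(-5.0, 5.0)] * 2)
```

The paper's SNCM-based draws would replace the `rng.random()` and `rng.uniform(-1, 1)` calls above to reshape the exploration/exploitation balance over time.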
44. EXPLORATORY SPECTRAL ANALYSIS IN THREE-DIMENSIONAL SPATIAL POINT PATTERNS
- Author
-
Edmary Silveira Barreto Araújo, Lurimar Smera Batista, and João Domingos Scalon
- Subjects
Statistics and Probability, Epidemiology, Computer science, Applied Mathematics, Public Health, Environmental and Occupational Health, Point pattern analysis, Point process, Autocovariance, Frequency domain, Point (geometry), Spatial dependence, General Agricultural and Biological Sciences, Cluster analysis, Algorithm, Randomness
A spatial point pattern is a collection of points irregularly located within a bounded area (2D) or space (3D) that has been generated by some form of stochastic mechanism. Examples of point patterns include the locations of trees in a forest, of cases of a disease in a region, or of particles in a microscopic section of a composite material. Spatial point pattern analysis is used mostly to determine the absence (complete spatial randomness) or presence (regularity and clustering) of a spatial dependence structure in the locations. Methods based on the space domain are widely used for this purpose, while methods conducted in the frequency domain (spectral analysis) are still unknown to most researchers. Spectral analysis is a powerful tool for investigating spatial point patterns, since it does not assume any structural characteristics of the data (e.g., isotropy) and uses only the autocovariance function and its Fourier transform. Some methods based on the spectral framework exist for analyzing 2D spatial point patterns, but no such methods are available for the 3D situation; therefore, the aim of this work is to develop new spectral-based methods for the analysis of three-dimensional point patterns. The emphasis is on relating periodogram structure to the type of stochastic process that could have generated an observed 3D pattern. The results show that the spectral-analysis methods developed in this work are able to identify the patterns of three typical three-dimensional point processes and can be used concurrently with analyses in the space domain for a better characterization of spatial point patterns.
- Published
- 2021
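A Bartlett-style periodogram extends directly to 3D: take the squared modulus of the discrete Fourier transform of the point locations at chosen frequencies. The sketch below is that standard construction; the estimator in this entry may differ in normalisation and edge correction:

```python
import cmath
import math

def periodogram_3d(points, freqs):
    # |F(w)|^2 / n with F(w) = sum over points x of exp(-2*pi*i * <w, x>)
    n = len(points)
    spec = {}
    for w in freqs:
        F = sum(cmath.exp(-2j * math.pi * sum(wd * xd for wd, xd in zip(w, x)))
                for x in points)
        spec[w] = abs(F) ** 2 / n
    return spec

# two points half a period apart cancel at w=(1,0,0) and reinforce at w=(2,0,0)
spec = periodogram_3d([(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)], [(1, 0, 0), (2, 0, 0)])
```

Peaks and troughs of `spec` over a grid of 3D frequencies are what get related to the generating process (clustered, regular, or completely random).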
45. Simultaneous‐source deblending using adaptive coherence‐constrained dictionary learning and sparse approximation
- Author
-
Weijian Mao and E. Isaac Evinemi
- Subjects
Computer science, Noise reduction, Coherence (statistics), Sparse approximation, Data recovery, Constraint (information theory), Noise, Geophysics, Geochemistry and Petrology, Dither, Algorithm, Randomness
The dictionary learning and sparse approximation method using the K‐singular value decomposition algorithm relies on knowledge of the sparsity or noise variance as a constraint when used for data denoising. However, determining the sparsity or noise variance of seismic data can be tricky, and they are sometimes unknown, especially for seismic field data. Thus, where the cardinality or the noise variance is not known, the intrinsic relative coherence between the noise removed from the noisy data and its learned dictionary is instead used as a constraint for the sparse approximation of simultaneous‐source seismic data. The dictionary is learned using a modified orthogonal matching pursuit algorithm that uses coherence as a constraint, referred to as coherence dictionary learning. The coherence dictionary learning is then adapted to handle simultaneous‐source seismic data deblending. A blending structure with random time dithering of sequential source shooting is used to guarantee adequate randomness of the noise. Two‐dimensional overlapping patches of the noisy data were extracted from the common receiver gather domain to train the dictionary and to determine the sparse representation of the signal. The method is tested on both synthetic and field data and shows adequate data recovery. Comparing the result of this method with the matching pursuit algorithm constrained by the signal sparsity and the noise variance reveals that our approach performs better at noise attenuation and yields reasonable data recovery, especially for strong seismic signals.
- Published
- 2021
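The coherence constraint itself can be shown on a stripped-down matching pursuit: iterate while some atom is still strongly correlated with the residual, with no sparsity or noise-variance input. This is only an illustration of the stopping rule, not the modified OMP inside K-SVD that the entry describes:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def mp_coherence(D, y, mu_stop=0.2, max_iter=50):
    # Greedy matching pursuit that stops once the residual's maximum
    # normalised coherence with the dictionary falls below mu_stop,
    # i.e. once the residual looks incoherent (noise-like).
    r, coef = list(y), [0.0] * len(D)
    for _ in range(max_iter):
        corrs = [dot(d, r) for d in D]
        k = max(range(len(D)), key=lambda i: abs(corrs[i]))
        if norm(r) == 0 or abs(corrs[k]) / (norm(D[k]) * norm(r)) < mu_stop:
            break
        coef[k] += corrs[k] / dot(D[k], D[k])
        r = [ri - (corrs[k] / dot(D[k], D[k])) * di for ri, di in zip(r, D[k])]
    return coef, r

coef, resid = mp_coherence([[1.0, 0.0], [0.0, 1.0]], [3.0, 4.0])
```

With an orthonormal toy dictionary the pursuit recovers the signal exactly and then halts because the zero residual is incoherent with every atom.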
46. Prediction of thermal error for feed system of machine tools based on random radial basis function neural network
- Author
-
Tie-jun Li, Ting-ying Sun, Yi-min Zhang, and Chun-yu Zhao
- Subjects
Computer science, Mechanical Engineering, Experimental data, Inverse, Ball screw, Industrial and Manufacturing Engineering, Computer Science Applications, Machine tool, Control and Systems Engineering, Moving average, Genetic algorithm, Numerical control, Algorithm, Software, Randomness
Thermal errors affect the accuracy of computer numerical control machine tools and are produced by the thermal deformation of machine components due to the temperature difference between heat sources and the ambient temperature of the machine tools. At present, most of the literature does not consider the randomness of the factors influencing thermal error, leading to inaccurate predictions of machine tool thermal error. In this paper, a new inverse random model is proposed, combining stochastic theory, a genetic algorithm, and a radial basis function neural network (RBFNN), to predict thermal error while considering the randomness of influencing factors. The randomness index of the influencing factors can be identified using the inverse random RBFNN (IR-RBFNN). Furthermore, by combining stochastic theory, the RBFNN, and an improved exponential moving average method with abnormal data elimination, a new forward random radial basis function neural network (FR-RBFNN) is established according to the identified random index of the influencing factors. The models are verified through experimental results on a ball screw system. Compared with traditional methods, the experimental data show that the proposed method provides a more accurate description of thermal errors while incorporating the randomness of the factors affecting them.
- Published
- 2021
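A plain Gaussian RBF network, the deterministic core of the models in this entry without the stochastic indices or genetic-algorithm training, can be sketched as exact interpolation with one centre per training sample; the data points are made up for illustration:

```python
import math

def rbf_train(xs, ys, sigma=1.0):
    # Build the Gaussian kernel matrix, solve phi @ w = ys by Gauss-Jordan
    # elimination, and return the fitted network as a closure.
    n = len(xs)
    phi = [[math.exp(-((xs[i] - xs[j]) ** 2) / (2 * sigma ** 2)) for j in range(n)]
           for i in range(n)]
    A = [row[:] + [y] for row, y in zip(phi, ys)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))   # partial pivoting
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c:
                factor = A[r][c] / A[c][c]
                A[r] = [u - factor * v for u, v in zip(A[r], A[c])]
    w = [A[i][n] / A[i][i] for i in range(n)]
    return lambda x: sum(wi * math.exp(-((x - ci) ** 2) / (2 * sigma ** 2))
                         for wi, ci in zip(w, xs))

model = rbf_train([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])   # toy "thermal error" samples
```

In the paper's setting, `xs` would be temperature measurements and the weights (and randomness indices) would come from the genetic-algorithm-driven training rather than direct interpolation.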
47. Design of pseudo-random number generator from turbulence padded chaotic map
- Author
-
Balamurugan Balusamy, Vani Rajasekar, Rajesh Kumar Dhanaraj, Sathya Krishnamoorthi, SK Hafizul Islam, and Premalatha Jayapaul
- Subjects
Pseudorandom number generator, Computer science, Applied Mathematics, Mechanical Engineering, Key space, Chaotic, Aerospace Engineering, Ocean Engineering, Cryptography, Cryptographic protocol, Control and Systems Engineering, NIST, Electrical and Electronic Engineering, Algorithm, Randomness, Statistical hypothesis testing
Transmission of information in any form requires security. Security protocols used for communication rely on the use of random numbers, and pseudo-random numbers with good statistical properties must be generated efficiently. A single chaotic map may not produce enough randomness, so turbulence is padded into the existing map to improve its chaotic behaviour and increase its periodicity. A pseudo-random number generator (PRNG) with this architecture is devised to generate random bit sequences from secret keys. The statistical properties of the newly constructed PRNG are tested with the NIST SP 800–22 statistical test suite and shown to exhibit good randomness. To ensure its usability in cryptographic applications, we analysed the size of its key space, its key sensitivity, and its performance speed. The test results show that the newly designed PRNG has a 3.6% increase in key space and a 5% increase in performance speed compared to existing chaotic PRNGs. The novel PRNG with faster performance is found suitable for lightweight cryptographic applications.
- Published
- 2021
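One of the NIST SP 800-22 tests mentioned in this entry, the frequency (monobit) test, is simple enough to state in full; the sequences in the usage lines are toy inputs, not output of the paper's generator:

```python
import math

def monobit_pvalue(bits):
    # NIST SP 800-22 frequency (monobit) test: p-value of the normalised
    # +/-1 bit sum; a sequence passes at the 1% level when p >= 0.01.
    n = len(bits)
    s_obs = abs(sum(1 if b else -1 for b in bits)) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

balanced = monobit_pvalue([0, 1] * 512)   # perfectly balanced sequence
skewed = monobit_pvalue([1] * 1024)       # constant sequence fails badly
```

A chaotic-map keystream would be fed through this and the remaining SP 800-22 tests to certify the "good randomness" the abstract reports.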
48. Robustly reusable fuzzy extractor with imperfect randomness
- Author
-
Nan Cui, Shengli Liu, Jian Weng, and Dawu Gu
- Subjects
Pseudorandom number generator, Ideal (set theory), Applied Mathematics, String (computer science), Cryptography, Fuzzy logic, Computer Science Applications, Robustness (computer science), Algorithm, Randomness, Reusability, Mathematics
A fuzzy extractor (FE) extracts and reproduces a uniform string from a fuzzy source. A robustly reusable fuzzy extractor (rrFE) considers reusability and robustness simultaneously. Reusability of an rrFE allows multiple extractions of pseudorandom strings from the same source, and robustness detects active attacks. To achieve reusability and robustness, existing constructions of rrFE make heavy use of perfect random coins (uniformly distributed and independent of each other) in addition to the fuzzy source. However, efficient sampling of unbiased random bits exists only in an ideal world. In this paper, we show how to construct an rrFE from imperfect randomness (non-uniform but of high entropy), which is easy to sample in practice. We propose two generic constructions of rrFE in the CRS model, one dealing with perfect randomness and the other with imperfect randomness. We also present two instantiations of rrFE from the DDH and LPN assumptions working with perfect randomness, and another two instantiations from DDH and LPN working with imperfect randomness. All instantiations support a linear fraction of errors between samples of the fuzzy source.
- Published
- 2021
49. Multi-step wind speed forecast based on sample clustering and an optimized hybrid system
- Author
-
Zhong-Long Li, Xue-Jun Chen, Xiao-Zhong Jia, and Jing Zhao
- Subjects
Renewable Energy, Sustainability and the Environment, Computer science, Feed forward, Hilbert–Huang transform, Wind speed, Hybrid system, Cuckoo search, Cluster analysis, Algorithm, Randomness, Extreme learning machine
At present, accurate forecasting of very-short-term wind speed is still a critical issue, mainly due to the complex characteristics of wind variation such as intermittency, fluctuation and randomness. On this topic, our paper contributes an effective multi-step forecasting method termed ECKIE, which provides multi-step forecasts of very-short-term wind speed at specific stations. The method consists of three stages: a data filtering process driven by the ensemble empirical mode decomposition (EEMD), an improved K-harmonic means (KHM) clustering optimized by the Cuckoo search (CS) algorithm, and a single-hidden-layer feedforward network (SLFN) trained by the incremental extreme learning machine (IELM) algorithm. The developed method is capable of clustering the model inputs into groups according to their characteristics, of constructing a model for each group, and of reducing forecasting errors by choosing a suitable model. It is a purely data-driven process and an effective method for very-short-term wind speed forecasting. The simulations demonstrate that the developed method drastically improves upon the original models' performance and performs best among comparable models.
- Published
- 2021
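The third stage of the pipeline in this entry, the incremental extreme learning machine, is the most self-contained piece and can be sketched on its own (Huang et al.'s I-ELM; the EEMD filtering and CS-optimised KHM clustering are omitted, and the sine training data is an illustrative stand-in for wind-speed samples):

```python
import math
import random

def ielm_train(xs, ys, hidden=50, seed=0):
    # Hidden sigmoid nodes with random input weights are added one at a
    # time; each new node's output weight beta is the least-squares fit to
    # the current residual, so the residual error never increases.
    rng = random.Random(seed)
    e = list(ys)
    nodes = []
    for _ in range(hidden):
        a, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
        h = [1.0 / (1.0 + math.exp(-(a * x + b))) for x in xs]
        beta = sum(hi * ei for hi, ei in zip(h, e)) / sum(hi * hi for hi in h)
        nodes.append((a, b, beta))
        e = [ei - beta * hi for ei, hi in zip(e, h)]
    predict = lambda x: sum(beta / (1.0 + math.exp(-(a * x + b)))
                            for a, b, beta in nodes)
    return predict, e

xs = [i / 10 for i in range(20)]
predict, residual = ielm_train(xs, [math.sin(x) for x in xs])
```

In the full ECKIE pipeline one such SLFN would be trained per KHM cluster, and forecasts would come from the model whose cluster matches the current input.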
50. Generalized Polarimetric Entropy: Polarimetric Information Quantitative Analyses of Model-Based Incoherent Polarimetric Decomposition
- Author
-
Mingsen Lin and Wentao An
- Subjects
Synthetic aperture radar, Polarimetry, Information theory, Residual, Matrix decomposition, General Earth and Planetary Sciences, Entropy (information theory), Electrical and Electronic Engineering, Algorithm, Residual entropy, Randomness, Mathematics
Model-based incoherent polarimetric decomposition is a frequently used technique for analyzing multilook data from polarimetric synthetic aperture radars (POLSARs). The purpose of this study is to analyze and compare different model-based incoherent polarimetric decomposition algorithms from the perspective of polarimetric information change. For the input of a model-based incoherent polarimetric decomposition algorithm, polarimetric entropy is used to represent the polarimetric information of a coherency matrix. The output of such an algorithm usually consists of several decomposed components. To quantitatively represent their total polarimetric information, a new concept, generalized polarimetric entropy, is proposed, which generalizes polarimetric entropy based on the additivity of information entropy in information theory. Generalized polarimetric entropy consists of two parts, named polarimetric power entropy and polarimetric residual entropy. Polarimetric power entropy describes the distribution of the Span values of all decomposed components; polarimetric residual entropy represents the residual randomness of all decomposed components. With these three new concepts, eight model-based incoherent polarimetric decomposition algorithms were compared and analyzed. Two real POLSAR images, derived respectively from the E-SAR airborne system of Germany and the GF-3 satellite of China, were used for the experiments. The experimental results illustrate several useful conclusions.
- Published
- 2021
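The classical quantity being generalized in this entry, Cloude–Pottier polarimetric entropy from the eigenvalues of the coherency matrix, is easy to state; the generalized power/residual split itself is the paper's contribution and is not reproduced here:

```python
import math

def polarimetric_entropy(eigvals):
    # H = -sum p_i * log3(p_i), with p_i the coherency-matrix eigenvalues
    # normalised by total power; H = 0 for a single pure scattering
    # mechanism and H = 1 for fully random scattering.
    total = sum(eigvals)
    ps = [lam / total for lam in eigvals]
    return -sum(p * math.log(p, 3) for p in ps if p > 0)

H_noise = polarimetric_entropy([1.0, 1.0, 1.0])   # depolarised: H near 1
H_point = polarimetric_entropy([1.0, 0.0, 0.0])   # deterministic: H = 0
```

The generalized entropy of the paper applies entropy additivity on top of this quantity to account for both the power split and the residual randomness of the decomposed components.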