5,496 results for "Randomness"
Search Results
2. An Efficient E2E Crowd Verifiable E-Voting System
- Author
-
Xinyu Zhang, Bingsheng Zhang, Thomas Zacharias, Aggelos Kiayias, and Kui Ren
- Subjects
Theoretical computer science, Electronic voting, Computer science, Hash function, Cryptography, Random oracle, Ballot, Voting, Verifiable secret sharing, Electrical and Electronic Engineering, Randomness - Abstract
Electronic voting (e-voting) has advantages over paper voting in several respects. Among these benefits, the ability to audit the electoral process at every stage is one of the most desired features. At Eurocrypt 2015, Kiayias et al. proposed the first E2E verifiable e-voting system that provides E2E verifiability without relying on external sources of randomness or the random oracle model. The main advantage of the system is that election auditors need only the election transcript and the feedback from the voters to pronounce the election process unequivocally valid. Unfortunately, their system comes with a huge performance and storage penalty for the election authority (EA) compared to other e-voting systems such as Helios, because the EA, which forms the proof of the tally result, is required to precompute several ciphertexts. The performance penalty on the EA appears to be intrinsic: voters cannot compute an enciphered ballot themselves because its correctness seems unprovable. In this work, we construct a new e-voting system that retains strong E2E characteristics while eliminating the performance and storage penalty of the EA. Our construction is practical and has performance similar to Helios. The privacy of our construction relies on the SXDH assumption over bilinear groups via complexity leveraging.
- Published
- 2022
3. A Capacity-Price Game for Uncertain Renewables Resources
- Author
-
Shreyas Sekar, Baosen Zhang, and Pan Li
- Subjects
Control and Optimization, Clean energy, Scheduling (computing), Microeconomics, Electric power system, Computer Science and Game Theory, Economics, Optimization and Control, Randomness, Renewable Energy, Sustainability and the Environment, Bidding, Investment (macroeconomics), Social planner, Renewable energy, Computational Theory and Mathematics, Nash equilibrium, Hardware and Architecture, Software, Renewable resource - Abstract
Renewable resources are starting to constitute a growing portion of the total generation mix of the power system. A key difference between renewables and traditional generators is that many renewable resources are managed by individuals, especially in the distribution system. In this paper, we study the capacity investment and pricing problem, where multiple renewable producers compete in a decentralized market. It is known that most deterministic capacity games tend to result in very inefficient equilibria, even when there is a large number of similar players. In contrast, we show that, due to the inherent randomness of renewable resources, the equilibria in our capacity game become efficient as the number of players grows and coincide with the centralized decision from the social planner's problem. This result provides a new perspective on the positive influence of randomness in a game framework, as well as its contribution to resource planning, scheduling, and bidding. We validate our results through simulation studies using real-world data. (Appears in IEEE Transactions on Sustainable Computing.)
- Published
- 2022
4. Two-Dimensional Parametric Polynomial Chaotic System
- Author
-
Zhongyun Hua, Han Bao, Yicong Zhou, and Yongyong Chen
- Subjects
Pseudorandom number generator, Polynomial, Computer science, Chaotic, Lyapunov exponent, Modular design, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Applied mathematics, Electrical and Electronic Engineering, Software, Randomness, Parametric statistics - Abstract
When used in engineering applications, most existing chaotic systems have many disadvantages, including discontinuous chaotic parameter ranges, lack of robust chaos, and susceptibility to chaos degradation. In this article, we propose a two-dimensional (2-D) parametric polynomial chaotic system (2D-PPCS) as a general system that can yield many 2-D chaotic maps under different exponent coefficient settings. The 2D-PPCS initializes two parametric polynomials and then applies modular chaotification to the polynomials. Setting different control parameters allows the 2D-PPCS to customize its Lyapunov exponents in order to obtain robust chaos and behaviors with the desired complexity. Our theoretical analysis demonstrates the robust chaotic behavior of the 2D-PPCS. Two illustrative examples are provided and tested in numerical experiments to verify the effectiveness of the 2D-PPCS. A chaos-based pseudorandom number generator is also developed to illustrate the applications of the 2D-PPCS. The experimental results demonstrate that these examples of the 2D-PPCS achieve robust and desired chaos, have better performance, and generate pseudorandom numbers with higher randomness than some representative 2-D chaotic maps.
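The construction pattern described here (parametric polynomials followed by a modular operation) can be sketched as follows. The polynomials and coefficients below are hypothetical stand-ins, not the paper's actual 2D-PPCS definition; the sketch only illustrates how modular chaotification keeps a polynomial map bounded in [0, 1)^2 while nearby trajectories diverge.

```python
def ppcs_step(x, y, a=7.0, b=11.0):
    # Hypothetical 2-D polynomial map followed by modular chaotification;
    # the real 2D-PPCS uses parametric polynomials with tunable exponents.
    return (a * (x * x + y)) % 1.0, (b * (y * y + x)) % 1.0

def trajectory(x0, y0, n):
    # Iterate the map n times from the initial state (x0, y0).
    pts = []
    x, y = x0, y0
    for _ in range(n):
        x, y = ppcs_step(x, y)
        pts.append((x, y))
    return pts
```

Iterating two trajectories whose initial states differ by 1e-9 quickly decorrelates them, the sensitivity to initial conditions expected of a chaotic map.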
- Published
- 2022
5. Fault detection and diagnosis with a novel source-aware autoencoder and deep residual neural network
- Author
-
Qinqin Zhu and Nima Amini
- Subjects
Flexibility (engineering), Computer science, Cognitive Neuroscience, Deep learning, Process (computing), Pattern recognition, Fault (power engineering), Autoencoder, Fault detection and isolation, Computer Science Applications, Artificial Intelligence, Precision and recall, Randomness - Abstract
The capability of deep learning (DL) techniques to deal with nonlinear, dynamic, and correlated data has paved the way for DL-based fault detection and diagnosis (FDD). Among DL models, autoencoders (AEs) have shown their potential to serve as the fault detection network. However, misclassifying faulty samples that share patterns similar to normal samples is a common drawback of AEs. In this work, a source-aware autoencoder (SAAE) is proposed as an extension of AEs that incorporates faulty samples in the training stage. The SAAE offers flexibility in tuning the recall-precision trade-off, the ability to detect unseen faults, and applicability to imbalanced data sets. A bidirectional long short-term memory (BiLSTM) network with skip connections is designed as the structure of the SAAE fault detection network. Further, a deep network combining BiLSTM and a residual neural network (ResNet) is proposed for the subsequent fault diagnosis step to avoid the randomness imposed by the order of the input features. A framework for combining the fault detection and fault diagnosis networks is also presented, without assuming a perfect fault detection network. A comprehensive comparison between relevant existing techniques in the literature and SAAE-ResNet on the Tennessee Eastman process shows the superiority of the proposed FDD method.
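The detection side of any autoencoder-style detector reduces to thresholding a reconstruction error. The sketch below shows only that decision rule, with the reconstruction function left abstract (a trained SAAE/BiLSTM model in the paper; any callable here):

```python
def detect_faults(samples, reconstruct, threshold):
    # Flag a sample as faulty when its squared reconstruction error
    # exceeds the threshold; `reconstruct` stands in for the trained
    # autoencoder's encode-decode pass.
    flags = []
    for s in samples:
        err = sum((a - b) ** 2 for a, b in zip(s, reconstruct(s)))
        flags.append(err > threshold)
    return flags
```

With a well-trained model, normal samples reconstruct accurately (small error) while faulty ones do not, which is exactly the property the SAAE sharpens by also seeing faulty samples during training.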
- Published
- 2022
6. Joint Activity Detection and Channel Estimation in Massive MIMO Systems With Angular Domain Enhancement
- Author
-
Lei Sun, Wei Chen, Bo Ai, and Han Xiao
- Subjects
Computer science, Applied Mathematics, MIMO, Bayesian inference, Computer Science Applications, Compressed sensing, Code (cryptography), Overhead (computing), Wireless, Electrical and Electronic Engineering, Algorithm, Randomness, Communication channel - Abstract
Supporting massive connectivity for sporadically active devices is a challenging task, as the randomness of the channel and the large number of users lead to an enormous increase in communication overhead. Unlike existing methods that differentiate users by resources such as time, frequency, and code, we propose a new joint activity detection and channel estimation framework for massive multiple-input multiple-output (MIMO) systems, in which angular domain information of active users is exploited to enhance activity detection and channel estimation. By exploiting the sporadic activity of users and the angular spread of the wireless signals, activity detection and channel estimation are formulated as a compressive sensing problem with multiple measurement vectors, which has a simultaneously row-sparse and clustered sparse structure. The sizes and positions of the nonzero clusters are arbitrary, which brings new challenges for algorithm derivation. To this end, we develop new algorithms based on sparse Bayesian learning, in which novel hyperpriors are proposed to capture the structural signal characteristics and appropriate approximations are employed to facilitate the derivations. Numerical experiments demonstrate the improved activity detection and channel estimation performance of the proposed approach in comparison to existing methods.
- Published
- 2022
7. Design for Test With Unreliable Memories by Restoring the Beauty of Randomness
- Author
-
Marco Widmer, Reza Ghanaatian, and Andreas Burg
- Subjects
reliability, Computer science, Design for testing, approximate computing, random access memory, statistics, Hardware and Architecture, nanometer nodes, embedded systems, reliability engineering, faulty memories, measurement, quality-yield analysis, Electrical and Electronic Engineering, circuit faults, Software, Randomness - Abstract
This article presents a design-for-test methodology for embedded memories. The methodology relies on a fully random fault model of post-fabrication errors, which results in a low-overhead test strategy. The methodology's effectiveness is demonstrated on an embedded system with faulty memories.
- Published
- 2022
8. A Novel Ultra-Compact FPGA-Compatible TRNG Architecture Exploiting Latched Ring Oscillators
- Author
-
Giuseppe Scotti, Riccardo Della Sala, and Davide Bellizia
- Subjects
Random number generation, Computer science, Entropy (information theory), Ring oscillator, Electrical and Electronic Engineering, Field-programmable gate array, Throughput, Computer hardware, Randomness, Jitter, Voltage - Abstract
In this paper, we present a novel, ultra-compact, true random number generator (TRNG) architecture and its FPGA implementation. The proposed latched ring oscillator (LRO) TRNG allows the generation of a TRNG bit from a single FPGA slice. Despite its very compact structure, the proposed LRO-TRNG relies on both metastability and accumulated jitter as entropy sources and exhibits very good unpredictability and randomness. The architecture has been implemented on Xilinx Spartan-6 devices, and the TRNG performance has been extensively validated under supply voltage and temperature variations. Measurement results show that the LRO-TRNG exhibits an estimated entropy of about 7.99834 bits per byte (according to test T8 of AIS-31) and a throughput of 0.76 Mbit/s with a 50 MHz clock. A comparison against the state of the art shows that the proposed LRO-TRNG outperforms most previously published TRNGs in terms of the ratio between throughput and FPGA resource usage.
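As a rough illustration of this kind of entropy figure, one can estimate Shannon entropy over 8-bit blocks of the output bitstream; a near-full-entropy source approaches 8 bits per byte, which is how values like 7.99834 arise. This is a plug-in estimate, much simpler than the actual AIS-31 test T8 procedure.

```python
import math
from collections import Counter

def entropy_per_byte(bits):
    # Shannon entropy estimate over non-overlapping 8-bit blocks
    # of a bitstream; a simple plug-in estimate, not the AIS-31
    # test T8 (Coron) procedure itself.
    blocks = [tuple(bits[i:i + 8]) for i in range(0, len(bits) - 7, 8)]
    counts = Counter(blocks)
    n = len(blocks)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A perfectly uniform byte distribution scores exactly 8.0; a constant stream scores 0.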
- Published
- 2022
9. Improving Randomness of Symmetric Encryption for Consumer Privacy Using Metaheuristic-Based Framework
- Author
-
Min-Yan Tsai, Hsin-Hung Cho, Hsin-Te Wu, Chi-Yuan Chen, and Fan-Hsun Tseng
- Subjects
Human-Computer Interaction, Theoretical computer science, Symmetric-key algorithm, Hardware and Architecture, Computer science, Consumer privacy, Electrical and Electronic Engineering, Metaheuristic, Randomness, Computer Science Applications - Published
- 2022
10. Leveraging simulated and empirical data-driven insight to supervised-learning for porosity prediction in laser metal deposition
- Author
-
Vani Singh, Vidita Gawade, and Weihong 'Grace' Guo
- Subjects
Materials science, 3D printing, Industrial and Manufacturing Engineering, Finite element method, Heat flux, Hardware and Architecture, Control and Systems Engineering, Heat transfer, Thermal, Porosity, Software, Randomness, Pyrometer - Abstract
The advent of digital-twin manufacturing in additive manufacturing (AM) is to integrate the physical world of real-time 3D printing with the digital world of a simulated print. This paper contributes to digital-twin manufacturing in laser-based additive manufacturing by combining melt pools' simulated thermal behavior via finite element analysis (FEA) and melt pools' empirical thermal behavior via pyrometry-based sensors. Studying the thermal behavior of melt pools through their heat transfer characteristics determines melt-pool porosity and part quality. The FEA uses Goldak's moving heat flux model to capture the melt pool's physically bound temperature profile in three dimensions. Simulated data helps to mitigate the influence of measurement errors in real-world data and provides non-observable data, such as gradient changes of thermal behavior at the curvature of the 3D melt pool. The pyrometer captures empirical temperature behavior, including the uncertainty and randomness introduced to the process. A significant knowledge gap exists when predicting melt pool porosity with theoretical FEA or empirical in situ evidence alone. The gap is bridged by combining the data sources: specifically, feature engineering via functional principal component analysis (the empirical data source) and capturing the melt pool's 3-D temperature shape profile via FEA (the simulated data source). A hybrid model predicts melt pool porosity by capturing the strengths of prior simulated and posterior in situ empirical data, matching simulated melt pools to real-world empirical melt pools. Moreover, comparing predicted porosity labels with true porosity labels of a Ti–6Al–4V thin-wall structure from laser metal deposition verified the validity of the proposed interpretable and robust supervised-learning model. This methodology can be applied to other materials and part shapes printed on various additive manufacturing printers.
- Published
- 2022
11. Estimation of Aleatory Randomness by Sa(T1)-Based Intensity Measures in Fragility Analysis of Reinforced Concrete Frame Structures
- Author
-
Baoyin Sun, Yongan Shi, Yantai Zhang, and Zheng Wang
- Subjects
Estimation, Fragility, Modeling and Simulation, Frame (networking), Structural engineering, Reinforced concrete, Software, Intensity (heat transfer), Randomness, Computer Science Applications, Mathematics - Published
- 2022
12. Short-Term Traffic Flow Forecasting Using Ensemble Approach Based on Deep Belief Networks
- Author
-
Jin Liu, Zhiwu Li, Naiqi Wu, and Yan Qiao
- Subjects
Ensemble forecasting, Computer science, Mechanical Engineering, Feature selection, Machine learning, Hilbert–Huang transform, Computer Science Applications, Scheduling (computing), Deep belief network, Automotive Engineering, Artificial intelligence, Performance improvement, Intelligent transportation system, Randomness - Abstract
Transportation services play an increasingly significant role in people's daily lives and bring many benefits to individuals and to economic development. The randomness and volatility of traffic flows, however, constrain the effective provision of transportation services to a certain extent. Precise traffic flow forecasting is therefore the primary task in stabilizing intelligent transport systems and ensuring efficient scheduling of traffic. This paper investigates the application of an ensemble approach based on deep belief networks to short-term traffic flow forecasting. Traffic flow data collected from the real world are decomposed into several intrinsic mode functions (IMFs) and a residue with ensemble empirical mode decomposition (EEMD). Then, for each component, the essential feature subset is extracted by the minimum redundancy maximum relevance (mRMR) feature selection method, considering weather conditions and day properties. Each component is then trained by a deep belief network (DBN), and the component forecasts are summed to form the output of the ensemble model. Results indicate that the proposed approach achieves significant performance improvement over a single DBN and other selected methods.
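The decompose-model-sum pattern can be sketched with a deliberately crude stand-in for the pipeline: a moving-average trend plus a residual, each forecast by a simple rule, with the component forecasts summed. The paper's EEMD decomposition, mRMR feature selection, and DBN models are far more sophisticated than this sketch.

```python
def moving_average(x, w):
    # Trailing moving average; a crude stand-in for one EEMD component.
    out = []
    for i in range(len(x)):
        window = x[max(0, i - w + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

def ensemble_forecast(x, w=4):
    # Decompose into trend + residual, forecast each component
    # separately (linear extrapolation for the trend, mean for the
    # residual), then sum the component forecasts.
    trend = moving_average(x, w)
    residual = [xi - ti for xi, ti in zip(x, trend)]
    trend_forecast = 2 * trend[-1] - trend[-2]
    residual_forecast = sum(residual) / len(residual)
    return trend_forecast + residual_forecast
```

The point of the pattern is that each component is simpler to model than the raw series; the final output is just the sum of the per-component forecasts.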
- Published
- 2022
13. Degradation reliability modeling for two-stage degradation ball screws
- Author
-
Zhou Huaxi, Hu-Tian Feng, Jing-Lun Xie, Xiao-Yi Wang, and Chang-Guang Zhou
- Subjects
Computer science, General Engineering, Structural engineering, Machine tool, Numerical control, Ball (bearing), Degradation test, Reliability (statistics), Randomness, Degradation (telecommunications) - Abstract
The performance of ball screws degrades in two stages owing to deteriorating internal mechanisms and environmental stress, which lowers the positioning accuracy of the ball screws and thus affects the accuracy of computer numerical control machine tools. However, no research has been carried out on the degradation reliability of ball screws, and current studies usually treat the performance degradation of ball screws as a deterministic, single-stage degradation process, ignoring the randomness and two-stage features of the degradation. To this end, we first formulate a framework of degradation reliability for ball screws and then establish a two-stage performance degradation model with eight unknown parameters. To estimate the unknown parameters, we propose a two-phase expectation-maximization estimation method. Finally, we establish a degradation reliability model based on the degradation test data and validate it with experimental data, which shows that the model can capture the real degradation behavior of ball screws.
- Published
- 2022
14. Medical Image Encryption Scheme Using Multiple Chaotic Maps
- Author
-
Prabhakar Krishnan, Aravind Aji, and Kurunandan Jain
- Subjects
Pixel, Computer science, Chaotic, Encryption, Image (mathematics), Digital image, Computer engineering, Transmission (telecommunications), Artificial Intelligence, Signal Processing, Redundancy (engineering), Computer Vision and Pattern Recognition, Software, Randomness - Abstract
Telemedicine and various telemedical applications are revolutionizing a variety of healthcare departments through innovative means of remote diagnosis and faster first-aid administration. Digital images play an important role in these applications in ensuring better and faster health care. These digital images, which usually contain confidential diagnostic information about patients, are typically transmitted through public networks among hospitals, doctors, and patients. Consequently, there is a need to secure them while stored and in transit to guarantee the privacy of the patient. However, unique properties of digital image data, such as the high redundancy and correlation between pixels and their large size, make conventional cryptographic algorithms insufficient for encrypting them securely. As the conventional algorithms cease to be a reliable solution, the need arises to develop improved, dedicated image encryption algorithms. Chaotic systems appear random and unpredictable from the outside but are governed by deterministic equations or rules on the inside. Owing to these properties, along with ergodicity and heightened sensitivity to initial conditions, chaotic systems are among the best candidates for securing the storage and transmission of digital images. However, the security of a chaotic image encryption system depends on the chaotic behavior demonstrated by the chaotic map it applies, so different attacks can break a chaotic encryption system if the scheme is not well structured. This paper introduces a chaotic image encryption scheme that incorporates two chaotic maps, namely Arnold's cat map and the 2D logistic-sine-coupling map (2D-LSCM), for increased randomness and security of the encrypted image. We also analyze the performance and security of our scheme and compare it with other prominent chaotic image encryption schemes.
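Of the two maps, Arnold's cat map is a standard, fully specified pixel permutation, so its scrambling stage can be sketched directly. The 2D-LSCM diffusion stage, and the paper's exact combination of the two maps, are omitted here.

```python
def arnold_cat_map(image, iterations=1):
    # Arnold's cat map permutation of an N x N image:
    # (x, y) -> (x + y mod N, x + 2y mod N).
    # This scrambles pixel positions; a full encryption scheme would
    # follow it with a diffusion stage that also changes pixel values.
    n = len(image)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = image[x][y]
        image = out
    return image
```

The map is a bijection and is periodic: for a 4x4 image the permutation returns to the identity after 3 iterations, which is one reason scrambling alone (without diffusion) is not secure.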
- Published
- 2021
15. High-Security Sequence Design for Differential Frequency Hopping Systems
- Author
-
Zan Li, Jia Shi, Rui Chen, Lei Guan, and Lie-Liang Yang
- Subjects
Sequence, Computer Networks and Communications, Computer science, Hash function, Encryption, Computer Science Applications, Public-key cryptography, Control and Systems Engineering, Frequency-hopping spread spectrum, Standard algorithms, Affine transformation, Electrical and Electronic Engineering, Algorithm, Randomness, Information Systems - Abstract
The differential frequency hopping (DFH) technique is widely used in wireless communications for its ability to mitigate tracking interference and to provide confidentiality. However, electronic attacks on wireless systems are becoming increasingly severe, which poses many challenges to DFH sequences designed on the basis of linear congruence theory, fuzzy and chaotic theory, etc. In this article, we investigate sequence design in DFH systems by exploiting the equivalence principle between the G-function algorithm and the encryption algorithm, in order to achieve high security. In more detail, we first propose a novel G-function with the aid of the Government Standard algorithm and the Rivest–Shamir–Adleman algorithm. Then, two sequence design algorithms are proposed: the G-function-assisted sequence generation algorithm, which takes full advantage of symmetric and asymmetric encryption algorithms, and the high-order G-function-aided sequence generation algorithm, which is capable of enhancing the correlation of the elements in a DFH sequence. Moreover, the security and ergodicity performance of the proposed algorithms is analyzed. Our studies and results show that the DFH sequences generated by the proposed algorithms significantly outperform sequences generated by the reversible hash algorithm and affine transformation in terms of uniformity, randomness, complexity, and security.
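The G-function idea, in which the next frequency depends on a secret key and the previous frequency, can be sketched with a cryptographic hash standing in for the encryption-based G-function. This mimics the structure only; it is not the paper's GOST/RSA-based construction, and the channel count and keying are illustrative assumptions.

```python
import hashlib

def dfh_sequence(key, seed_freq, n_hops, n_channels=64):
    # Hash-driven hop sequence: each channel index is derived from the
    # secret key, the previous channel, and the hop counter, so the
    # sequence is hard to predict without the key (a sketch of the
    # G-function structure, not the paper's exact algorithm).
    freqs = [seed_freq]
    for i in range(n_hops):
        digest = hashlib.sha256(
            key + bytes([freqs[-1]]) + i.to_bytes(4, "big")
        ).digest()
        freqs.append(digest[0] % n_channels)
    return freqs[1:]
```

The same key and seed reproduce the sequence exactly (needed by the receiver), while a different key yields an unrelated sequence.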
- Published
- 2021
16. Global damage model for the seismic reliability analysis of a base‐isolated structure
- Author
-
Xiaoning Huang and Ning Wang
- Subjects
Structure (category theory), Probability density function, Building and Construction, Structural engineering, Quadratic equation, Skewness, Architecture, Kurtosis, Sample space, Safety, Risk, Reliability and Quality, Randomness, Reliability (statistics), Civil and Structural Engineering, Mathematics - Abstract
A method for seismic reliability analysis based on a global damage model is proposed for a base-isolated structure. A sample space is created using the Latin hypercube sampling method, in which the randomness of the structure and of the seismic ground motions is considered, and dynamic elastic-plastic analysis is then carried out for each sample. The cumulative damage factors of the upper structure and of the isolated story are calculated using a global damage model of an isolated structure. The probability of progressive sideway collapse of a base-isolated structure under earthquake action is obtained using the quadratic fourth-moment method, which is also used to analyze the evolution of the probability density of cumulative damage. From this analysis, the trend of the damage factor over time can be obtained using the global seismic damage model based on the quadratic fourth-moment method. Because the quadratic fourth-moment method is introduced, the impacts of the skewness and kurtosis coefficients on the failure probability of the structure can be taken into account even when the distribution of a structural parameter cannot be confirmed. The method provides results that are more accurate than those of the classical first-order second-moment method. A method for the seismic reliability analysis of an isolated structure from the component level to the global structure is also established, in which the failure probabilities of the upper structure and of the isolated story are considered with a global seismic damage model.
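The Latin hypercube sampling step is standard and can be sketched on the unit hypercube (mapping each coordinate back through the corresponding parameter's distribution, as the structural analysis would require, is omitted):

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    # Latin hypercube sampling on [0, 1)^n_dims: each dimension is
    # split into n_samples equal strata and every stratum is hit
    # exactly once, giving better coverage than plain Monte Carlo
    # for the same number of samples.
    rng = rng or random.Random()
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)          # random pairing of strata across dims
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n_samples
    return samples
```

Each marginal is stratified by construction: with 10 samples, the 10 values in every dimension fall one per decile.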
- Published
- 2021
17. Improve PBFT Based on Hash Ring
- Author
-
Siling Feng, Wang Zhong, Mengxing Huang, Zheng Xiandong, and Wenlong Feng
- Subjects
Computer Networks and Communications, Computer science, Node (networking), Reliability (computer networking), Hash function, Process (computing), Telecommunication, Electrical and Electronic Engineering, Communication complexity, Byzantine fault tolerance, Selection (genetic algorithm), Randomness, Information Systems - Abstract
To address the problems of the practical Byzantine fault tolerance (PBFT) algorithm, such as high communication complexity, frequent view changes when Byzantine nodes become primary nodes, and random primary-node selection, the HR-PBFT algorithm is proposed. First, the HR-PBFT algorithm uses a hash ring to group nodes, which ensures the randomness and fairness of the grouping. Then, a dual-view mechanism is used in the consensus process, where the first-layer nodes maintain the primary view and the second-layer nodes maintain the secondary view to ensure the proper operation of the algorithm. Finally, a Byzantine-node determination mechanism is introduced to evaluate node status according to node behavior in the consensus process, improve the reliability of primary-node selection, and reduce the frequency of view changes. The experimental results show that the optimized HR-PBFT algorithm effectively mitigates the sharp increase in the number of communications caused by growth in the number of nodes in the network and prevents frequent view changes.
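The hash-ring grouping step can be sketched as follows: each node's position on the ring is its hash, and the ring is cut into equal arcs, one per group. The hash makes the grouping deterministic yet hard for any node to manipulate, which is the fairness property the abstract relies on (a sketch, not HR-PBFT's exact construction).

```python
import hashlib

def hash_ring_groups(node_ids, n_groups):
    # Place each node on a 2^32-position hash ring by hashing its id,
    # then assign it to the group whose arc its position falls into.
    ring_size = 2 ** 32
    groups = {g: [] for g in range(n_groups)}
    for node in node_ids:
        pos = int.from_bytes(hashlib.sha256(node.encode()).digest()[:4], "big")
        groups[pos * n_groups // ring_size].append(node)
    return groups
```

Because SHA-256 is deterministic, every honest node computes the same grouping independently, with no extra communication rounds.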
- Published
- 2021
18. Improved extreme learning machine with AutoEncoder and particle swarm optimization for short-term wind power prediction
- Author
-
Jaouad Boumhidi, Ali Yahyaouy, and Dounia El Bourakadi
- Subjects
Mathematical optimization, Wind power, Computer science, Particle swarm optimization, Wind power forecasting, Autoencoder, Wind speed, Power (physics), Artificial Intelligence, Software, Randomness, Extreme learning machine - Abstract
Wind energy is a green source of electricity that is growing faster than other renewable energies. However, because it depends mainly on wind speed, this source is characterized by randomness and fluctuation that make optimal management challenging. To remedy this, it is essential to predict the meteorological data or the power produced by generators. In this paper, we present a wind power forecasting approach based on the regularized extreme learning machine algorithm (R-ELM), the particle swarm optimization method (PSO), and an AutoEncoder network (AE), termed the AutoEncoder-optimal regularized extreme learning machine (AE-ORELM). First, we train the AE model with the ELM algorithm. The resulting output weights are then used as the input weights of the R-ELM model. Furthermore, the PSO method is used to optimally select the hyperparameters of the whole model, namely the regularization parameter and the number of hidden nodes in the hidden layer. The simulation results show that the proposed AE-ORELM achieves better testing accuracy with a faster training time compared to related models.
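The R-ELM core, the closed-form part of the pipeline, can be sketched as ridge regression on random tanh features: hidden weights are random and fixed, and only the output weights are solved for. This is a minimal sketch on a generic regression target; the AE pre-training of the input weights and the PSO hyperparameter search described in the abstract are omitted.

```python
import numpy as np

def train_relm(X, y, n_hidden=50, reg=1e-2, seed=0):
    # Regularized extreme learning machine: random, fixed hidden
    # weights; output weights solved in closed form (ridge regression).
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                       # random feature map
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict_relm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because training is a single linear solve, R-ELM is fast enough that a PSO outer loop over (n_hidden, reg) remains cheap, which is what makes the AE-ORELM combination practical.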
- Published
- 2021
19. Neutralizing the impact of atmospheric turbulence on complex scene imaging via deep learning
- Author
-
Zichao Liu, Yi Lu, Xiangzhi Bai, Ying Chen, Peng Wang, Sheng Guo, Junzhang Chen, and Darui Jin
- Subjects
Computer Networks and Communications, Computer science, Deep learning, Process (computing), Iterative reconstruction, Human-Computer Interaction, Optical path, Artificial Intelligence, Computer Vision and Pattern Recognition, Representation (mathematics), Spatial analysis, Algorithm, Software, Randomness, Free-space optical communication - Abstract
A turbulent medium with eddies of different scales gives rise to fluctuations in the index of refraction during wave propagation, which interfere with the original spatial relationship, phase relationship, and optical path. The outputs of two-dimensional imaging systems suffer from anamorphosis brought about by this effect. Randomness, along with multiple types of degradation, makes it a challenging task to analyse the reciprocal physical process. Here, we present a generative adversarial network (TSR-WGAN), which integrates temporal and spatial information embedded in the three-dimensional input to learn the representation of the residual between the observed and latent ideal data. Vision-friendly and credible sequences are produced without extra assumptions on the scale and strength of turbulence. The capability of TSR-WGAN is demonstrated through tests on our dataset, which contains 27,458 sequences with 411,870 frames of algorithm-simulated data, physically simulated data, and real data. TSR-WGAN exhibits promising visual quality and a deep understanding of the disparity between random perturbations and object movements. These preliminary results also shed light on the potential of deep learning to parse stochastic physical processes from particular perspectives and to solve complicated image reconstruction problems given limited data. Turbulent optical distortions in the atmosphere limit the ability of optical technologies such as laser communication and long-distance environmental monitoring. A new method using adversarial networks learns to counter the physical processes underlying the turbulence so that complex optical scenes can be reconstructed.
- Published
- 2021
20. Towards a high robust neural network via feature matching
- Author
-
Songyang Lao, Yanming Guo, Jian Li, Yingmei Wei, Yulun Wu, and Liang Bai
- Subjects
Similarity (geometry), Artificial neural network, Basis (linear algebra), Contextual image classification, Computer science, Feature vector, Pattern recognition, Library and Information Sciences, Robustness (computer science), Media Technology, Artificial intelligence, Randomness, Feature matching, Information Systems - Abstract
Image classification systems have been found vulnerable to adversarial attacks, which are imperceptible to humans but can easily fool deep neural networks. Recent research indicates that regularizing the network by introducing randomness can greatly improve the model's robustness against adversarial attack, but the randomness module normally involves complex calculations and numerous additional parameters and seriously degrades the model's performance on clean data. In this paper, we propose a feature matching module to regularize the network. Specifically, our model learns a feature vector for each category and imposes additional restrictions on image features. The similarity between image features and category features is then used as the basis for classification. Our method does not introduce any network parameters beyond those of the undefended model and can easily be integrated into any neural network. Experiments on the CIFAR10 and SVHN datasets show that our proposed module effectively improves accuracy on both clean and perturbed data in comparison with state-of-the-art defense methods, and outperforms the L2P method by 6.3% and 24% on clean and perturbed data, respectively, using the ResNet-V2(18) architecture.
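The matching rule the abstract describes (predict the category whose learned feature vector is most similar to the image feature) can be sketched as follows, assuming cosine similarity as the measure; how the category vectors are learned is omitted.

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify(feature, category_features):
    # Predict the category whose feature vector best matches the
    # image feature (the learning of the category vectors is omitted).
    sims = {c: cosine_similarity(feature, f) for c, f in category_features.items()}
    return max(sims, key=sims.get)
```

Since classification depends only on similarity to a handful of category vectors, the module adds no parameters beyond those vectors, consistent with the claim in the abstract.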
- Published
- 2021
21. Fully Synthesizable Unified True Random Number Generator and Cryptographic Core
- Author
-
Massimo Alioto and Sachin Taneja
- Subjects
Key generation ,Computer engineering ,business.industry ,Clock signal ,Computer science ,Random number generation ,Datapath ,Cryptography ,Confusion and diffusion ,Electrical and Electronic Engineering ,Encryption ,business ,Randomness - Abstract
This paper introduces a novel class of architectures that unify true random number generation and private-key cryptography by reusing the cryptographic core for both tasks. The unified architecture is well suited for low-cost constrained secure integrated systems, in view of the inherent area efficiency and the low design effort entailed by conventional automated design flows. Clock pulse over-stretching in pulsed latch clocking generates randomness by inducing metastability and jittered oscillations. Shannon confusion and diffusion in the cryptographic datapath enforce high entropy and robustness against variations. Conventional cryptographic operation is alternatively performed at moderate clock pulsewidths. A 40-nm CMOS test chip demonstrates the proposed unified architecture with a compact area of 0.43 × 10^6 F^2 (F = minimum feature size), based on a SIMON cryptographic core. The true random number generator (TRNG) output shows cryptographic-grade quality without any calibration across dice, process (across two manufacturing lots), voltage, and temperature variations. Energy per encryption down to 0.25 pJ/bit is demonstrated. Unification of the TRNG and the cryptographic core results in inherent data locality and obfuscation of key generation within logic, improving resilience to physical attacks.
- Published
- 2021
22. Strength uncertainty analysis of composite turbine blade with small sample size
- Author
-
Jiang Fan, Gaoxiang Chen, Daxiang Liu, and Shaojing Dong
- Subjects
Turbine blade ,business.industry ,Constitutive equation ,0211 other engineering and technologies ,020101 civil engineering ,02 engineering and technology ,Building and Construction ,Interval (mathematics) ,Structural engineering ,0201 civil engineering ,law.invention ,Nonlinear system ,law ,021105 building & construction ,Architecture ,Tolerance interval ,Sensitivity (control systems) ,Safety, Risk, Reliability and Quality ,business ,Randomness ,Uncertainty analysis ,Civil and Structural Engineering ,Mathematics - Abstract
SiC/SiC ceramic matrix composites (CMCs) exhibit significant randomness in their damage processes under load. In this study, a simplified simulation method for determining the randomness of the material behavior was established based on the macroscopic damage constitutive model, accounting for the small number of test samples and the complexity of the data sources. Moreover, the established model was verified by introducing an uncertainty parameter into the constitutive model. By considering a typical high-dimensional nonlinear function as a numerical example, the reliability and error of the characterization methods were compared with specimens and studied in terms of the extreme-value interval, tolerance interval, evidence theory, and fuzzy set method with the distribution characteristics of single and mixed parameters. In conjunction with the sensitivity analysis, the randomness of the mechanical behavior was more evident at the damage stage, and the uncertainty of the material behavior relied on the loading state. In addition, a correction method for the constitutive parameters and formal errors of the model was established using the Bayesian theorem to incorporate additional data. Thereafter, the proposed method was employed to quantify the uncertainty in the strength of a CMC turbine rotor blade. The results revealed that the failure risk can be more effectively evaluated by strength analysis that considers the uncertainty of the model, thus providing guidance toward structural design improvement aligned with engineering practice.
- Published
- 2021
23. The PDEM-based time-varying dynamic reliability analysis method for a concrete dam subjected to earthquake
- Author
-
Shuli Fan, Qiang Xu, Jianyun Chen, Pengfei Liu, and Qibin Jia
- Subjects
business.industry ,Seismic loading ,Probability density function ,Building and Construction ,Structural engineering ,Seismic analysis ,Nonlinear system ,Architecture ,Generalized extreme value distribution ,Gravity dam ,Initial value problem ,Safety, Risk, Reliability and Quality ,business ,Randomness ,Geology ,Civil and Structural Engineering - Abstract
A time-varying dynamic reliability analysis method based on the generalized probability density evolution method (GPDEM) is proposed for a concrete gravity dam under seismic loading. By considering both randomness and time-variability, the method can be applied to predict the time-varying seismic performance of concrete dams from the perspective of probability. Two probability density evolution equations (PDEEs) are established in the method: the extreme value distribution-based PDEE.1, which considers the randomness of material parameters under seismic loading, and PDEE.2, which considers the deterioration of material properties during the service life. To solve PDEE.2, a novel initial condition is derived from PDEE.1 for the first time. The numerical implementation is illustrated with a concrete gravity dam. The results show that abundant and continuous probability evolution information on the time-varying seismic performance can be captured. Then, the continuous time-varying reliabilities are obtained through the specified thresholds. The accuracy and effectiveness of the newly proposed method are compared with those of Monte Carlo simulation (MCS). Finally, the proposed method is verified to be efficient and suitable for complex nonlinear structures under seismic loading, which could provide a potential tool for life-cycle seismic design.
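For reference, the generalized density evolution equation underlying the GPDEM takes the following one-dimensional form for a response Z(t) with random parameter vector Θ; this is the standard GPDEM formulation from the probability density evolution literature, which the paper's specific PDEE.1/PDEE.2 constructions build upon:

```latex
\frac{\partial p_{Z\Theta}(z,\theta,t)}{\partial t}
  + \dot{Z}(\theta,t)\,\frac{\partial p_{Z\Theta}(z,\theta,t)}{\partial z} = 0,
\qquad
p_{Z}(z,t) = \int_{\Omega_\Theta} p_{Z\Theta}(z,\theta,t)\,\mathrm{d}\theta
```

Solving the first equation for each representative point θ and integrating over the parameter space yields the evolving probability density of the response, from which reliabilities against specified thresholds follow.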
- Published
- 2021
24. Signal structure information-based target detection with a fully convolutional network
- Author
-
Hongwei Liu, Xiaojun Peng, Chang Gao, and Junkun Yan
- Subjects
Scheme (programming language) ,Structure (mathematical logic) ,Signal processing ,Information Systems and Management ,Computational complexity theory ,Computer science ,business.industry ,Pattern recognition ,Signal ,Computer Science Applications ,Theoretical Computer Science ,Sampling (signal processing) ,Artificial Intelligence ,Control and Systems Engineering ,Oversampling ,Artificial intelligence ,business ,computer ,Software ,Randomness ,computer.programming_language - Abstract
For target echoes, structure information can be introduced by signal processing techniques such as matched filtering and coherent integration, but it is usually omitted in traditional target detection (TTD) methods. Detection performance can be improved by taking this information into consideration. To deal with the randomness in the signal structure information (SSI) induced by sampling and the uncertain distribution of scatterers, we resort to a data-driven method and propose a novel detection scheme. To make use of the SSI, a fully convolutional network (FCN) is designed to hierarchically learn the SSI. Simulation results show that better detection performance is obtained with the proposed SSI-based target detection method compared to the TTD method. The justifications for using the SSI and the FCN are investigated by considering the oversampling strategy and a post hoc visual explanation technique, respectively. In addition, the computational complexity is partially analyzed both in theory and in experiment.
- Published
- 2021
25. Hardware Private Circuits: From Trivial Composition to Full Verification
- Author
-
Itamar Levi, Gaëtan Cassiers, Benjamin Grégoire, François-Xavier Standaert, UCL - SST/ICTM/ELEN - Pôle en ingénierie électrique, Catholic University of Leuven - Katholieke Universiteit Leuven (KU Leuven), Sûreté du logiciel et Preuves Mathématiques Formalisées (STAMP), Inria Sophia Antipolis - Méditerranée (CRISAM), Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria), Gaëtan Cassiers and François-Xavier Standaert are resp. Research Fellow and Senior Associate Researcher of the Belgian Fund for Scientific Research (FNRS-F.R.S.). This work has been funded in part by the ERC project 724725., and European Project: 724725,SWORD(2017)
- Subjects
[INFO.INFO-AR]Computer Science [cs]/Hardware Architecture [cs.AR] ,Computer science ,Cryptography ,02 engineering and technology ,Masking countermeasure ,Theoretical Computer Science ,[INFO.INFO-CR]Computer Science [cs]/Cryptography and Security [cs.CR] ,Composability ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,Side-channel attacks ,Randomness ,Block cipher ,Physical defaults ,Glitch-based leakages ,Cryptographic engineering ,business.industry ,020206 networking & telecommunications ,020202 computer hardware & architecture ,Computational Theory and Mathematics ,Hardware and Architecture ,Logic gate ,business ,Software ,Computer hardware
The design of glitch-resistant higher-order masking schemes is an important challenge in cryptographic engineering. A recent work by Moos et al. (CHES 2019) showed that most published schemes (and all efficient ones) exhibit local or composability flaws at high security orders, leaving a critical gap in the literature on hardware masking. In this paper, we first extend the simulatability framework of Belaïd et al. (EUROCRYPT 2016) and prove that a compositional strategy that is correct without glitches remains valid with glitches. We then use this extended framework to prove the first masked gadgets that enable trivial composition with glitches at arbitrary orders. We show that the resulting "Hardware Private Circuits" approach the implementation efficiency of previous (flawed) schemes. We finally investigate how trivial composition can serve as a basis for a tool that allows verifying full masked hardware implementations (e.g., of complete block ciphers) at any security order from their HDL code. As side products, we improve the randomness complexity of the best published refreshing gadgets, show that some S-box representations allow latency reductions and confirm practical claims based on implementation results.
- Published
- 2021
26. Cancelable biometric security system based on advanced chaotic maps
- Author
-
Said E. El-Khamy, Noha Ramadan, Ashraf A. M. Khalaf, Fathi E. Abd El-Samie, Hossam Eldin H. Ahmed, Walid El-Shafai, and Hayam A. Abd El-Hameed
- Subjects
Authentication ,Biometrics ,business.industry ,Computer science ,Data_MISCELLANEOUS ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Chaotic ,Word error rate ,Pattern recognition ,Fingerprint recognition ,Encryption ,Computer Graphics and Computer-Aided Design ,Convolution ,Computer Science::Computer Vision and Pattern Recognition ,Computer Vision and Pattern Recognition ,Artificial intelligence ,business ,Software ,Randomness ,Computer Science::Cryptography and Security - Abstract
In recent years, the protection of human biometrics has witnessed exponential growth. Fingerprint recognition has been utilized for cell phone authentication, biometric passports, and airport security. To improve the fingerprint recognition process, different approaches have been proposed. To keep biometrics safe from hacking attempts, non-invertible transformations or encryption algorithms have been proposed to provide cancelable biometric templates. This paper presents a scheme that depends on chaos-based image encryption with different chaotic maps. The chaotic maps are used instead of a simple random number generator to overcome the loss of randomness that arises with a large number of images. To preserve the authentication performance, the training images are convolved with random kernels to build the encrypted biometric templates. Different templates can be obtained from the same biometrics by varying the chaotic map used to generate the convolution kernels. A comparative study of the chaotic maps is introduced to determine the one that gives the best performance. The simulation experiments reveal that the enhanced quadratic map 3 achieves the lowest error probability of 3.861% in the cancelable fingerprint recognition system. The cancelable fingerprint recognition system based on this chaotic map achieves the largest probability of detection of 96.139%, with an Equal Error Rate (EER) of 0.593.
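As a toy illustration of the chaos-based idea (a keystream drawn from a chaotic map rather than a plain random number generator), the sketch below uses the classic logistic map. The paper's actual maps and its convolution-based template pipeline differ; the parameters and the XOR cipher here are arbitrary stand-ins serving only to show how a map's (x0, r) pair acts as a key.

```python
def logistic_keystream(x0, r, n):
    """Generate n keystream bytes from the logistic map x <- r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)  # quantize the chaotic state to one byte
    return bytes(out)

def xor_cipher(data, x0=0.4123, r=3.99):
    # XOR is its own inverse, so the same call encrypts and decrypts
    ks = logistic_keystream(x0, r, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

template = b"fingerprint template"
protected = xor_cipher(template)
recovered = xor_cipher(protected)
```

Cancelability comes from the key: changing (x0, r), or the map itself, yields a completely different protected template from the same biometric.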
- Published
- 2021
27. Risk assessment of coal and gas outburst in driving face based on finite interval cloud model
- Author
-
Zhonghui Li, Enyuan Wang, Guorui Zhang, and Ben Qin
- Subjects
Atmospheric Science ,business.industry ,Computer science ,Coal mining ,Cloud computing ,Interval (mathematics) ,computer.software_genre ,Fuzzy logic ,Identification (information) ,Earth and Planetary Sciences (miscellaneous) ,Fuzzy number ,Data mining ,business ,Risk assessment ,computer ,Randomness ,Water Science and Technology - Abstract
Coal and gas outburst is one of the main disasters that seriously threaten workers' safety during coal production. Timely identification and evaluation of potential outbursts before tunneling helps to implement targeted control measures. Nevertheless, the influencing factors of outbursts are so complex that no suitable index system and evaluation method is available yet. In this paper, a more reasonable and complete index system for outburst risk (three categories of factors and 16 indicators) is established, in which the risk is divided into three levels. Then, triangular fuzzy numbers are adopted to quantify the indicators, and the logarithmic fuzzy preference programming method is used to calculate the weights. The cloud distribution under finite interval processing is generated based on the cloud numerical characters of each indicator. In addition, the risk level is determined according to the calculation results of the multi-index comprehensive membership degree. Finally, the entire evaluation system is applied to two excavated coal roadways for experiments, which show that the finite interval cloud model delivers a more objective and reasonable risk assessment. Of the 58 excavation cycles evaluated, five potential outburst threat areas showed different degrees of dynamic outburst phenomena, indicating a good correspondence between the evaluation results and the actual risk. This method effectively accounts for the fuzziness and randomness of the indexes and is able to classify outburst risk effectively, providing insights for the scientific and accurate assessment of such risks in front of the driving face.
- Published
- 2021
28. Reliability optimization of cutting parameters considering the diameter error of slender shaft
- Author
-
Pengfei Ding, Guodong Yang, Xianzhen Huang, and Yuxiong Li
- Subjects
Reliability optimization ,business.industry ,Mechanical Engineering ,Sequence optimization ,Structural engineering ,Finite element method ,Constraint (information theory) ,Machining ,Mechanics of Materials ,Kriging ,business ,Randomness ,Reliability (statistics) ,Mathematics - Abstract
In slender shaft turning, any diameter error in the workpieces can cause cutting tool wear and poor machining accuracy. Published research ignores the integrated analysis of diameter error, the randomness of parameters, and optimization models. This paper sets material removal rate (MRR) as the optimization objective function and considers the randomness of cutting parameters in a reliability parameter optimization model design, under the constraint of diameter error. The cutting force is calculated based on the unequal shear zone model and is used in finite element analysis of the slender shaft, deriving the diameter error model. The derived complex error model is replaced by the Kriging fitting method, reducing the calculation time to less than 1 %. Single loop sequence optimization and reliability assessment (SORA) is used to optimize reliability. The results show significant improvement of the MRR, while the reliability of each constraint condition is close to 1.
- Published
- 2021
29. Indoor evacuation model based on visual-guidance artificial bee colony algorithm
- Author
-
Aiping Liu, Jiayuan Du, Zhiwei Ye, Xinlu Zong, and Chunzhi Wang
- Subjects
Roulette ,Operations research ,Emergency management ,business.industry ,Computer science ,Process (computing) ,Building and Construction ,Cellular automaton ,Artificial bee colony algorithm ,Social force ,Visual guidance ,business ,Randomness ,Energy (miscellaneous) - Abstract
Research on evacuation simulation and modeling is an important and urgent issue for emergency management. This paper presents an evacuation model based on cellular automata and social force to simulate evacuation dynamics. The attractive force of the target position, the repulsive forces of individuals and obstacles, as well as congestion are considered in order to simulate the interaction among evacuees and the changing environment. A visual-guidance-based artificial bee colony algorithm is proposed to optimize the evacuation process. Each evacuee moves toward the exits under the guidance of the leading bee in his or her visual field. The leading bee is selected according to comprehensive factors including the distance from the current individual, the number of obstacles, and congestion, which avoids the randomness of the roulette mechanism used by the basic artificial bee colony algorithm. The experimental results indicate that the proposed model and algorithm achieve effective performance for indoor evacuation problems with large numbers of evacuees and obstacles, which accords with actual evacuation situations.
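A minimal sketch of the leading-bee choice as described: deterministic selection within the visual field by a composite cost of distance, obstacles and congestion, in place of the basic algorithm's roulette selection. The weights, the cost form, and the data layout are invented for illustration and are not the paper's parameters.

```python
import math

def pick_leading_bee(evacuee_pos, candidates, visual_radius=5.0,
                     w_dist=1.0, w_obs=0.5, w_cong=0.5):
    """Return the candidate in sight with the lowest composite cost."""
    best, best_cost = None, float("inf")
    for c in candidates:
        d = math.dist(evacuee_pos, c["pos"])
        if d > visual_radius:  # outside the evacuee's visual field
            continue
        cost = w_dist * d + w_obs * c["obstacles"] + w_cong * c["congestion"]
        if cost < best_cost:
            best, best_cost = c, cost
    return best

candidates = [
    {"name": "A", "pos": (1.0, 1.0), "obstacles": 4, "congestion": 6},
    {"name": "B", "pos": (2.0, 0.0), "obstacles": 0, "congestion": 1},
    {"name": "C", "pos": (9.0, 9.0), "obstacles": 0, "congestion": 0},  # out of sight
]
leader = pick_leading_bee((0.0, 0.0), candidates)
```

Candidate A is nearer but heavily obstructed and congested, so the clearer candidate B wins; C is never considered because it lies beyond the visual radius.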
- Published
- 2021
30. Impervious to Randomness: Confounding and Selection Biases in Randomized Clinical Trials
- Author
-
Pavlos Msaouel
- Subjects
Selection bias ,Cancer Research ,medicine.medical_specialty ,Randomization ,business.industry ,media_common.quotation_subject ,Confounding ,food and beverages ,General Medicine ,law.invention ,Clinical trial ,Random Allocation ,Oncology ,Randomized controlled trial ,law ,medicine ,Humans ,Observational study ,Intensive care medicine ,business ,Selection Bias ,Randomness ,Selection (genetic algorithm) ,Randomized Controlled Trials as Topic ,media_common - Abstract
The random allocation of therapies in randomized clinical trials is a powerful tool that removes all confounding biases that can affect treatment assignment. However, confounders influencing mediators of the treatment effect are unaffected by randomization and should be considered during trial design and statistical modeling. Examples of such mediators include biomarkers predictive of response to targeted therapies in oncology. Patient selection for such biomarkers is prudent in clinical trials. Conversely, prognostic information on outcome heterogeneity can be derived from observational datasets that include more representative populations. The fusion of experimental and observational data can then allow patient-specific inferences.
- Published
- 2021
31. Stochastic Roadside Unit Location Optimization for Information Propagation in the Internet of Vehicles
- Author
-
Jia Hu, Xin Li, Yunyi Liang, and Ning Ma
- Subjects
Mathematical optimization ,Linear programming ,Computer Networks and Communications ,Stochastic process ,Computer science ,business.industry ,Computation ,Stochastic programming ,Computer Science Applications ,Hardware and Architecture ,Signal Processing ,Genetic algorithm ,The Internet ,business ,Realization (systems) ,Randomness ,Information Systems - Abstract
This study investigates the problem of roadside unit (RSU) location optimization for information propagation under stochastic traffic conditions. The goal of RSU location optimization is to promote multihop information propagation in the Internet of Vehicles which is the promising application of the Internet of Things in transportation. Considering the information propagation time is significantly affected by traffic density and traffic density is endowed with randomness, the problem is formulated as a two-stage mixed-integer nonlinear stochastic programming. The model aims to minimize the sum of the cost associated with RSU investment and the expectation of the penalty cost associated with the network information propagation time exceeding an acceptable threshold. In the first stage of the programming, the number and location of RSUs are determined when network-wide traffic density is not realized. In the second stage, given the RSU location schemes determined in the first stage and the realization of traffic density, the information propagation shortest paths are determined for all origin–destination pairs to minimize network information propagation time. A genetic algorithm (GA) integrated with the solution of a mixed-integer linear programming (GA-MILP) is proposed to solve the model. Numerical results indicate that the advantage of the proposed model in the reduced information propagation time per cost over the deterministic model can be up to 15.54%. Compared with the conventional GA, the GA-MILP has 10.01% higher computation efficiency. This further leads to a 14.73% lower objective value achieved by the GA-MILP when the number of iterations is 50.
- Published
- 2021
32. Review on occupancy detection and prediction in building simulation
- Author
-
Jian Yao, Qiang Zhang, Wanyue Chen, Shuxue Han, Zhe Tian, and Yan Ding
- Subjects
Occupancy ,Computer science ,business.industry ,Thermal comfort ,Building and Construction ,Building design ,Building simulation ,Domain (software engineering) ,Reliability engineering ,Software ,business ,Intelligent control ,Randomness ,Energy (miscellaneous) - Abstract
Energy simulation results for buildings have significantly deviated from actual consumption because of the uncertainty and randomness of occupant behavior. Such differences are mainly caused by the inaccurate estimation of occupancy in buildings. Therefore, the error between reality and prediction could be largely reduced by improving the accuracy level of occupancy prediction. Although various studies on occupancy have been conducted, there are still many differences in the approaches to detection, prediction, and validation. Reports published within this domain are reviewed in this article to discover the advantages and limitations of previous studies, and gaps in the research are identified for future investigation. Six methods of monitoring and their combinations are analyzed to provide effective guidance in choosing and applying a method. The advantages of deterministic schedules, stochastic schedules, and machine-learning methods for occupancy prediction are summarized and discussed to improve prediction accuracy in future work. Moreover, three applications of occupancy models—improving building simulation software, facilitating building operation control, and managing building energy use—are examined. This review provides theoretical guidance for building design and makes contributions to building energy conservation and thermal comfort through the implementation of intelligent control strategies based on occupancy monitoring and prediction.
- Published
- 2021
33. Multi-party watermark embedding with frequency-hopping sequences
- Author
-
Hanzhou Wu and Limengnan Zhou
- Subjects
Cover (telecommunications) ,Computer Networks and Communications ,business.industry ,Computer science ,Applied Mathematics ,Word error rate ,Watermark ,Pattern recognition ,Object (computer science) ,Computational Theory and Mathematics ,Data extraction ,Frequency-hopping spread spectrum ,Embedding ,Artificial intelligence ,business ,Randomness - Abstract
Embedding multiple watermarks into a digital object enables multiple purposes to be realized. In this paper, we present a multi-party watermark embedding framework based on frequency-hopping sequences (FHSs). In the proposed work, a certain number of FHSs are generated in advance and then randomly assigned to multiple users. Each user uses an assigned FHS to embed his own watermark data into the cover object by slightly modifying the content. In this way, the resulting marked object containing multiple watermarks can be put into use. During the phase of watermark verification, each user can extract his own watermark from the marked object with the corresponding FHS without interacting with other users. Since the used FHSs result in a very low number of element collisions, the probability of altering the same content within the digital object is low, meaning that the error rate of data extraction for each user will be low. Moreover, if the digital object is modified, the embedded information can still be retrieved because the FHSs provide high randomness. Experimental results show that our work enables multiple users to reliably extract their own watermark information for verification even if the marked object was maliciously attacked, which verifies its superiority.
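The multi-party embedding and independent per-user extraction can be sketched with a toy least-significant-bit scheme: each user's FHS picks the sample positions carrying that user's bits, and non-colliding sequences keep users from overwriting each other. This is an illustrative stand-in, not the paper's embedding method, and the hop sequences below are hand-picked rather than properly constructed FHSs.

```python
def embed(cover, fhs, bits):
    """Write each watermark bit into the LSB of the sample indexed by the FHS."""
    marked = list(cover)
    for pos, b in zip(fhs, bits):
        marked[pos] = (marked[pos] & ~1) | b
    return marked

def extract(marked, fhs, n_bits):
    return [marked[pos] & 1 for pos in fhs[:n_bits]]

cover = list(range(100, 120))
fhs_user1 = [3, 7, 12, 18]   # hop positions assigned to user 1
fhs_user2 = [1, 5, 10, 15]   # no collisions with user 1's positions
marked = embed(cover, fhs_user1, [1, 0, 1, 1])
marked = embed(marked, fhs_user2, [0, 1, 1, 0])
# Each user recovers their own watermark without interacting with the other
w1 = extract(marked, fhs_user1, 4)
w2 = extract(marked, fhs_user2, 4)
```

When the sequences collide rarely, a later embedder almost never alters a position carrying another user's bit, which is why low-collision FHSs keep each user's extraction error rate low.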
- Published
- 2021
34. A Framework for Investigating Rules of Life by Establishing Zones of Influence
- Author
-
Felisa A. Smith, Susanta K. Sarkar, Beth A Reinke, Derek Wright, A. Michelle Lawing, and Michael W. McCoy
- Subjects
business.industry ,Insular biogeography ,DNA Mutational Analysis ,Big data ,Plant Science ,Cognitive reframing ,Space (commercial competition) ,Biological Evolution ,Data science ,NSF Jumpstart ,Phenomenon ,Spatial ecology ,Animals ,Animal Science and Zoology ,Genetic Fitness ,Set (psychology) ,business ,Randomness - Abstract
Synopsis The incredible complexity of biological processes across temporal and spatial scales hampers defining common underlying mechanisms driving the patterns of life. However, recent advances in sequencing, big data analysis, machine learning, and molecular dynamics simulation have renewed the hope and urgency of finding potential hidden rules of life. There currently exists no framework to develop such synoptic investigations. Some efforts aim to identify unifying rules of life across hierarchical levels of time, space, and biological organization, but not all phenomena occur across all the levels of these hierarchies. Instead of identifying the same parameters and rules across levels, we posit that each level of a temporal and spatial scale and each level of biological organization has unique parameters and rules that may or may not predict outcomes in neighboring levels. We define this neighborhood, or set of levels, across which a rule functions as the zone of influence. Here, we introduce the zone of influence framework and explain it using three examples: (a) randomness in biology, where we use a Poisson process to describe processes from protein dynamics to DNA mutations to gene expression, (b) island biogeography, and (c) animal coloration. The zone of influence framework may enable researchers to identify which levels are worth investigating for a particular phenomenon and reframe the narrative of searching for a unifying rule of life to the investigation of how, when, and where various rules of life operate.
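The first example (a Poisson process as a rule shared across levels) can be illustrated numerically: the same one-parameter count model applies at very different levels of organization, with only the rate changing. The rates below are arbitrary illustration values, not measured biological parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

# One rule (Poisson counts), three hypothetical levels of organization
rates = {"dna_mutations": 0.3, "expression_bursts": 5.0, "protein_steps": 50.0}
samples = {name: rng.poisson(lam, size=10_000) for name, lam in rates.items()}

# Poisson signature: the sample mean and variance both estimate the rate
stats = {name: (s.mean(), s.var()) for name, s in samples.items()}
```

The mean-equals-variance signature is one way a "rule" can be recognized at a given level; whether it holds at neighboring levels delimits that rule's zone of influence.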
- Published
- 2021
35. 16.8 Tb/s True Random Number Generator Based on Amplified Spontaneous Emission
- Author
-
Liuming Zhang, Weisheng Hu, Guangshuo Cao, Xinran Huang, and Xuelin Yang
- Subjects
Physics ,Optical amplifier ,Amplified spontaneous emission ,business.industry ,Random number generation ,Bandwidth (signal processing) ,Key distribution ,Atomic and Molecular Physics, and Optics ,Electronic, Optical and Magnetic Materials ,Arrayed waveguide grating ,law.invention ,Optics ,law ,Fiber amplifier ,Electrical and Electronic Engineering ,business ,Randomness - Abstract
We report a 16.8 Tb/s true random number generator (TRNG) based on amplified spontaneous emission (ASE) of Erbium-doped fiber amplifier (EDFA), which is sliced into 42 parallel channels by an arrayed waveguide grating (AWG). The true randomness of ASE is enhanced and guaranteed by a novel extraction approach utilizing two orthogonal polarized components of ASE. Consequently, the random number generation rate (RNGR) can be significantly increased via higher sampling rate. The experimental results show that, the single-channel RNGR of 400 Gb/s is achieved, while the overall RNGR is 16.8 Tb/s for the full ASE spectrum. The proposed TRNG is of great significance in the field of secret key distribution and confidential communication.
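The benefit of combining two independent noise sources (standing in here for the two orthogonal polarization components of the ASE) can be shown with a toy bias calculation: XOR-ing two independent biased bit streams shrinks the bias quadratically, since a stream with P(1) = 0.5 + e XOR-ed with an independent copy gives P(1) = 0.5 - 2e². This is a generic extractor sketch, not the paper's exact extraction circuit.

```python
import numpy as np

rng = np.random.default_rng(7)

p = 0.6  # each raw source is biased: P(bit = 1) = 0.6
a = (rng.random(100_000) < p).astype(int)
b = (rng.random(100_000) < p).astype(int)

extracted = a ^ b  # P(1) = 2*p*(1-p) = 0.48: bias 0.02 instead of 0.10

bias_raw = abs(a.mean() - 0.5)
bias_out = abs(extracted.mean() - 0.5)
```

A second XOR stage would shrink the residual 0.02 bias to about 0.0008, which is why combining independent physical components is an effective way to guarantee near-uniform output bits.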
- Published
- 2021
36. Health Status Sensing of Catenary Based on Combination Weighting and Normal Cloud Model
- Author
-
Nian Liu, Jian Zhao, You Guo, Ganlin Jiang, Jiangyong Liu, and Lingzhi Yi
- Subjects
Multidisciplinary ,Similarity (geometry) ,Computer science ,business.industry ,Process (computing) ,Cloud computing ,Ideal solution ,Object (computer science) ,computer.software_genre ,Weighting ,Catenary ,Data mining ,business ,computer ,Randomness - Abstract
Catenary plays an important role in electrified railways, so it is of great significance to sense its status. However, current catenary status sensing methods select weighting coefficients artificially, calculate similarity from only one aspect, and consider only the fuzziness of indicators while ignoring their randomness. Thus, a status sensing system of catenary with multiple indicators is constructed. The main process of this system is as follows. Firstly, the fuzzy analytic hierarchy process method and an improved CRITIC (criteria importance through inter-criteria correlation) method are used to obtain the subjective and objective weights, and the least squares method is used to obtain the combined weights, which reduces the influence of artificial experience. Secondly, Grey-TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) is used to calculate a weighted similarity, so as to obtain the health score of each sensing indicator. Thirdly, the normal cloud model is used to process the health scores to obtain the membership degree of the sensing objects, which considers both fuzziness and randomness. Finally, the principle of maximum membership degree is used to determine the health status of each sensing object, and the weighted average principle is used to verify the sensing results. Taking a Chinese railway catenary as an example for verification, the results show that the system can sense the status of the catenary accurately.
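The normal cloud model step can be sketched as follows: a drop is drawn with an entropy perturbed by the hyper-entropy (capturing randomness) and carries a Gaussian certainty degree (capturing fuzziness). The parameters (Ex, En, He) below are illustrative, not values from the catenary system.

```python
import numpy as np

def cloud_drops(Ex, En, He, n, rng):
    """Generate n drops of a normal cloud with expectation Ex,
    entropy En and hyper-entropy He."""
    Enp = rng.normal(En, He, size=n)               # randomness: perturbed entropy
    x = rng.normal(Ex, np.abs(Enp))                # drops scatter around Ex
    mu = np.exp(-(x - Ex) ** 2 / (2 * Enp ** 2))   # fuzziness: certainty degree
    return x, mu

rng = np.random.default_rng(1)
x, mu = cloud_drops(Ex=80.0, En=5.0, He=0.5, n=5_000, rng=rng)
```

Averaging the certainty degrees of a health score against the clouds of the different status grades yields the membership degrees from which the maximum-membership rule picks the final status.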
- Published
- 2021
37. Assessment of critical buckling load of functionally graded plates using artificial neural network modeling
- Author
-
Hieu Chi Phan, Ashutosh Sutra Dhar, Tu Minh Tran, and Huan Thanh Duong
- Subjects
Coefficient of determination ,Artificial neural network ,Complex differential equation ,business.industry ,Young's modulus ,Structural engineering ,Functionally graded material ,symbols.namesake ,Buckling ,Artificial Intelligence ,symbols ,Deformation (engineering) ,business ,Software ,Randomness ,Mathematics - Abstract
Predicting the critical buckling loads of functionally graded material (FGM) plates using an analytical method requires solving complex equations with various modes of deformation to determine the minimum loads. The approach is too complex for application in engineering practice. In this paper, a data-driven model using the artificial neural network (ANN) is proposed for the critical buckling load of FGM plates, as an alternative tool for practicing engineers. A database is first developed for randomly selected inputs using an analytical solution based on first-order shear deformation theory for simply supported FGM plates. The database is then divided into a training dataset with 80% of the data and a testing dataset with 20% of the data for developing and validating, respectively, the ANN model. The ANN model developed using six hidden layers with 32 nodes in each layer is found to match the data with a coefficient of determination of 99.95%. Using the ANN model, the stochastic characteristic of the critical buckling load is examined with respect to randomness of the input parameters. The study reveals that along with the dimensional parameters, the critical buckling load is significantly affected by the randomness of the volume fraction ratio and ratio of the modulus of elasticity of the ceramic and the metal.
- Published
- 2021
38. Multi-Label Separation-Deviation Surface Model for Detecting Spatial Defects in Topographic Surfaces
- Author
-
Myong K. Jeong, Elsayed A. Elsayed, and Mejdal A. Alqahtani
- Subjects
Surface (mathematics) ,Noise measurement ,Computer science ,business.industry ,020208 electrical & electronic engineering ,Feature extraction ,Pattern recognition ,02 engineering and technology ,Computer Science Applications ,Control and Systems Engineering ,Feature (computer vision) ,0202 electrical engineering, electronic engineering, information engineering ,Spatial ecology ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Representation (mathematics) ,Randomness ,Information Systems - Abstract
Surface topography is a critical quality characteristic of many products and manufacturing processes. Various defects commonly appear on the topography of finished products in spatial patterns during or after manufacturing. These defects are difficult to identify using traditional monitoring approaches because of the complex structure of topographic data. This article develops a novel and effective approach for monitoring spatial defects in topographic surfaces. The approach improves the representation of surface characteristics through the developed multilabel separation-deviation surface (MSS) model, which labels the important surface characteristics and smooths out the noisy ones. We develop two features for monitoring changes in the characteristics of the assigned labels: the MSS feature captures deviations within the assigned labels, and the generalized spatial randomness feature quantifies deviations between the assigned labels. These two features are integrated into a single monitoring statistic, which is successfully applied to detecting various defects in topographic surfaces and outperforms the traditional monitoring approaches.
- Published
- 2021
39. Monomial evaluation of polynomial functions protected by threshold implementations—with an illustration on AES
- Author
-
Emmanuel Prouff, Simon Landry, and Yanis Linge
- Subjects
Monomial ,Polynomial ,Exponentiation ,Computer Networks and Communications ,Computer science ,business.industry ,Applied Mathematics ,Context (language use) ,Cryptography ,Finite field ,Computational Theory and Mathematics ,Arithmetic ,business ,Randomness ,Block cipher - Abstract
In the context of side-channel countermeasures, threshold implementations (TI) were introduced in 2006 by Nikova et al. to defeat attacks that exploit hardware effects called glitches. In several respects, TI may be seen as an extension of another classical side-channel countermeasure, called masking, which is essentially based on splitting any internal state of the processing into independent parts (also called shares). To achieve side-channel security, a TI scheme operates on shared data and comes with additional properties that provide robustness to glitches. When specifying such a scheme to secure a cryptographic implementation, such as the AES block cipher, the challenging part is to minimise both the number of steps (or cycles) and the consumption of randomness. In this paper, we combine the changing-of-the-guards technique published by Daemen at CHES 2017 (which reduces the need for fresh randomness) with the work of Genelle et al. at CHES 2011 (which combines additive and multiplicative masking) to propose a new TI that does not consume fresh randomness and is efficient (in terms of cycles) for classical block ciphers. As an illustration, we develop our proposal for the AES, and more specifically its SBox implemented via a finite-field exponentiation. In this particular context, we argue that our proposal is a valuable alternative to the state-of-the-art solutions. More generally, it has the advantage of being easily applicable to the evaluation of any polynomial function, which was usually not the case for previous solutions.
- Published
- 2021
40. Performance assessment of a system for reasoning under uncertainty
- Author
-
Marion Byrne, Branko Ristic, and Christopher Gilliam
- Subjects
Ground truth ,Computer science ,business.industry ,020206 networking & telecommunications ,Context (language use) ,02 engineering and technology ,Machine learning ,computer.software_genre ,Imprecise probability ,Measure (mathematics) ,Rotation formalisms in three dimensions ,Hardware and Architecture ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Focus (optics) ,business ,computer ,Software ,Randomness ,Information Systems ,Possibility theory - Abstract
Since the early development of machines for reasoning and decision making in higher-level information fusion, there has been a need for systematic and reliable evaluation of their performance. Performance evaluation is important for comparing and assessing alternative solutions to real-world problems. In this paper we focus on one aspect of performance assessment for reasoning under uncertainty: the accuracy of the resulting belief (prediction or estimate). We propose an assessment framework based on the assumption that the system under investigation is uncertain only due to stochastic variability (randomness), which is partially known. In this context we formulate a distance measure between the "ground truth" and the output of an automated reasoning system expressed in one of the non-additive uncertainty formalisms (such as imprecise probability theory, belief function theory or possibility theory). The proposed assessment framework is demonstrated with a simple numerical example.
- Published
- 2021
41. Stumped nature hyperjerk system with fractional order and exponential nonlinearity: Analog simulation, bifurcation analysis and cryptographic applications
- Author
-
Najeeb Alam Khan, Muhammad Ali Qureshi, Tooba Hameed, and Saeed Akbar
- Subjects
business.industry ,Computer science ,Random number generation ,020208 electrical & electronic engineering ,Chaotic ,02 engineering and technology ,Encryption ,Lipschitz continuity ,Topology ,Electronic circuit simulation ,020202 computer hardware & architecture ,Fractional calculus ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,NIST ,Electrical and Electronic Engineering ,business ,Software ,Randomness - Abstract
This article presents the construction and application of a chaotic hyperjerk system with a commensurate-order fractional derivative, aimed at rapidly digitalizing emerging technologies. The system is analyzed analytically using the Lipschitz condition and numerically with predictor-corrector approaches. The originality of the constructed system is confirmed using log-error plots, which give satisfactorily small values on a finite time scale. The dynamical behavior of the complex system is characterized in multiple phase planes. Stability and bifurcation analysis with respect to the fractional derivative is also studied over various system parameters to check the visibility of the chaotic solution. For validation, the system is also designed in an analog circuit simulator with the aid of operational amplifiers and anti-parallel semiconductor diodes. A binary array generated from the hyperjerk system serves as a random number generator (RNG) and is fed into the NIST 800-22 test suite to measure its randomness. An array with high randomness is then used as a strong cipher key for cryptographic execution in both directions, encryption and decryption.
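The NIST SP 800-22 suite mentioned above begins with the frequency (monobit) test, which checks that a bit stream contains roughly equal numbers of ones and zeros. A minimal sketch of that single test (the full suite contains 15 tests; this is only an illustration of the first one):

```python
import math

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test: the proportion of ones in a
    random sequence should be close to 1/2.  Returns the p-value; the
    sequence passes when p >= 0.01."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # map 1 -> +1, 0 -> -1 and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# A perfectly balanced sequence yields the maximal p-value.
balanced = [0, 1] * 500
print(monobit_test(balanced))           # 1.0

# A heavily biased sequence fails the 0.01 significance threshold.
biased = [1] * 900 + [0] * 100
print(monobit_test(biased) < 0.01)      # True
```

Passing this test is necessary but far from sufficient; the remaining NIST tests probe runs, spectral structure, templates, and other statistics.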
- Published
- 2021
42. MVGAN: Multi-View Graph Attention Network for Social Event Detection
- Author
-
Feifei Kou, Junping Du, Zhe Xue, Dawei Wang, and Wanqiu Cui
- Subjects
Social network ,Computer science ,business.industry ,media_common.quotation_subject ,Event (relativity) ,02 engineering and technology ,Computer security ,computer.software_genre ,Theoretical Computer Science ,Artificial Intelligence ,020204 information systems ,Attention network ,Social event detection ,0202 electrical engineering, electronic engineering, information engineering ,Graph (abstract data type) ,020201 artificial intelligence & image processing ,business ,Publicity ,computer ,Randomness ,media_common - Abstract
Social networks are critical sources for event detection thanks to their publicity and wide dissemination. Unfortunately, the randomness and semantic sparsity of social network text pose significant challenges to the event detection task. In addition to text, time is another vital element for characterizing events, since events often unfold over a period of time. Therefore, in this article, we propose a novel method named Multi-View Graph Attention Network (MVGAN) for event detection in social networks. It enriches event semantics through both neighbor aggregation and multi-view fusion in a heterogeneous social event graph. Specifically, we first construct a heterogeneous graph by adding hashtags to associate isolated short texts and describe events comprehensively. Then, we learn view-specific representations of events through graph convolutional networks, from the perspectives of text semantics and time distribution, respectively. Finally, we design a hashtag-based multi-view graph attention mechanism to capture the intrinsic interaction across different views and integrate the feature representations to discover events. Extensive experiments on public benchmark datasets demonstrate that MVGAN performs favorably against many state-of-the-art social network event detection algorithms. They also show that additional meaningful signals, such as publication time and hashtags, can improve event detection in social networks.
- Published
- 2021
43. Testing the randomness of shares in color visual cryptography
- Author
-
Leszek J. Chmielewski, Arkadiusz Orłowski, and Mariusz Nieniewski
- Subjects
010302 applied physics ,Pixel ,Computer science ,Color image ,business.industry ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,01 natural sciences ,Visual cryptography ,Image (mathematics) ,010309 optics ,Artificial Intelligence ,0103 physical sciences ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Dither ,Randomness tests ,business ,Decoding methods ,Randomness - Abstract
The concept of black-and-white visual cryptography with two truly random shares, previously applied to color images, was improved by mixing the contents of the segments of each coding image and by randomly changing a specified number of black pixels into color ones. This was done so that the changes to the decoded image were as small as possible. These modifications brought the numbers of color pixels in the shares close to balance, which potentially made it possible for the shares to be truly random. True randomness was understood to mean that the data pass suitably designed randomness tests. The randomness of the shares was examined with the NIST randomness tests: some tests passed, while others failed. The goal of coding a color image in truly random shares was approached, but not yet reached. In visual cryptography, decoding with the unaided human eye is of primary importance; beyond this, however, simple numerical processing of the decoded image can greatly improve the quality of the reconstructed image, bringing it close to that of the dithered original image.
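As background, the classic (2, 2) black-and-white visual cryptography scheme that such color constructions build on can be sketched as follows (a hypothetical minimal illustration; the paper's color scheme with segment mixing and pixel rebalancing is considerably more involved):

```python
import random

# Classic (2, 2) scheme: each secret pixel expands into a pair of subpixels.
# A white pixel gets identical subpixel pairs in both shares; a black pixel
# gets complementary pairs, so stacking (OR) turns it fully black.
PATTERNS = [(0, 1), (1, 0)]

def make_shares(secret, rng=None):
    rng = rng or random.Random(42)
    share1, share2 = [], []
    for pixel in secret:                 # 0 = white, 1 = black
        p = rng.choice(PATTERNS)         # random choice makes each share look random
        share1.append(p)
        share2.append(p if pixel == 0 else (1 - p[0], 1 - p[1]))
    return share1, share2

def stack(share1, share2):
    """Decode by stacking transparencies: subpixel-wise OR."""
    return [(a[0] | b[0], a[1] | b[1]) for a, b in zip(share1, share2)]

secret = [1, 0, 1, 1, 0]
s1, s2 = make_shares(secret)
# Black pixels decode to (1, 1); white pixels keep one white subpixel.
print([1 if d == (1, 1) else 0 for d in stack(s1, s2)])   # [1, 0, 1, 1, 0]
```

Each share in isolation is a uniformly random pattern and reveals nothing about the secret; testing whether the shares of the color extension are truly random in this sense is exactly what the NIST tests above probe.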
- Published
- 2021
44. Passive Fractal Chipless RFID Tags Based on Cellular Automata for Security Applications
- Author
-
Mohammad N. Zaqumi, Mohammed Abdullah Hussaini, Jawad Yousaf, Mohamed Zarouan, and Hatem Rmili
- Subjects
business.industry ,Computer science ,Astronomy and Astrophysics ,Signature (logic) ,Cellular automaton ,law.invention ,Chipless RFID ,Fractal ,Spatial capacity ,law ,Electrical and Electronic Engineering ,Radar ,business ,Computer hardware ,Randomness ,Coding (social sciences) - Abstract
In this paper, we propose a novel design of low-profile fractal chipless tags with unique electromagnetic responses. The tags are designed using a cellular automata (Game of Life) technique to ensure the randomness of the generated fractal patterns. The tags are simulated in CST Microwave Studio over the 2 to 10 GHz frequency range and realized on an FR4 substrate, and their radar cross-section (RCS) characteristics are analyzed for nine different tags under three polarizations (horizontal, vertical, and oblique). Each tag shows a unique signature resonance response. The realized tags achieve very good coding capacity (16-20 bits), coding spatial capacity (1-1.25 bits/cm2), coding spectral capacity (2.15-2.9 bits/GHz), and coding density (0.15-0.18 bits/(GHz x cm2)). The presented tags could be used for the development of secure RFID systems.
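The Game of Life rule used to randomize the tag patterns can be sketched as a single update step (a generic illustration of the automaton itself; the paper's mapping from automaton state to fractal tag geometry is not reproduced):

```python
def life_step(grid):
    """One step of Conway's Game of Life on a 2-D list of 0/1 cells,
    with dead cells beyond the boundary."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the live cells among the up-to-8 in-bounds neighbors.
            alive = sum(grid[r + dr][c + dc]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0)
                        and 0 <= r + dr < rows and 0 <= c + dc < cols)
            # Birth on exactly 3 neighbors; survival on 2 or 3.
            nxt[r][c] = 1 if alive == 3 or (grid[r][c] and alive == 2) else 0
    return nxt

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(life_step(blinker))   # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

Iterating such a rule from a random seed produces intricate, hard-to-predict patterns, which is what makes each generated tag's electromagnetic signature unique.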
- Published
- 2021
45. Optimal VM-to-user mapping in cloud environment based on sustainable strategy space theory
- Author
-
Babak Majidi, Ali Movaghar, Pouria Khanzadi, and Sepideh Adabi
- Subjects
Equilibrium point ,Computer Science::Computer Science and Game Theory ,Mathematical optimization ,Computer Networks and Communications ,Computer science ,business.industry ,Stability (learning theory) ,Cloud computing ,Transformation (function) ,Resource allocation ,Point (geometry) ,business ,Game theory ,Software ,Randomness - Abstract
According to previous studies on economics-oriented cloud resource allocation using game theory, finding an equilibrium point for price is difficult in many cases due to the stochastic nature of the cloud market. To tackle this problem, we need a space for describing the behavior of the players' strategies that is independent of the equilibrium point. Therefore, a new algorithm for VM-to-user mapping in the cloud market, called VMUMA, is proposed. To design VMUMA, the cloud market is modeled with game theory. VMUMA is based on Sustainable Strategy Space Theory (SSST), in which the stability and instability of each player's strategies are defined and all interactions in the game are modeled in a two-dimensional space. It is then proved that the stability and instability of a strategy are quantifiable if the minimum information and the uncertainty of the game event are determinable. Furthermore, we find a point in the two-dimensional space at which the stability and instability of the strategies are equal; based on this result, it is proved that a strategy is sustainable if this point has minimal variation. Finally, it is proved that there is a transformation for a square matrix under which the matrix is mapped to the summation and subtraction of its elements, and vice versa. Based on this transformation, it is shown that the area of this space is related to the stability and instability of the players' strategies. Numerical simulation results show that VMUMA can allocate market resources according to the randomness of the stability and instability of the strategies.
- Published
- 2021
46. Hidden multistability in four fractional-order memristive, meminductive and memcapacitive chaotic systems with bursting and boosting phenomena
- Author
-
Manashita Borah and Binoy Krishna Roy
- Subjects
Computer simulation ,business.industry ,Computer science ,Chaotic ,General Physics and Astronomy ,Topology ,01 natural sciences ,010305 fluids & plasmas ,Bursting ,Computer Science::Emerging Technologies ,Secure communication ,0103 physical sciences ,Attractor ,General Materials Science ,Physical and Theoretical Chemistry ,business ,010301 acoustics ,Multistability ,Randomness ,Electronic circuit - Abstract
The operation of memristive, meminductive and memcapacitive circuit components depends on their device history, i.e. they are memory dependent. This paper proposes new fractional-order (FO) models of four such circuits, since the FO derivative is computed from the past history of the state and thus captures the memory-dependent dynamics of these devices better than the integer-order derivative. Interestingly, these fractional-order memristive, meminductive and memcapacitive systems (FOMMMSs) display a myriad of dynamics: coexisting and hidden attractors, chaotic and hyperchaotic attractors, periodic orbits and stable foci, bursting oscillations transitioning between chaotic and periodic states, the offset boosting phenomenon, and a varied nature of infinite equilibria. The proposed fractional-order systems have a number of potential applications, such as oscillator circuits and secure communication, owing to the increased randomness of hidden multistability. The theoretical analyses of the existence of multistability and hidden attractors in the proposed FOMMMSs agree with the numerical simulation and circuit implementation results.
- Published
- 2021
47. Remodeling randomness prioritization to boost-up security of RGB image encryption
- Author
-
Adnan Gutub and Budoor Obid Al-Roithy
- Subjects
Pseudorandom number generator ,Focus (computing) ,Computer Networks and Communications ,business.industry ,Computer science ,Reliability (computer networking) ,020207 software engineering ,Cryptography ,02 engineering and technology ,Encryption ,computer.software_genre ,Transmission (telecommunications) ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Media Technology ,RGB color model ,Data mining ,business ,computer ,Software ,Randomness - Abstract
Securing information has become essential for exchanging multimedia safely, and the exchanged data need to be transformed in a well-managed, secure, and reliable manner. In this paper, we focus on securing RGB images cryptographically during transmission among users by selecting an appropriate pseudorandom number generator (PRNG). We apply several PRNG techniques within two consecutive cryptographic processes, substitution and transposition, to achieve secure image transformation. Our PRNG selection is based on tests that encrypt RGB images, and the results are compared with related approaches currently in use. The experiments assess suitability and reliability through standard security-measure parameters, justifying the chosen PRNG and making the approach effective and worth remarking on.
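The two consecutive crypto-processes described above, substitution followed by transposition, can be sketched as follows (a hypothetical illustration using Python's `random` module as a stand-in PRNG; it is not cryptographically secure and is not one of the PRNGs evaluated in the paper):

```python
import random

def encrypt(pixels, key):
    """Substitution (XOR with a PRNG keystream) followed by transposition
    (a PRNG-driven permutation) of a flat list of pixel byte values."""
    rng = random.Random(key)
    substituted = [p ^ rng.randrange(256) for p in pixels]   # substitution
    order = list(range(len(pixels)))
    rng.shuffle(order)                                       # transposition
    return [substituted[i] for i in order], order

def decrypt(cipher, order, key):
    """Reverse both stages by replaying the same PRNG from the same key."""
    rng = random.Random(key)
    keystream = [rng.randrange(256) for _ in cipher]
    substituted = [0] * len(cipher)
    for pos, i in enumerate(order):          # undo the permutation
        substituted[i] = cipher[pos]
    return [c ^ k for c, k in zip(substituted, keystream)]

pixels = [12, 200, 7, 255, 0, 99]            # e.g. flattened RGB values
cipher, order = encrypt(pixels, key=2021)
print(decrypt(cipher, order, key=2021) == pixels)   # True
```

Because both stages are driven by the same seeded PRNG, the receiver only needs the key to regenerate the keystream and the permutation, which is why the statistical quality of the chosen PRNG dominates the security of such a scheme.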
- Published
- 2021
48. A new pseudorandom bits generator based on a 2D-chaotic system and diffusion property
- Author
-
Omar Akif, Rasha Ali, Rasha S. Ali, Rasha Subhi, and Prof.Dr. Alaa Farhan
- Subjects
Pseudorandom number generator ,Sequence ,Control and Optimization ,Computer Networks and Communications ,Computer science ,business.industry ,Autocorrelation ,Chaotic ,Cryptography ,Permutation ,Hardware and Architecture ,Control and Systems Engineering ,Computer Science (miscellaneous) ,Electrical and Electronic Engineering ,business ,Instrumentation ,Algorithm ,Stream cipher ,Randomness ,Information Systems - Abstract
A remarkable connection between chaotic systems and cryptography has been established through sensitivity to initial states, unpredictability, and complex behavior. One line of development applies the stages of a chaotic stream cipher to a discrete chaotic dynamic system to generate pseudorandom bits; some of these generators are based on 1D chaotic maps and others on 2D ones. In the current study, a pseudorandom bit generator (PRBG) based on a new 2D chaotic logistic map is proposed that runs side by side and commences from random, independent initial states. The proposed model consists of three components: a mouse input device, the proposed 2D chaotic system, and an initial permutation (IP) table. The statistical quality of the generated bit sequence is investigated with five standard evaluations as well as the autocorrelation function (ACF) and the NIST tests. The sequence passes all five standard randomness tests, with a value of 0.160 in the frequency test; the run test gives pass values t0 = 4.769 and t1 = 2.929; the poker and serial tests pass with values of 3.520 and 4.720, respectively; and the autocorrelation test passes at all shifts from 1 to 10.
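A generator of this kind can be sketched with a coupled 2D logistic-style map (a hypothetical map form chosen purely for illustration; the paper's actual 2D chaotic logistic map, mouse-input seeding, and IP table are not reproduced):

```python
def chaotic_bits(x, y, n, r=3.99, coupling=0.1):
    """Hypothetical coupled 2-D logistic-style map emitting one bit per
    iteration by comparing the two state variables."""
    bits = []
    for _ in range(n):
        # Two logistic maps, each nudged by the other's state (mod 1).
        x, y = (r * x * (1 - x) + coupling * y) % 1.0, \
               (r * y * (1 - y) + coupling * x) % 1.0
        bits.append(1 if x > y else 0)
    return bits

print(chaotic_bits(0.123, 0.456, 16))
# Determinism: the same initial state always reproduces the same stream.
print(chaotic_bits(0.123, 0.456, 200) == chaotic_bits(0.123, 0.456, 200))  # True
```

The cryptographic appeal is exactly this pairing: the stream is fully reproducible from the initial state (the key), yet sensitive dependence on that state makes it unpredictable without it; the statistical tests listed above then check that the output is also indistinguishable from random noise.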
- Published
- 2021
49. Helix: A Fair Blockchain Consensus Protocol Resistant to Ordering Manipulation
- Author
-
Ido Grayevsky, Maya Leshkowitz, Ronen Tamari, Gad Cohen, Ori Rottenstreich, Avi Asayag, and David Yakira
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Quality of service ,Node (networking) ,Mesh networking ,020206 networking & telecommunications ,02 engineering and technology ,Encryption ,0202 electrical engineering, electronic engineering, information engineering ,Key (cryptography) ,Electrical and Electronic Engineering ,business ,Database transaction ,Randomness ,Block (data storage) ,Computer network - Abstract
We present Helix, a blockchain-based consensus protocol for fair ordering of transactions among nodes in a distributed network. Helix advances in rounds; in each round, an elected primary node proposes a potential block (a successive set of transactions). To be included in the blockchain, a block must pass validation by an elected committee of nodes. Nodes have two primary preferences: to be elected as committee members, and, because each transaction is associated with one of the network nodes, to prioritize their own transactions over those of others. Our definition of fairness incorporates three key elements. First, the process of electing nodes to committees is random and unpredictable. Second, a correlated sampling scheme guarantees random selection and ordering of pending transactions in blocks. Third, transactions are encrypted to hide their associations with nodes and prevent censorship. Through the corresponding threshold decryption process we obtain an unpredictable and non-manipulable randomness beacon, which serves both the election process and the correlated sampling scheme. We define a quantitative measure of fairness in the protocol, prove theoretically that fairness manipulation in Helix is significantly limited, and present experiments evaluating fairness in practice.
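The correlated sampling idea, every committee member deriving the same transaction order from the shared randomness beacon, can be sketched as follows (a hypothetical illustration; the function name, beacon format, and hashing choice are assumptions, not Helix's actual construction):

```python
import hashlib
import random

def order_transactions(pending, beacon):
    """Derive a pseudorandom ordering of pending transactions from a shared,
    unpredictable randomness beacon.  Every honest node computes the same
    order, so no single node can manipulate transaction placement."""
    seed = int.from_bytes(hashlib.sha256(beacon).digest(), "big")
    rng = random.Random(seed)
    txs = sorted(pending)        # canonicalize first, then shuffle deterministically
    rng.shuffle(txs)
    return txs

pending = ["tx-a", "tx-b", "tx-c", "tx-d"]
beacon = b"round-42-threshold-decryption-output"
# Two nodes that received the pending set in different orders still agree:
print(order_transactions(pending, beacon) ==
      order_transactions(list(reversed(pending)), beacon))   # True
```

Because the beacon comes out of a threshold decryption, no participant can predict or bias it before committing to their transactions, which is what blocks ordering manipulation.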
- Published
- 2021
50. Rotation forest based on multimodal genetic algorithm
- Author
-
Zhe Xu, Yue-hui Ji, and Wei-chen Ni
- Subjects
Ensemble forecasting ,Computer science ,business.industry ,Feature vector ,Metals and Alloys ,General Engineering ,Pattern recognition ,Ensemble learning ,Random forest ,Tree (data structure) ,Genetic algorithm ,Feature (machine learning) ,Artificial intelligence ,business ,Randomness - Abstract
In machine learning, randomness is a crucial factor in the success of ensemble learning, and it can be injected into tree-based ensembles by rotating the feature space. However, the common practice is to rotate the feature space randomly, so a large number of trees is required to ensure the performance of the ensemble model. This random rotation method is theoretically feasible, but it requires massive computing resources, potentially restricting its applications. A multimodal genetic algorithm based rotation forest (MGARF) algorithm is proposed in this paper to solve this problem. It is a tree-based ensemble learning algorithm for classification that takes advantage of the characteristics of trees to inject randomness through feature rotation, while selecting a subset of more diverse and accurate base learners using a multimodal optimization method. The classification accuracy of the proposed MGARF algorithm was evaluated against the original random forest and random rotation ensemble methods on 23 UCI classification datasets. Experimental results show that MGARF outperforms the other methods while using far fewer base learners.
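The feature-rotation step that injects randomness into rotation-forest-style ensembles can be sketched in two dimensions (a minimal illustration; actual rotation forests derive rotations from PCA on random feature subsets, which is not reproduced here):

```python
import math
import random

def random_rotation_2d(points, rng=None):
    """Rotate a 2-D feature space by a random angle: the randomness-injection
    step that rotation-forest-style ensembles rely on.  Each base tree then
    trains on a differently rotated copy of the data."""
    rng = rng or random.Random(7)
    theta = rng.uniform(0, 2 * math.pi)
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

points = [(1.0, 0.0), (0.0, 1.0), (3.0, 4.0)]
rotated = random_rotation_2d(points)
# A rotation preserves each point's distance from the origin.
for (x, y), (rx, ry) in zip(points, rotated):
    print(abs(math.hypot(x, y) - math.hypot(rx, ry)) < 1e-9)   # True
```

Because axis-aligned tree splits behave very differently in rotated coordinates, each rotation yields a genuinely different base learner; MGARF's contribution is then to search this pool for a small, diverse, accurate subset rather than averaging over many arbitrary rotations.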
- Published
- 2021