663 results
Search Results
2. Feature Extraction of Ship-Radiated Noise Based on Intrinsic Time-Scale Decomposition and a Statistical Complexity Measure
- Author
-
Junxiong Wang and Zhe Chen
- Subjects
Computer science, ambient noise level, feature extraction, sonar, intrinsic time-scale decomposition, complexity-spectrum entropy plane, entropy (information theory), statistical complexity measure, pattern recognition, artificial intelligence - Abstract
Extracting effective features from ship-radiated noise is an important way to improve the detection and recognition performance of passive sonar. Complexity features of ship-radiated noise have attracted increasing amounts of attention. However, the traditional definition of complexity based on entropy (information stored in the system) is not accurate. To this end, a new statistical complexity measure is proposed in this paper based on spectrum entropy and disequilibrium. Since the spectrum features are unique to the class of the ship, our method can distinguish different ships according to their location in the two-dimensional plane composed of complexity and spectrum entropy (CSEP). To weaken the influence of ocean ambient noise, the intrinsic time-scale decomposition (ITD) is applied to preprocess the data in this study. The effectiveness of the proposed method is validated through a classification experiment on four types of marine vessels. The recognition rate of the ITD-CSEP methodology reached 94%, which is much higher than that of traditional feature extraction methods. Moreover, ITD-CSEP is fast and parameter-free. Hence, the method can be applied to real-time processing in practical applications.
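The complexity–spectrum-entropy plane described above can be sketched numerically; the formulas below follow a generic statistical-complexity recipe (normalised spectral entropy times a Jensen–Shannon disequilibrium), not necessarily the paper's exact definitions, and the signals are synthetic stand-ins for ship-radiated noise.

```python
import numpy as np

def spectrum_entropy_complexity(x):
    """Return (normalised spectrum entropy, statistical complexity) of a
    1-D signal: complexity = entropy x Jensen-Shannon disequilibrium."""
    p = np.abs(np.fft.rfft(x)) ** 2        # power spectrum as a distribution
    p = p / p.sum()
    n = len(p)
    h = -np.sum(p * np.log(p + 1e-12)) / np.log(n)   # entropy, scaled to [0, 1]
    u = np.full(n, 1.0 / n)                          # uniform reference spectrum
    m = 0.5 * (p + u)
    js = (-np.sum(m * np.log(m + 1e-12))             # Jensen-Shannon disequilibrium
          + 0.5 * np.sum(p * np.log(p + 1e-12))
          + 0.5 * np.sum(u * np.log(u)))
    return h, h * js

rng = np.random.default_rng(0)
h_noise, c_noise = spectrum_entropy_complexity(rng.standard_normal(1024))
h_tone, c_tone = spectrum_entropy_complexity(
    np.sin(2 * np.pi * 0.1 * np.arange(1024)))
```

A broadband noise-like signal lands at high spectral entropy and low disequilibrium, while a narrowband tone lands at low entropy; it is this separation in the plane that allows classes to be distinguished.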
- Published
- 2019
3. A New Transformation Technique for Reducing Information Entropy: A Case Study on Greyscale Raster Images
- Author
-
Borut Žalik, Damjan Strnad, David Podgorelec, Ivana Kolingerová, Luka Lukač, Niko Lukač, Simon Kolmanič, Krista Rizman Žalik, and Štefan Kohek
- Subjects
Computer science, algorithm, string transformation, information entropy, Hilbert space-filling curve - Abstract
This paper proposes a new string transformation technique called Move with Interleaving (MwI). Four possible ways of rearranging 2D raster images into 1D sequences of values are applied, including scan-line, left-right, strip-based, and Hilbert arrangements. Experiments on 32 benchmark greyscale raster images of various resolutions demonstrated that the proposed transformation reduces information entropy to a similar extent as the combination of the Burrows–Wheeler transform followed by the Move-To-Front or the Inversion Frequencies. The proposed transformation MwI yields the best result among all the considered transformations when the Hilbert arrangement is applied.
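The Move with Interleaving transform itself is specific to the paper, but the baseline it is compared against, Move-To-Front, and the entropy it aims to reduce are easy to reproduce. This sketch measures order-0 information entropy before and after MTF on a toy scan line with long runs:

```python
from collections import Counter
from math import log2

def move_to_front(data):
    """Move-To-Front transform: recently seen symbols get small indices,
    so locally repetitive data maps to a low-entropy stream of small values."""
    alphabet = list(range(256))
    out = []
    for b in data:
        i = alphabet.index(b)
        out.append(i)
        alphabet.insert(0, alphabet.pop(i))
    return out

def order0_entropy(seq):
    """Shannon (order-0) information entropy in bits per symbol."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * log2(c / n) for c in counts.values())

row = [5] * 8 + [7] * 8 + [9] * 8 + [5] * 8   # toy greyscale scan line with runs
before = order0_entropy(row)
after = order0_entropy(move_to_front(row))
```

On this run-heavy toy row the entropy drops from 1.5 bits/symbol to well under 1, which is the effect the arrangements (scan-line, strip-based, Hilbert) try to maximise by making the 1-D sequence as locally repetitive as possible.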
- Published
- 2023
- Full Text
- View/download PDF
4. Right-Censored Time Series Modeling by Modified Semi-Parametric A-Spline Estimator
- Author
-
Syed Ejaz Ahmed, Ersin Yilmaz, and Dursun Aydin
- Subjects
Time series, synthetic data transformation, Monte Carlo method, adaptive splines, semiparametric regression, B-splines, right-censored data, estimator, algorithm - Abstract
This paper focuses on the adaptive spline (A-spline) fitting of the semiparametric regression model to time series data with right-censored observations. Typically, there are two main problems that need to be solved in such a case: dealing with censored data and obtaining a proper A-spline estimator for the components of the semiparametric model. The first problem is traditionally solved by the synthetic data approach based on the Kaplan–Meier estimator. In practice, although the synthetic data technique is one of the most widely used solutions for right-censored observations, the transformed data’s structure is distorted, especially for heavily censored datasets, due to the nature of the approach. In this paper, we introduce a modified semiparametric estimator based on the A-spline approach to overcome data irregularity with minimum information loss and to resolve the second problem described above. In addition, the semiparametric B-spline estimator was used as a benchmark method to gauge the success of the A-spline estimator. To this end, a detailed Monte Carlo simulation study and a real-data example were carried out to evaluate the performance of the proposed estimator and to make a practical comparison.
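The synthetic data approach mentioned above can be sketched with the classical Koul–Susarla–Van Ryzin transform: a Kaplan–Meier estimate of the censoring distribution inflates each observed response by 1/G(z) and zeroes out censored ones. The toy data and the simple tie-free estimator below are illustrative, not the paper's implementation.

```python
def km_censoring_survival(z, delta):
    """Kaplan-Meier estimate of the censoring survival function G,
    evaluated just before each observation (delta=1: response observed,
    delta=0: right-censored). Assumes no ties, for brevity."""
    n = len(z)
    order = sorted(range(n), key=lambda i: z[i])
    surv, g, at_risk = {}, 1.0, n
    for idx in order:
        surv[idx] = g
        if delta[idx] == 0:            # a censoring "event"
            g *= (at_risk - 1) / at_risk
        at_risk -= 1
    return [surv[i] for i in range(n)]

def synthetic_response(z, delta):
    """Koul-Susarla-Van Ryzin transform: observed responses are inflated
    by 1/G(z) so the synthetic data keep the regression mean; censored
    responses become zero."""
    g = km_censoring_survival(z, delta)
    return [d * zi / gi for zi, d, gi in zip(z, delta, g)]

z = [2.0, 3.5, 1.0, 4.0, 2.5]
delta = [1, 0, 1, 1, 0]                # two right-censored observations
y_syn = synthetic_response(z, delta)
```

The distortion the abstract refers to is visible even here: heavily censored points late in the sample get large inflation factors (the last observed point maps to 12.0), which is exactly the irregularity the modified A-spline estimator is designed to smooth over.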
- Published
- 2021
5. An Improved Approach towards Multi-Agent Pursuit–Evasion Game Decision-Making Using Deep Reinforcement Learning
- Author
-
Yiwei Zhai, Dingwei Wu, Kaifang Wan, Zijian Hu, Xiaoguang Gao, and Bo Li
- Subjects
State variable, deep reinforcement learning, MADDPG, multi-agent, pursuit–evasion, decision-making, adversarial learning, reinforcement learning, artificial intelligence - Abstract
A pursuit–evasion game is a classical maneuver confrontation problem in the multi-agent systems (MASs) domain. An online decision technique based on deep reinforcement learning (DRL) was developed in this paper to address the problem of environment sensing and decision-making in pursuit–evasion games. A control-oriented framework developed from the DRL-based multi-agent deep deterministic policy gradient (MADDPG) algorithm was built to implement multi-agent cooperative decision-making and to avoid the tedious state-variable specification required by the traditionally complicated modeling process. To address the effects of errors between a model and a real scenario, this paper introduces adversarial disturbances and proposes a novel adversarial-attack trick and an adversarial learning MADDPG (A2-MADDPG) algorithm. By applying an adversarial attack to the agents themselves, the uncertainties of the real world are modeled, thereby enabling robust training. During the training process, adversarial learning was incorporated into our algorithm to preprocess the actions of multiple agents, which enabled them to respond properly to uncertain dynamic changes in MASs. Experimental results verified that the proposed approach provides superior performance and effectiveness for pursuers and evaders, and both can learn the corresponding confrontational strategy during training.
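The adversarial-attack trick is described only at a high level, so the following is a hedged zeroth-order stand-in: among random perturbations of the actor's action inside an eps-ball, train against the one the critic scores worst. The toy actor, critic, and sampling scheme are all assumptions for illustration, not the A2-MADDPG formulation.

```python
import random

def adversarial_action(actor, critic, state, eps=0.1, n_candidates=32):
    """Among random perturbations inside an eps-ball around the actor's
    action, return the one the critic values least -- a zeroth-order
    stand-in for a gradient-based adversarial attack on the action."""
    a = actor(state)
    candidates = ([ai + random.uniform(-eps, eps) for ai in a]
                  for _ in range(n_candidates))
    return min(candidates, key=lambda cand: critic(state, cand))

random.seed(42)
actor = lambda s: [0.5 * s[0], -0.2 * s[1]]                    # toy policy
critic = lambda s, a: -(a[0] - 0.4) ** 2 - (a[1] + 0.3) ** 2   # toy Q-function
state = [1.0, 0.5]
a_nom = actor(state)
a_adv = adversarial_action(actor, critic, state)
```

Training the critic and actor against `a_adv` rather than `a_nom` is what makes the learned policy tolerant of bounded disturbances between the model and the real scenario.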
- Published
- 2021
6. Recent Status and Prospects on Thermochemical Heat Storage Processes and Applications
- Author
-
Kejian Wang, Tadagbe Roger Sylvanus Gbenou, and Armand Fopah-Lele
- Subjects
Process (engineering), control (management), review, thermal energy storage, reactor, thermal simulation, heat storage application, sizing, systems engineering, thermochemical heat storage - Abstract
Recent contributions to thermochemical heat storage (TCHS) technology have been reviewed, revealing four main branches whose mastery could significantly advance the field: control of the processes that store or release heat, a thorough understanding and design of the materials used for each storage process, proper sizing of the reactor, and mastery of the connected system as a whole so that an efficient system can be designed. Together these constitute a very complex area of investigation, and most works focus on one branch to deepen their research. For this purpose, significant contributions have been and continue to be made. However, the technology is still not mature, and, up to now, no definitive, efficient, autonomous, practical, and commercial TCHS device is available. This paper highlights several issues that impede the maturity of the technology: the limited number of research works dedicated to the topic, simulation results that are too idealized to implement in real prototypes, the incomplete analysis of the proposed works (simulation without experiment, or experiments without a prior simulation study), and the persistent problem of heat and mass transfer limitation. This paper provides insights and recommendations to better analyze and solve the problems that still challenge the technology.
- Published
- 2021
7. Seeded Ising Model and Distributed Biometric Template Storage and Matching
- Author
-
Hyeong In Choi, Dae-hoon Kim, Nam-Sook Wee, Sung Jin Lee, Song-Hwa Kwon, and Hwan Pyo Moon
- Subjects
Matching (graph theory), biometrics, biometric template, distributed biometrics, match rate, Ising model, partial template, reconstruction method, identity management system, pattern recognition - Abstract
It is known that a variant of the Ising model, called the Seeded Ising Model, can be used to recover the information content of a biometric template from a fraction of the information therein. The method consists in reconstructing the whole template, called the intruder template in this paper, using only a small portion of the given template, a partial template. This reconstruction method may pose a security threat to the integrity of a biometric identity management system. In this paper, based on the Seeded Ising Model, we present a systematic analysis of the possible security breach and its probability of accepting the intruder templates as genuine. Detailed statistical experiments on the intruder match rate are also conducted under various scenarios. In particular, we study (1) how best to divide a template into several small pieces called partial templates, each of which is to be stored in a separate silo, and (2) how to perform matching by comparing partial templates within the locked-up silos, letting only the results of these intra-silo comparisons be sent to the central tallying server for final scoring, without requiring the whole template in one location at any time.
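Points (1) and (2) of the study design can be sketched as a tiny protocol: the template is split into partial templates, each silo compares only its own piece, and only per-silo scores reach the tallying server. The round-robin split, 64-bit toy template, and threshold are illustrative choices, not the paper's parameters.

```python
def split_template(bits, n_silos):
    """Split a binary template into n_silos disjoint partial templates
    (round-robin, so each silo sees a spread of bit positions)."""
    return [bits[i::n_silos] for i in range(n_silos)]

def silo_match_rate(part_a, part_b):
    """Fraction of agreeing bits inside one silo; only this score leaves
    the silo, never the partial template itself."""
    return sum(a == b for a, b in zip(part_a, part_b)) / len(part_a)

def tally(probe, enrolled, n_silos=4, threshold=0.9):
    """Central server averages the per-silo scores and decides, without
    ever holding the whole template in one place."""
    scores = [silo_match_rate(p, e)
              for p, e in zip(split_template(probe, n_silos),
                              split_template(enrolled, n_silos))]
    return sum(scores) / n_silos >= threshold

enrolled = [1, 0, 1, 1, 0, 0, 1, 0] * 8       # 64-bit toy template
probe_ok = list(enrolled); probe_ok[3] ^= 1   # genuine probe, one flipped bit
accepted = tally(probe_ok, enrolled)
```

A near-identical probe still passes, while an unrelated one fails; the intruder-template question is then how far a reconstruction from one silo's piece can push its score toward the threshold.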
- Published
- 2021
8. Subjective and Objective Quality Assessments of Display Products
- Author
-
Huiqing Zhang, Nan Guo, Donghao Li, and Yibing Yu
- Subjects
Image quality, contrast (vision), computer vision, no-reference, reliability (statistics), display product, subjective and objective quality assessment, quality score, artificial intelligence - Abstract
In recent years, people’s daily lives have become inseparable from a variety of electronic devices, especially mobile phones, which have undoubtedly become a necessity. In this paper, we look for a reliable way to assess the visual quality of display products so that we can improve the user’s experience with them. This paper offers two major contributions. The first is the establishment of a new subjective assessment database (DPQAD) of display products’ screen images. Specifically, we invited 57 inexperienced observers to rate 150 screen images showing the display product. At the same time, in order to improve the reliability of the screen display quality scores, we combined the single-stimulus method with the stimulus-comparison method to evaluate the newly created database effectively. The second is the development of a new no-reference image quality assessment (IQA) metric. For a given image of the display product, our method first extracts 27 features by analyzing the contrast, sharpness, brightness, etc., and then uses a regression module to obtain the visual quality score. Comprehensive experiments show that our method can evaluate natural scene images and screen content images at the same time. Moreover, compared with ten state-of-the-art IQA methods, our method shows clear superiority on DPQAD.
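A few of the 27 hand-crafted statistics can be sketched directly; the brightness, contrast, and gradient-sharpness formulas below are common generic choices that only stand in for the paper's exact feature set (a regression module would then map such a feature vector to a quality score).

```python
import numpy as np

def display_quality_features(img):
    """Three illustrative no-reference statistics for a greyscale patch:
    mean brightness, global contrast (std), and a gradient-magnitude
    sharpness proxy."""
    img = img.astype(float)
    brightness = img.mean()
    contrast = img.std()
    gy, gx = np.gradient(img)
    sharpness = np.hypot(gx, gy).mean()
    return brightness, contrast, sharpness

flat = np.full((32, 32), 128.0)              # uniform grey patch
edges = np.tile([0.0, 255.0], (32, 16))      # alternating black/white columns
b0, c0, s0 = display_quality_features(flat)
b1, c1, s1 = display_quality_features(edges)
```

The uniform patch scores zero contrast and sharpness while the edge pattern scores high on both, which is the kind of discriminative signal the regression module learns to weight.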
- Published
- 2021
9. A Fuzzy Multiple Criteria Decision Making Approach with a Complete User Friendly Computer Implementation
- Author
-
Ludmila Dymova, Krzysztof Kaczmarek, Joanna Kulawik, and Pavel Sevastjanov
- Subjects
Generalization, fuzzy logic, MCDM, user-friendly, hierarchical, implementation, management science, multiple-criteria decision analysis - Abstract
The paper generalizes almost forty years of experience in the field of setting and solving multiple criteria decision-making (MCDM) problems in various branches of human activity under the different types of uncertainty that inevitably accompany such problems. Guided purely by pragmatic intentions, the authors avoid detailed descriptions of the known decision-making methods, focusing instead on the mathematical tools and methodologies most frequently used in decision-making practice. The paper may therefore be read as a special kind of illustrative review of the mathematical tools that are focused on applications and most used in the solution of MCDM problems. As an illustrative example, a complete user-friendly computer implementation of these tools and methodology is presented, applied to the simple “buying a cat” problem, which nevertheless possesses all the attributes of a hierarchical fuzzy MCDM task.
- Published
- 2021
10. Computer Vision Based Automatic Recognition of Pointer Instruments: Data Set Optimization and Reading
- Author
-
Peng Huang, Linhai Wu, Lu Wang, Zhiliang Kang, Peng Wang, and Lijia Xu
- Subjects
Computer science, K-fold cross-validation, image processing, Hough transform, pointer instrumentation, computer vision, object detection, Faster-RCNN, automatic meter reading, robot - Abstract
With the promotion of intelligent substations, more and more robots have been used in industrial sites. However, most meter-reading methods suffer interference from the complex background environment, which makes it difficult to extract the meter area and the pointer centerline and thus to meet the actual needs of the substation. To solve the current problems of industrial pointer-meter reading, this paper studies an automatic reading method that combines Faster Region-based Convolutional Network (Faster-RCNN) object detection with traditional computer vision. Firstly, Faster-RCNN is used to detect the target instrument panel region. At the same time, a Poisson fusion method is proposed to expand the data set, and a K-fold validation algorithm is used to optimize the quality of the data set, which remedies its small size and low quality and improves the accuracy of target detection. The image is then preprocessed with standard image processing methods. Finally, the position of the pointer centerline is detected by the Hough transform, from which the reading is obtained. The evaluation of the algorithm’s performance shows that the method proposed in this paper is suitable for automatic reading of pointer meters in the substation environment and provides a feasible approach to the detection and reading of pointer meters.
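Once the Hough transform yields the pointer centreline, the reading step is plain geometry: convert the detected segment to an angle and interpolate linearly over the scale arc. The gauge limits below are hypothetical.

```python
import math

def centerline_angle(x0, y0, x1, y1):
    """Angle (degrees) of the detected line segment; the image y axis
    points down, so it is flipped to conventional orientation."""
    return math.degrees(math.atan2(y0 - y1, x1 - x0))

def pointer_reading(angle_deg, angle_min, angle_max, value_min, value_max):
    """Map the pointer angle (e.g. from a Hough transform) to a dial
    value by linear interpolation over the scale arc."""
    frac = (angle_deg - angle_min) / (angle_max - angle_min)
    return value_min + frac * (value_max - value_min)

# Hypothetical gauge: scale arc from -45 deg (0 units) to 225 deg (10 units).
angle = centerline_angle(100, 100, 150, 50)   # segment at 45 degrees
reading = pointer_reading(angle, -45.0, 225.0, 0.0, 10.0)
```

For a real meter the arc endpoints would be calibrated once per meter model, after the panel region has been localised by the detector.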
- Published
- 2021
11. Health Monitoring of Air Compressors Using Reconstruction-Based Deep Learning for Anomaly Detection with Increased Transparency
- Author
-
Muhammad Umair Hassan, Magnus Gribbestad, Kelvin Sundli, and Ibrahim A. Hameed
- Subjects
Explainable deep learning, predictive maintenance, black box, deep learning, anomaly detection, prognostics and health management (PHM), data mining - Abstract
Anomaly detection refers to detecting data points, events, or behaviour that do not comply with expected or normal behaviour. For example, a typical problem related to anomaly detection on an industrial level is having little labelled data and a few run-to-failure examples, making it challenging to develop reliable and accurate prognostics and health management systems for fault detection and identification. Certain machine learning approaches for anomaly detection require normal data to train, which reduces the need for historical data with fault labels, where the main task is to differentiate between normal and anomalous behaviour. Several reconstruction-based deep learning approaches are explored in this work and compared for detecting anomalies in air compressors. Anomalies in such systems are not point-anomalies, but instead, an increasing deviation from the normal condition as the system components start to degrade. In this paper, a descriptive range of the deviation based on the reconstruction-based techniques is proposed. Most anomaly detection approaches are considered black box models, predicting whether an event should be considered an anomaly or not. This paper proposes a method for increasing the transparency and explainability of reconstruction-based anomaly detection to indicate which parts of a system contribute to the deviation from expected behaviour. The results show that the proposed methods detect abnormal behaviour in air compressors accurately and reliably and indicate why it deviates. The proposed approach is capable of detecting faults without the need for historical examples of similar faults. The proposed method for explainable anomaly detection is crucial to any prognostics and health management (PHM) system due to its purpose of detecting deviations and identifying causes.
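The transparency idea, reporting which parts of the system drive the deviation, can be sketched with a linear stand-in for the deep reconstruction models: a PCA "autoencoder" fitted on normal data only, whose per-feature reconstruction error both scores anomalies and localises them. The correlated-channel toy data is an assumption for illustration.

```python
import numpy as np

def fit_pca(x_normal, k=2):
    """Fit a rank-k PCA reconstructor on normal data only (PCA stands in
    here for the paper's deep reconstruction models)."""
    mu = x_normal.mean(axis=0)
    _, _, vt = np.linalg.svd(x_normal - mu, full_matrices=False)
    return mu, vt[:k]

def reconstruction_error(x, mu, comps):
    """Per-feature squared reconstruction error; its sum is the anomaly
    score, and its per-feature breakdown indicates *which* signals deviate."""
    recon = mu + (x - mu) @ comps.T @ comps
    return (x - recon) ** 2

rng = np.random.default_rng(1)
normal = rng.standard_normal((500, 4))
normal[:, 3] = normal[:, 0] + 0.01 * rng.standard_normal(500)  # correlated channel
mu, comps = fit_pca(normal, k=3)
healthy = np.array([0.5, -0.2, 0.1, 0.5])
faulty = np.array([0.5, -0.2, 0.1, 3.5])    # channel 3 breaks the correlation
err_healthy = reconstruction_error(healthy, mu, comps)
err_faulty = reconstruction_error(faulty, mu, comps)
```

The faulty sample's error concentrates on the channel that broke the learned correlation, which is the kind of per-signal attribution the paper proposes for degrading compressor components.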
- Published
- 2021
12. Fractional Dynamics Identification via Intelligent Unpacking of the Sample Autocovariance Function by Neural Networks
- Author
-
Grzegorz Sikora, Agnieszka Wyłomańska, Ireneusz Jablonski, Michał Balcerek, and Dawid Szarek
- Subjects
Anomalous diffusion, neural network, fractional Brownian motion, Monte Carlo simulations, robustness, estimation, autocovariance function, Gaussian process, fractional dynamics - Abstract
Many single-particle tracking data related to the motion in crowded environments exhibit anomalous diffusion behavior. This phenomenon can be described by different theoretical models. In this paper, fractional Brownian motion (FBM) was examined as the exemplary Gaussian process with fractional dynamics. The autocovariance function (ACVF) is a function that completely determines a centered Gaussian process. In the case of experimental data with anomalous dynamics, the main problem is first to recognize the type of anomaly and then to properly reconstruct the physical rules governing such a phenomenon. The challenge is to identify the process from short trajectory inputs. Various approaches to address this problem can be found in the literature, e.g., using theoretical properties of the sample ACVF for a given process. This method is effective; however, it does not utilize all of the information contained in the sample ACVF for a given trajectory, i.e., only the values of statistics for selected lags are used for identification. An evolution of this approach is proposed in this paper, where the process is determined based on the knowledge extracted from the ACVF. The designed method is intuitive and uses directly available information in a new fashion. Moreover, the knowledge retrieval from the sample ACVF vector is enhanced with a learning-based scheme operating on the most informative subset of available lags, which is shown to be an effective encoder of the properties inherent in complex data. Finally, the robustness of the proposed algorithm for FBM is demonstrated with the use of Monte Carlo simulations.
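The two ingredients named above are concrete: the theoretical ACVF of fractional Gaussian noise (the increments of FBM with Hurst exponent H) and the sample ACVF computed from a trajectory, which is the vector fed to the learning scheme. A sketch of both:

```python
import numpy as np

def fgn_acvf(h, lags, sigma2=1.0):
    """Theoretical autocovariance of fractional Gaussian noise:
    gamma(k) = sigma^2/2 * (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H})."""
    k = np.abs(np.asarray(lags, dtype=float))
    return 0.5 * sigma2 * (np.abs(k + 1) ** (2 * h)
                           - 2 * k ** (2 * h)
                           + np.abs(k - 1) ** (2 * h))

def sample_acvf(x, max_lag):
    """Biased sample autocovariance of a trajectory's increments."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

lags = np.arange(0, 6)
acvf_sub = fgn_acvf(0.3, lags)   # subdiffusive: negative lag-1 correlation
acvf_sup = fgn_acvf(0.7, lags)   # superdiffusive: positive correlations
emp = sample_acvf(np.random.default_rng(2).standard_normal(2000), 5)
```

The sign of the lag-1 autocovariance already separates sub- and superdiffusion; the paper's point is that a network trained on the full (or most informative) lag vector extracts more than such hand-picked statistics.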
- Published
- 2020
13. Optimization of Selective Assembly for Shafts and Holes Based on Relative Entropy and Dynamic Programming
- Author
-
Mingyi Xing, Qiushuang Zhang, Xin Jin, and Zhijing Zhang
- Subjects
Kullback–Leibler divergence, mechanical engineering, precision instrument, selective assembly, dynamic programming, relative entropy, geometric error, optimization - Abstract
Selective assembly is the method of obtaining high precision assemblies from relatively low precision components. For precision instruments, the geometric error on mating surface is an important factor affecting assembly accuracy. Different from the traditional selective assembly method, this paper proposes an optimization method of selective assembly for shafts and holes based on relative entropy and dynamic programming. In this method, relative entropy is applied to evaluate the clearance uniformity between shafts and holes, and dynamic programming is used to optimize selective assembly of batches of shafts and holes. In this paper, the case studied has 8 shafts and 20 holes, which need to be assembled into 8 products. The results show that optimal combinations are selected, which provide new insights into selective assembly optimization and lay the foundation for selective assembly of multi-batch precision parts.
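The relative-entropy part of the method can be sketched as follows: histogram the clearances of a candidate shaft-hole pairing and score it by the Kullback–Leibler divergence from a target distribution (lower is better); a dynamic program or other optimiser would then search over pairings. The bin count, target distribution, and toy dimensions below are illustrative.

```python
from math import log

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q); zero iff the clearance
    histogram p matches the target distribution q exactly."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def clearance_histogram(shafts, holes, bins):
    """Histogram of hole-shaft clearances for one candidate pairing."""
    clearances = [h - s for s, h in zip(shafts, holes)]
    lo, hi = min(clearances), max(clearances)
    width = (hi - lo) / bins or 1.0     # degenerate case: all equal
    counts = [0] * bins
    for c in clearances:
        counts[min(int((c - lo) / width), bins - 1)] += 1
    n = len(clearances)
    return [c / n for c in counts]

shafts = [9.98, 10.01, 9.99, 10.02]        # toy diameters (mm)
holes = [10.05, 10.04, 10.07, 10.06]       # one candidate assignment
p = clearance_histogram(shafts, holes, bins=2)
score = relative_entropy(p, [0.5, 0.5])    # divergence from uniform target
```

This candidate pairing spreads its clearances evenly across the bins, so its score is zero; a skewed pairing scores strictly higher, and the dynamic program picks the assignment of shafts to holes minimising such scores across all products.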
- Published
- 2020
14. Contextuality Analysis of Impossible Figures
- Author
-
Ehtibar N. Dzhafarov and Víctor H. Cervantes
- Subjects
Theoretical computer science, deterministic systems, measures of contextuality, Escher, frequentist inference, contextuality, impossible figures, probabilistic logic, Kochen–Specker theorem, epistemic probabilities, random variables - Abstract
This paper has two purposes. One is to demonstrate contextuality analysis of systems of epistemic random variables. The other is to evaluate the performance of a new, hierarchical version of the measure of (non)contextuality introduced in earlier publications. As objects of analysis we use impossible figures of the kind created by the Penroses and Escher. We make no assumptions as to how an impossible figure is perceived, taking it instead as a fixed physical object allowing one of several deterministic descriptions. Systems of epistemic random variables are obtained by probabilistically mixing these deterministic systems. This probabilistic mixture reflects our uncertainty or lack of knowledge rather than random variability in the frequentist sense. (Entropy 2020, 22(9), 981; the format of the published paper differs from this preprint.)
- Published
- 2020
15. A New Multi-Attribute Emergency Decision-Making Algorithm Based on Intuitionistic Fuzzy Cross-Entropy and Comprehensive Grey Correlation Analysis
- Author
-
Ping Li, Ying Ji, Shaojian Qu, and Zhong Wu
- Subjects
Grey correlation analysis, intuitionistic fuzzy cross-entropy, attribute weights, multi-attribute emergency decision-making, earthquake shelters, sensitivity analysis, ranking - Abstract
Intuitionistic fuzzy distance measurement is an effective method for studying multi-attribute emergency decision-making (MAEDM) problems. Unfortunately, the traditional intuitionistic fuzzy distance measurement method cannot accurately reflect the difference between membership and non-membership data, which easily causes information confusion. Therefore, starting from the intuitionistic fuzzy number (IFN), this paper constructs a decision-making model based on intuitionistic fuzzy cross-entropy and a comprehensive grey correlation analysis algorithm. For MAEDM problems with completely unknown or partially known attribute weights, the method establishes a grey correlation analysis algorithm based on the objective evaluation values and the subjective preference values of the decision makers (DMs), which makes up for the information loss of traditional models and greatly improves the accuracy of MAEDM. Finally, taking the Wenchuan Earthquake of May 12th, 2008 as a case study, this paper constructs and solves the problem of ranking shelters. A sensitivity comparison shows that as the grey resolution coefficient increases from 0.4 to 1.0, the ranking of the shelters remains stable. Compared to the traditional intuitionistic fuzzy distance, the method is shown to be more reliable.
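A sketch of the cross-entropy ingredient, using one widely cited form of intuitionistic fuzzy cross-entropy (the Vlachos–Sergiadis definition, which may differ from the paper's exact measure): alternatives closer to the ideal profile get a smaller symmetric divergence and hence a better rank. The shelter profiles are hypothetical.

```python
from math import log

def if_cross_entropy(a, b):
    """Intuitionistic fuzzy cross-entropy (Vlachos-Sergiadis form);
    a and b are lists of (membership, non-membership) pairs, with
    0 * ln(0) taken as 0."""
    def term(x, y):
        return x * log(2 * x / (x + y)) if x > 0 else 0.0
    return sum(term(ma, mb) + term(na, nb)
               for (ma, na), (mb, nb) in zip(a, b))

def symmetric_divergence(a, b):
    """Symmetrised cross-entropy used to compare an alternative against
    the ideal solution (nonnegative, zero iff the profiles coincide)."""
    return if_cross_entropy(a, b) + if_cross_entropy(b, a)

ideal = [(0.9, 0.05), (0.8, 0.1)]       # hypothetical ideal shelter profile
shelter_1 = [(0.85, 0.1), (0.75, 0.15)]
shelter_2 = [(0.4, 0.5), (0.3, 0.6)]
d1 = symmetric_divergence(shelter_1, ideal)
d2 = symmetric_divergence(shelter_2, ideal)
```

Here shelter 1 ranks above shelter 2 since its divergence from the ideal is smaller; the grey correlation step would then fuse such scores with the DMs' subjective preferences.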
- Published
- 2020
16. A Note on Complexities by Means of Quantum Compound Systems
- Author
-
Noboru Watanabe
- Subjects
Orthographic projection, von Neumann entropy, quantum compound system, separable state, joint probability distribution, quantum information, quantum entropy - Abstract
It has been shown that joint probability distributions of quantum systems generally do not exist, and the key to resolving this issue is the compound state invented by Ohya. The Ohya compound state, constructed from the Schatten decomposition (i.e., one-dimensional orthogonal projections) of the input state, shows the correlation between the states of the input and output systems. In 1983, Ohya formulated the quantum mutual entropy by applying this compound state. Since this mutual entropy satisfies the fundamental inequality, one may say that it represents the amount of information correctly transmitted from the input system through the channel to the output system, and it may play an important role in discussing the efficiency of information transfer in quantum systems. Since the Ohya compound state is a separable state, it is important to look more carefully into entangled compound states. This paper is intended as an investigation of the construction of the entangled compound state, and the hybrid entangled compound state is introduced. The purpose of this paper is to consider the validity of the compound states in constructing quantum mutual entropy type complexity. It seems reasonable to suppose that the quantum mutual entropy type complexity defined by using the entangled compound state is not useful for discussing the efficiency of information transmission from the initial system to the final system.
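The separable Ohya compound state is concrete enough to sketch: take the Schatten (spectral) decomposition of the input state and pair each eigenprojection with its channel output. The half-depolarizing channel and two-level input below are toy choices for illustration.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

def ohya_compound(rho, channel):
    """Separable Ohya compound state: sum_k lam_k P_k (x) channel(P_k),
    built from the Schatten (spectral) decomposition of the input."""
    lam, vecs = np.linalg.eigh(rho)
    d = rho.shape[0]
    sigma = np.zeros((d * d, d * d), dtype=complex)
    for k in range(d):
        p_k = np.outer(vecs[:, k], vecs[:, k].conj())
        sigma += lam[k] * np.kron(p_k, channel(p_k))
    return sigma

def half_depolarizing(p):
    """Toy qubit channel: mix the input with the maximally mixed state."""
    return 0.5 * p + 0.25 * np.trace(p) * np.eye(2)

rho = np.diag([0.7, 0.3])                       # input state
sigma = ohya_compound(rho, half_depolarizing)   # separable compound state
s_in = von_neumann_entropy(rho)
s_comp = von_neumann_entropy(sigma)
```

The result is a valid (trace-one) state on the input-output system, separable by construction as a convex mixture of product states; the entangled compound states the paper investigates replace these product terms with entangled ones.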
- Published
- 2020
17. An Image Encryption Algorithm Using Logistic Map with Plaintext-Related Parameter Values
- Author
-
Jakub Oravec, Lubos Ovsenik, and Jan Papaj
- Subjects
Plaintext-related, chaotic map, logistic map, Lyapunov exponent, fixed point, encryption, image encryption - Abstract
This paper deals with a plaintext-related image encryption algorithm that modifies the parameter values used by the logistic map according to plain image pixel intensities. The parameter values are altered in a row-wise manner, which enables the usage of the same procedure also during the decryption. Furthermore, the parameter modification technique takes into account knowledge about the logistic map, its fixed points and possible periodic cycles. Since the resulting interval of parameter values achieves high positive values of Lyapunov exponents, the chaotic behavior of the logistic map should be most pronounced. These assumptions are verified by a set of experiments and the obtained numerical values are compared with those reported in relevant papers. It is found that the proposed design that uses a simpler, but well-studied, chaotic map with mitigated issues obtains results comparable with algorithms that use more complex chaotic systems. Moreover, the proposed solution is much faster than other approaches with a similar purpose.
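A minimal sketch of the plaintext-related, row-wise idea: each row is XORed with a logistic-map keystream, and the map parameter is then nudged by that row's pixel sum, so later keystream depends on earlier plaintext while the decryptor can repeat the same updates on recovered rows. The update rule and parameter band are illustrative, not the paper's formulas.

```python
def logistic_keystream(r, x, n, skip=100):
    """Iterate x <- r*x*(1-x), discard a transient, quantise to bytes."""
    for _ in range(skip):
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out, x

def crypt_rows(rows, key=(3.99, 0.41), decrypt=False):
    """XOR each row with a logistic-map keystream, then nudge the map
    parameter by the *plaintext* row's pixel sum. During decryption the
    recovered row supplies the same update, so the same row-wise
    procedure is its own inverse."""
    r, x = key
    result = []
    for row in rows:
        ks, x = logistic_keystream(r, x, len(row))
        out = [p ^ k for p, k in zip(row, ks)]
        result.append(out)
        plain_row = out if decrypt else row
        r = 3.89 + (r + sum(plain_row) / 255.0) % 0.1  # stays in [3.89, 3.99)
    return result

plain = [[10, 200, 30], [45, 0, 255]]    # toy 2x3 greyscale image
cipher = crypt_rows(plain)
recovered = crypt_rows(cipher, decrypt=True)
```

Keeping the parameter inside a band where the map behaves chaotically mirrors the paper's point about avoiding fixed points and periodic windows, though the exact safe interval and update formula there are derived, not hard-coded.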
- Published
- 2021
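The security argument in the abstract above rests on restricting the logistic-map parameter to values with positive Lyapunov exponents. That criterion can be checked numerically; the sketch below is illustrative only (the function name and sample parameter values are ours, not the authors' row-wise parameter-modification scheme):

```python
import math

def logistic_lyapunov(r, x0=0.3, n=5000, burn_in=500):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x).

    The exponent is the long-run average of ln|f'(x)| = ln|r*(1 - 2x)|;
    a positive value indicates chaotic behavior.
    """
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

# r near 4 gives a clearly positive exponent (chaos), while r = 2.5
# converges to a stable fixed point (negative exponent).
```

Scanning candidate parameter intervals with such an estimate is one way to confirm they lie in the chaotic regime before using them for encryption.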
18. Improved Deep Q-Network for User-Side Battery Energy Storage Charging and Discharging Strategy in Industrial Parks
- Author
-
Wendong Xiao, Chengpeng Jiang, Jinglin Li, Jinwei Xiang, and Shuai Chen
- Subjects
Battery (electricity) ,Artificial neural network ,Computer science ,Science ,Physics ,QC1-999 ,General Physics and Astronomy ,Energy consumption ,Astrophysics ,Lean manufacturing ,Article ,Automotive engineering ,Power (physics) ,QB460-466 ,Cost reduction ,battery energy storage ,deep Q-network ,charging and discharging strategies ,Reinforcement learning ,industrial parks ,Energy (signal processing) - Abstract
Battery energy storage technology is an important part of industrial parks for ensuring a stable power supply, but its rough charging and discharging mode struggles to meet the application requirements of energy saving, emission reduction, cost reduction, and efficiency improvement. As a classic method of deep reinforcement learning, the deep Q-network is widely used to solve the problem of user-side battery energy storage charging and discharging, and in some scenarios its performance has reached the level of a human expert. However, the updating of storage priorities in the experience memory often lags behind the updating of the Q-network parameters. In response to the need for lean management of battery charging and discharging, this paper proposes an improved deep Q-network that updates the priorities of sequence samples and improves the training performance of the deep neural network, reducing the cost of charging and discharging actions and the energy consumption in the park. The proposed method considers factors such as real-time electricity price, battery status, and time. The energy consumption state, charging and discharging behavior, reward function, and neural network structure are designed to support flexible scheduling of charging and discharging strategies and, ultimately, to optimize the benefits of battery energy storage. The proposed method solves the problem of priority-update lag and improves the utilization efficiency and learning performance of the experience pool samples. Electricity price data from the United States and some regions of China are selected for the simulation experiments. The experimental results show that, compared with the traditional algorithm, the proposed approach achieves better performance under both electricity price systems, greatly reducing the cost of battery energy storage and providing a stronger guarantee for the safe and stable operation of battery energy storage systems in industrial parks.
- Published
- 2021
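The priority-update lag discussed in the abstract above concerns prioritized experience replay, where stored priorities go stale relative to the current Q-network. A minimal proportional prioritized replay buffer whose priorities are refreshed immediately after each update can be sketched as follows (the class, parameter names, and constants are illustrative, not the paper's design):

```python
import random

class PrioritizedReplay:
    """Minimal proportional prioritized experience replay.

    Transitions are sampled with probability proportional to priority**alpha,
    and priorities are refreshed from the latest TD errors right after each
    learning step, so they do not lag behind the Q-network parameters.
    """
    def __init__(self, capacity=1000, alpha=0.6, eps=1e-3):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.priorities = [], []

    def add(self, transition, td_error=1.0):
        if len(self.data) >= self.capacity:      # drop the oldest transition
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, batch_size):
        idx = random.choices(range(len(self.data)),
                             weights=self.priorities, k=batch_size)
        return idx, [self.data[i] for i in idx]

    def update_priorities(self, idx, td_errors):
        for i, err in zip(idx, td_errors):
            self.priorities[i] = (abs(err) + self.eps) ** self.alpha
```

In a training loop, `update_priorities` would be called with the fresh TD errors of the just-sampled batch, which is the synchronization the abstract argues for.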
19. Robust Controller Design for Multi-Input Multi-Output Systems Using Coefficient Diagram Method
- Author
-
Chonghui Wang, Fanwei Meng, Shengya Meng, and Kai Liu
- Subjects
Optimization problem ,Computer science ,Noise (signal processing) ,Science ,Physics ,QC1-999 ,MIMO ,PSO ,CDM ,General Physics and Astronomy ,Particle swarm optimization ,Decoupling (cosmology) ,Astrophysics ,Article ,measurement noise ,QB460-466 ,Coupling (computer programming) ,Control theory ,Coefficient diagram method ,coupling ,robust controller ,Computer Science::Information Theory - Abstract
The coupling between variables in multi-input multi-output (MIMO) systems complicates controller design. To address this problem, this paper combines particle swarm optimization (PSO) with the coefficient diagram method (CDM) and proposes a robust controller design strategy for MIMO systems. The decoupling problem is transformed into a compensator parameter optimization problem, and PSO optimizes the compensator parameters to reduce the coupling effect in the MIMO system. For MIMO systems with measurement noise, the effectiveness of CDM in handling measurement noise is analyzed. This paper gives the control design steps for MIMO systems. Finally, simulation experiments on four typical MIMO systems demonstrate the effectiveness of the proposed method.
- Published
- 2021
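The compensator-parameter search described in the abstract above relies on particle swarm optimization. A generic PSO sketch for minimizing an arbitrary objective looks like the following (the coefficients are common textbook defaults, not the paper's settings):

```python
import random

def pso(objective, dim, n_particles=20, iters=200, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimization: each particle's velocity is
    pulled toward its personal best and the swarm's global best."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and attraction coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting the objective would score a candidate compensator (e.g., by a coupling metric); here any function of a parameter vector works.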
20. A Bit Shift Image Encryption Algorithm Based on Double Chaotic Systems
- Author
-
Yue Zhao and Lingfeng Liu
- Subjects
Computer science ,business.industry ,Science ,Physics ,QC1-999 ,Key space ,chaotic system ,Chaotic ,General Physics and Astronomy ,image encryption ,Astrophysics ,Encryption ,Article ,Scrambling ,Image (mathematics) ,QB460-466 ,Nonlinear system ,bit shift ,Computer Science::Multimedia ,Key (cryptography) ,Deterministic system (philosophy) ,business ,Algorithm ,Computer Science::Cryptography and Security - Abstract
A chaotic system is a deterministic system with seemingly random, irregular motion; its behavior is uncertain, unrepeatable, and unpredictable. In recent years, researchers have proposed various image encryption schemes based on a single low-dimensional or high-dimensional chaotic system, but many of these algorithms suffer from problems such as low security. Therefore, designing a good chaotic system and encryption scheme is very important for encryption algorithms. This paper constructs a new double chaotic system based on tent mapping and logistic mapping and, to verify its practicability and feasibility, proposes a bit-shift image encryption algorithm based on the new system. The algorithm uses an improved nonlinear feedback function to generate two random sequences: one is used to generate the index sequence, the other to generate the encryption matrix, and the index sequence controls the generation of the encryption matrix required for encryption. Then, the encryption matrix and the scrambling matrix are XORed to obtain the first-stage encrypted image. Finally, a bit-shift encryption method is adopted to prevent the harm caused by key leakage and to improve the security of the algorithm. Numerical experiments show that the key space of the algorithm is large, the key sensitivity is high, and the algorithm resists various attacks well. The analysis shows that this algorithm has certain competitive advantages compared with other encryption algorithms.
- Published
- 2021
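A toy version of the double-chaos idea can be sketched by deriving a keystream from a tent map and a logistic map together, then applying a circular bit shift as a second diffusion stage. This is an illustrative sketch only, not the authors' algorithm (their feedback function, index sequence, and scrambling matrix are not reproduced, and all constants here are our own):

```python
def tent(x, mu=1.99):
    """Tent map on [0, 1)."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def logistic(x, r=3.99):
    """Logistic map on (0, 1)."""
    return r * x * (1.0 - x)

def keystream(n, x0=0.37, y0=0.61):
    """Generate n key bytes by iterating both maps and mixing their orbits."""
    x, y, out = x0, y0, []
    for _ in range(n):
        x, y = tent(x), logistic(y)
        out.append(int((x + y) * 1e6) % 256)
    return bytes(out)

def encrypt(plain: bytes, shift=3):
    ks = keystream(len(plain))
    xored = bytes(p ^ k for p, k in zip(plain, ks))
    # circular left bit shift of every byte as a second stage
    return bytes(((b << shift) | (b >> (8 - shift))) & 0xFF for b in xored)

def decrypt(cipher: bytes, shift=3):
    # undo the bit shift, then XOR with the same keystream
    unshifted = bytes(((b >> shift) | (b << (8 - shift))) & 0xFF for b in cipher)
    ks = keystream(len(cipher))
    return bytes(c ^ k for c, k in zip(unshifted, ks))
```

Treating an image as a flat byte array, the same two stages (chaotic XOR, then bit shifting) round-trip losslessly.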
21. A Novel Framework for Anomaly Detection for Satellite Momentum Wheel Based on Optimized SVM and Huffman-Multi-Scale Entropy
- Author
-
Pengpeng Liu, Mingjia Lei, Rixin Wang, Yuqing Li, and Minqiang Xu
- Subjects
Computer science ,Science ,QC1-999 ,General Physics and Astronomy ,Astrophysics ,computer.software_genre ,Huffman coding ,Reaction wheel ,Article ,Constant false alarm rate ,symbols.namesake ,Entropy (information theory) ,Physics ,Huffman-multi-scale entropy (HMSE) ,Particle swarm optimization ,Directed acyclic graph ,anomaly detection ,QB460-466 ,Support vector machine ,symbols ,support vector machine (SVM) ,Anomaly detection ,Data mining ,satellite momentum wheel ,computer ,adaptive particle swarm optimization (APSO) - Abstract
The health status of the momentum wheel is vital for a satellite, and research on anomaly detection for satellites has recently become more extensive. Previous research mostly required simulation models of key components; however, such physical models are difficult to construct, and the simulation data do not match the telemetry data in engineering applications. To overcome this problem, this paper proposes a new anomaly detection framework based on real telemetry data. First, the time-domain and frequency-domain features of the preprocessed telemetry signal are calculated, and the effective features are selected through evaluation. Second, a new Huffman-multi-scale entropy (HMSE) system is proposed, which can effectively improve the discrimination between different data types. Third, this paper adopts a multi-class SVM model based on the directed acyclic graph (DAG) principle and proposes an improved adaptive particle swarm optimization (APSO) method to train the SVM model. The proposed method is applied to anomaly detection on satellite momentum-wheel voltage telemetry data. The recognition accuracy and detection rate of the proposed method reach 99.60% and 99.87%, respectively. Compared with other methods, the proposed method effectively improves the recognition accuracy and detection rate while reducing the false alarm rate and the missed alarm rate.
- Published
- 2021
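The multi-scale step that HMSE builds on is the standard coarse-graining used by multiscale entropy: averaging non-overlapping windows of the signal at each scale before computing an entropy value. A minimal sketch of that shared step (the Huffman-coding weighting that distinguishes HMSE is the paper's own contribution and is not reproduced here):

```python
def coarse_grain(signal, scale):
    """Coarse-grain a series for multiscale entropy analysis:
    average consecutive non-overlapping windows of length `scale`.
    At scale 1 the series is returned unchanged (as floats)."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]
```

A multiscale entropy curve is then obtained by applying a single-scale entropy estimator (e.g., sample entropy) to `coarse_grain(signal, s)` for s = 1, 2, 3, ...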
22. A Model for Tacit Communication in Collaborative Human-UAV Search-and-Rescue
- Author
-
Vijeth Hebbar and Cedric Langbort
- Subjects
game theory ,business.industry ,Computer science ,Implicit communication ,Science ,Physics ,QC1-999 ,Rendezvous ,General Physics and Astronomy ,Topology (electrical circuits) ,Astrophysics ,Article ,Human–robot interaction ,QB460-466 ,Stackelberg competition ,human robot interaction ,Artificial intelligence ,Signaling game ,signaling ,business ,Game theory ,Search and rescue - Abstract
Tacit communication can be exploited in human-robot interaction (HRI) scenarios to achieve desirable outcomes. This paper models a particular search-and-rescue (SAR) scenario as a modified asymmetric rendezvous game in which limited signaling capabilities exist between the two players, the rescuer and the rescuee. We model the situation as a cooperative Stackelberg signaling game, where the rescuer acts as a leader in signaling its intent to the rescuee. We present an efficient game-theoretic approach to obtain the optimal signaling policy to be employed by the rescuer. We then robustify this approach against uncertainties in the rescue topology and deviations in rescuee behavior. The paper thus introduces a game-theoretic framework for modeling an HRI scenario with implicit communication capacity.
- Published
- 2021
23. Status Set Sequential Pattern Mining Considering Time Windows and Periodic Analysis of Patterns
- Author
-
Shenghan Zhou, Xinpeng Ji, Yue Zhang, Wenkui Hou, Wenbing Chang, Houxiang Liu, Yiyong Xiao, and Bang Chen
- Subjects
Apriori algorithm ,Computer science ,Science ,QC1-999 ,General Physics and Astronomy ,02 engineering and technology ,Astrophysics ,computer.software_genre ,Article ,Set (abstract data type) ,Time windows ,Factor (programming language) ,TW-Apriori algorithm ,0202 electrical engineering, electronic engineering, information engineering ,Entropy (information theory) ,computer.programming_language ,Physics ,Contrast (statistics) ,020206 networking & telecommunications ,data mining ,QB460-466 ,Status set ,Local time ,time window ,020201 artificial intelligence & image processing ,Data mining ,periodicity analysis ,computer ,status set sequential pattern mining - Abstract
Traditional sequential pattern mining considers the whole time period and often ignores sequential patterns that occur only in local time windows, as well as their possible periodicity. Therefore, in order to overcome the limitations of traditional methods, this paper proposes status set sequential pattern mining with time windows (SSPMTW). In contrast to traditional methods, the item status is considered, and time windows, minimum confidence, minimum coverage, minimum factor set ratios, and other constraints are added to mine more valuable rules in local time windows. The periodicity of these rules is also analyzed. Based on the proposed method, this paper improves the Apriori algorithm, proposes the TW-Apriori algorithm, and explains its basic idea. Then, the feasibility, validity, and efficiency of the proposed method and algorithm are verified with small-scale and large-scale examples. In the large-scale numerical example, the influence of the various constraints on the mining results is analyzed. Finally, the results of SSPM and SSPMTW are compared, showing that SSPMTW can uncover rules that exist in local time windows and analyze their periodicity, which addresses SSPM's neglect of such rules and overcomes the limitations of traditional sequential pattern mining algorithms. In addition, the rules mined by SSPMTW reduce the entropy of the system.
- Published
- 2021
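The TW-Apriori idea combines a time-window restriction with classic Apriori-style support counting. Both ingredients can be sketched minimally as follows (the helper names are hypothetical; the real SSPMTW additionally handles item status, confidence, coverage, and factor-set constraints):

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """One Apriori-style pass: frequent 1-itemsets, then candidate pairs
    built only from frequent items (the Apriori pruning principle)."""
    c1 = Counter(item for t in transactions for item in set(t))
    l1 = {frozenset([i]) for i, n in c1.items() if n >= min_support}
    frequent_items = {next(iter(s)) for s in l1}
    c2 = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t) & frequent_items), 2):
            c2[frozenset(pair)] += 1
    l2 = {s for s, n in c2.items() if n >= min_support}
    return l1, l2

def window(transactions, timestamps, start, end):
    """Restrict mining to a local time window, as SSPMTW does first."""
    return [t for t, ts in zip(transactions, timestamps) if start <= ts < end]
```

Mining each window separately, then comparing which rules recur across windows, is the periodicity analysis the abstract describes.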
24. A New Method Based on Time-Varying Filtering Intrinsic Time-Scale Decomposition and General Refined Composite Multiscale Sample Entropy for Rolling-Bearing Feature Extraction
- Author
-
Liwei Zhan, Jianpeng Ma, Song Han, Guang-Zhu Zhang, and Chengwei Li
- Subjects
coyote optimization algorithm ,signal denoising ,Computer science ,General Physics and Astronomy ,lcsh:Astrophysics ,02 engineering and technology ,Fault (power engineering) ,Article ,intrinsic time-scale decomposition ,Robustness (computer science) ,lcsh:QB460-466 ,0202 electrical engineering, electronic engineering, information engineering ,Decomposition (computer science) ,generalized refined composite multiscale sample entropy ,Entropy (energy dispersal) ,lcsh:Science ,Noise (signal processing) ,020208 electrical & electronic engineering ,fault diagnosis ,lcsh:QC1-999 ,Sample entropy ,rolling bearing ,lcsh:Q ,020201 artificial intelligence & image processing ,Decomposition method (constraint satisfaction) ,Algorithm ,lcsh:Physics ,Energy (signal processing) - Abstract
The early fault diagnosis of rolling bearings has always been difficult due to the interference of strong noise. This paper proposes a new entropy-assisted method for the early fault diagnosis of rolling bearings. First, a new signal decomposition method is proposed: intrinsic time-scale decomposition based on time-varying filtering. It is introduced into the framework of complete ensemble intrinsic time-scale decomposition with adaptive noise (CEITDAN). Compared with traditional intrinsic time-scale decomposition, intrinsic time-scale decomposition based on time-varying filtering improves frequency-separation performance and is robust to noise interference. However, the decomposition parameters (the bandwidth threshold and the B-spline order) have significant impacts on the decomposition results and must be set manually. To address this problem, this paper proposes rolling-bearing fault diagnosis optimization based on an improved coyote optimization algorithm (COA). The minimal generalized refined composite multiscale sample entropy is used as the objective function, and the improved COA yields optimal time-varying-filtering-based intrinsic time-scale decomposition parameters matched to the input signal. By analyzing the generalized refined composite multiscale sample entropy (GRCMSE), it is determined whether a mode component is dominated by the fault signal; the signal is then reconstructed and decomposed again. Finally, the mode component with the highest energy in the central frequency band is selected for envelope spectrum analysis for fault diagnosis. Simulated and experimental signals are used to verify the effectiveness of the proposed method.
- Published
- 2021
25. An Improved Encoder-Decoder Network Based on Strip Pool Method Applied to Segmentation of Farmland Vacancy Field
- Author
-
Yuhang Yang, Weiwei Cai, Yilang Qin, Xin Ning, Xixin Zhang, and Zhiyong Li
- Subjects
farmland vacancy segmentation ,Computer science ,Pooling ,General Physics and Astronomy ,Word error rate ,lcsh:Astrophysics ,02 engineering and technology ,strip pooling ,Article ,encoder–decoder ,Field (computer science) ,Robustness (computer science) ,Vacancy defect ,lcsh:QB460-466 ,0202 electrical engineering, electronic engineering, information engineering ,Segmentation ,lcsh:Science ,business.industry ,020206 networking & telecommunications ,Pattern recognition ,crop growth assessment ,semantic segmentation ,lcsh:QC1-999 ,Core (game theory) ,Test set ,lcsh:Q ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,lcsh:Physics - Abstract
In research on green vegetation coverage in remote sensing image segmentation, the crop planting area is often obtained by semantic segmentation of images taken from high altitude. This approach can yield the rate of cultivated land in a region (such as a country), but it does not reflect the real situation of a particular farmland. Therefore, this paper builds a dataset from low-altitude images of farmland. After comparing several mainstream semantic segmentation algorithms, a new method better suited to farmland vacancy segmentation is proposed. Additionally, the Strip Pooling module (SPM) and the Mixed Pooling module (MPM), both built around strip pooling, are designed and fused into the semantic segmentation network to better extract the vacancy features. Considering the high cost of manual data annotation, this paper uses an improved ResNet as the backbone and applies data augmentation to improve the performance and robustness of the model. As a result, the accuracy of the proposed method on the test set is 95.6%, the mIoU is 77.6%, and the error rate is 7%. Compared to the existing model, the mIoU value is improved by nearly 4%, reaching the level of practical application.
- Published
- 2021
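The strip-pooling core of the SPM/MPM modules averages a feature map along entire rows and columns rather than over square windows, capturing long-range context in each direction. A minimal NumPy sketch of that idea (the full modules also involve learned 1-D convolutions and gating, which are omitted here):

```python
import numpy as np

def strip_pool(feature_map):
    """Strip pooling on a 2-D feature map: average along whole rows and
    whole columns, broadcast the two 1-D strips back to H x W, and fuse
    them by addition."""
    h_strip = feature_map.mean(axis=1, keepdims=True)   # H x 1: horizontal context
    v_strip = feature_map.mean(axis=0, keepdims=True)   # 1 x W: vertical context
    return h_strip + v_strip                            # H x W fused map
```

Because each output cell mixes its full row and full column statistics, elongated structures (such as vacant strips in a field) are emphasized more than with square pooling windows.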
26. Optimization of Big Data Scheduling in Social Networks
- Author
-
Weina Fu, Shuai Liu, and Gautam Srivastava
- Subjects
Mathematical optimization ,Information transfer ,social networks ,Computer science ,information security ,Big data ,General Physics and Astronomy ,lcsh:Astrophysics ,02 engineering and technology ,Database design ,Article ,big data ,task volume ,lcsh:QB460-466 ,0202 electrical engineering, electronic engineering, information engineering ,information transfer ,Entropy (information theory) ,database design ,scheduling ,lcsh:Science ,Computer Science::Operating Systems ,Social network ,business.industry ,020206 networking & telecommunications ,Information security ,Collision ,lcsh:QC1-999 ,classification ,020201 artificial intelligence & image processing ,lcsh:Q ,business ,entropy ,optimization ,lcsh:Physics ,Data transmission - Abstract
In social network big data scheduling, target data easily conflict within the same data node. Of the different kinds of entropy measures, this paper focuses on the optimization of target entropy. Therefore, this paper presents an optimized method for the scheduling of big data in social networks that takes into account each task's amount of data communication during target data transmission to construct a big data scheduling model. Firstly, the task scheduling model is constructed to solve the problem of conflicting target data in the same data node. Next, the necessary conditions for the scheduling of tasks are analyzed. Then, the aperiodic task distribution function is calculated. Finally, tasks are scheduled based on the minimum product of the corresponding resource level, and the minimum execution time of each task is calculated. Experimental results show that our optimized scheduling model quickly optimizes the scheduling of social network data and solves the problem of strong data collision.
- Published
- 2019
27. Towards Quantum-Secured Permissioned Blockchain: Signature, Consensus, and Logic
- Author
-
Quanlong Wang, Xin Sun, Mirek Sopek, and Piotr Kulicki
- Subjects
Scheme (programming language) ,blockchain ,Blockchain ,Computer science ,General Physics and Astronomy ,lcsh:Astrophysics ,02 engineering and technology ,Quantum key distribution ,Computer security ,computer.software_genre ,01 natural sciences ,Article ,quantum computing ,Digital signature ,lottery ,0103 physical sciences ,lcsh:QB460-466 ,0202 electrical engineering, electronic engineering, information engineering ,010306 general physics ,lcsh:Science ,Protocol (object-oriented programming) ,Quantum computer ,computer.programming_language ,Signature (logic) ,lcsh:QC1-999 ,consensus ,digital signature ,Scalability ,020201 artificial intelligence & image processing ,lcsh:Q ,computer ,lcsh:Physics - Abstract
While blockchain is universally considered a significant technology for the near future, some of its pillars are under threat from another thriving technology: quantum computing. In this paper, we propose important safeguards against this threat by developing a framework for a quantum-secured, permissioned blockchain called Logicontract (LC). LC adopts a digital signature scheme based on Quantum Key Distribution (QKD) mechanisms and a vote-based consensus algorithm to achieve consensus on the blockchain. The main contributions of this paper are the development of: (1) an unconditionally secure signature scheme for LC, which makes it immune to attacks by quantum computers; (2) a scalable consensus protocol used by LC; (3) a logic-based scripting language for the creation of smart contracts on LC; and (4) a quantum-resistant lottery protocol, which illustrates the power and usage of LC.
- Published
- 2019
28. Beyond the Maximum Storage Capacity Limit in Hopfield Recurrent Neural Networks
- Author
-
Giorgio Gosti, Giancarlo Ruocco, Viola Folli, and Marco Leonetti
- Subjects
pattern storage ,Computer science ,Hopfield neural networks ,General Physics and Astronomy ,lcsh:Astrophysics ,Topology ,01 natural sciences ,Article ,Hopfield network ,03 medical and health sciences ,0302 clinical medicine ,Redundancy (information theory) ,0103 physical sciences ,lcsh:QB460-466 ,recurrent neural networks ,010306 general physics ,lcsh:Science ,Artificial neural network ,Hamming distance ,Content-addressable memory ,lcsh:QC1-999 ,Recurrent neural network ,Thermodynamic limit ,lcsh:Q ,State (computer science) ,030217 neurology & neurosurgery ,lcsh:Physics - Abstract
In a neural network, an autapse is a particular kind of synapse that links a neuron onto itself. Autapses are almost always absent in both artificial and biological neural networks. Moreover, redundant or similar stored states tend to interact destructively. This paper shows how autapses, together with stable-state redundancy, can improve the storage capacity of a recurrent neural network. Recent research shows that, in an N-node Hopfield neural network with autapses, the number of stored patterns (P) is not limited to the well-known bound 0.14N, as it is for networks without autapses. More precisely, as the number of stored patterns increases well beyond the 0.14N threshold, for P much greater than N, the retrieval error asymptotically approaches a value below unity. Consequently, the reduction of retrieval errors allows a number of stored memories that largely exceeds what was previously considered possible. Unfortunately, soon after, new results showed that, in the thermodynamic limit, for a network with autapses in this high-storage regime, the basin of attraction of the stored memories shrinks to a single state. This means that, for each stable state associated with a stored memory, even a single bit error in the initial pattern would lead the system to a stationary state associated with a different memory. This limits the potential use of this kind of Hopfield network as an associative memory. This paper presents a strategy to overcome this limitation by improving the error-correcting characteristics of the Hopfield neural network. The proposed strategy allows us to form what we call an absorbing neighborhood of states surrounding each stored memory: a set defined by a Hamming distance around a network state, which is absorbing because, in the long-time limit, states inside it are absorbed by stable states in the set.
We show that this strategy allows the network to store an exponential number of memory patterns, each surrounded by an absorbing neighborhood of exponentially growing size.
- Published
- 2019
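The basic objects in the abstract above, Hebbian storage with or without autapses (the diagonal of the weight matrix) and iterative recall, can be sketched as follows. This is a textbook Hopfield setup, not the paper's exponential-capacity construction:

```python
import numpy as np

def train_hopfield(patterns, autapses=True):
    """Hebbian weight matrix from +/-1 patterns; keeping the diagonal
    nonzero retains autapses (self-connections)."""
    P = np.array(patterns, dtype=float)      # one pattern per row
    W = P.T @ P / P.shape[1]
    if not autapses:
        np.fill_diagonal(W, 0.0)             # the conventional choice
    return W

def recall(W, state, steps=20):
    """Synchronous sign updates until a fixed point (or a step limit)."""
    s = np.array(state, dtype=float)
    for _ in range(steps):
        nxt = np.sign(W @ s)
        nxt[nxt == 0] = 1.0                  # break ties toward +1
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s
```

Flipping `autapses` on and off in such a sketch is the simplest way to observe how self-connections change which initial states converge to a stored pattern.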
29. Rateless Codes-Based Secure Communication Employing Transmit Antenna Selection and Harvest-To-Jam under Joint Effect of Interference and Hardware Impairments
- Author
-
Miroslav Voznak, Tran Trung Duy, Phuong T. Tran, Tan N. Nguyen, Nguyen Quang Sang, and Phu Tran Tin
- Subjects
energy harvesting ,Computer science ,General Physics and Astronomy ,lcsh:Astrophysics ,02 engineering and technology ,Data_CODINGANDINFORMATIONTHEORY ,Article ,rateless codes ,Secure communication ,lcsh:QB460-466 ,0202 electrical engineering, electronic engineering, information engineering ,transmit antenna selection ,lcsh:Science ,Rayleigh fading ,business.industry ,Network packet ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Co-channel interference ,020206 networking & telecommunications ,Eavesdropping ,lcsh:QC1-999 ,020201 artificial intelligence & image processing ,co-channel interference ,lcsh:Q ,hardware impairments ,business ,Decoding methods ,Computer hardware ,lcsh:Physics ,Data transmission ,Communication channel - Abstract
In this paper, we propose a rateless-codes-based communication protocol to provide security for wireless systems. In the proposed protocol, a source uses the transmit antenna selection (TAS) technique to transmit Fountain-encoded packets to a destination in the presence of an eavesdropper. Moreover, a cooperative jammer node harvests energy from the radio frequency (RF) signals of the source and the interference sources to generate jamming noise at the eavesdropper. The data transmission terminates as soon as the destination has received enough encoded packets to decode the original data of the source. To obtain secure communication, the destination must receive sufficient encoded packets before the eavesdropper does. The combination of the TAS and harvest-to-jam techniques achieves security and energy efficiency by reducing the number of data transmissions, increasing the quality of the data channel, decreasing the quality of the eavesdropping channel, and supplying energy to the jammer. The main contribution of this paper is the derivation of exact closed-form expressions for the outage probability (OP), the probability of successful and secure communication (SS), the intercept probability (IP), and the average number of time slots used by the source over Rayleigh fading channels under the joint impact of co-channel interference and hardware impairments. Monte Carlo simulations are then presented to verify the theoretical results. Web of Science 21 7 art. no. 700
- Published
- 2019
30. Secrecy Enhancing Scheme for Spatial Modulation Using Antenna Selection and Artificial Noise
- Author
-
Pingping Shang, Xue-Qin Jiang, Weicheng Yu, Kai Zhang, and Sooyoung Kim
- Subjects
Scheme (programming language) ,secrecy rate ,Computer science ,General Physics and Astronomy ,lcsh:Astrophysics ,02 engineering and technology ,Data_CODINGANDINFORMATIONTHEORY ,Interference (wave propagation) ,Article ,spatial modulation (SM) ,antenna selection ,0203 mechanical engineering ,Secrecy ,lcsh:QB460-466 ,0202 electrical engineering, electronic engineering, information engineering ,Electronic engineering ,lcsh:Science ,artificial noise (AN) ,computer.programming_language ,Computer Science::Cryptography and Security ,Computer Science::Information Theory ,Sequence ,020206 networking & telecommunications ,020302 automobile design & engineering ,physical layer security (PLS) ,channel state information (CSI) ,lcsh:QC1-999 ,Artificial noise ,lcsh:Q ,Imperfect ,Antenna (radio) ,computer ,lcsh:Physics ,Communication channel - Abstract
In this paper, we present a new secrecy-enhancing scheme for the spatial modulation (SM) system that accounts for imperfect channel state information (CSI). In the proposed scheme, two antennas are activated at the same time. One activated antenna transmits information symbols along with artificial noise (AN) optimized under the imperfect-CSI condition, while the other transmits another AN sequence. Because the AN sequences are generated by exploiting the imperfect CSI of the legitimate channel, they can be canceled only at the legitimate receiver, while the passive eavesdropper suffers from interference. We derive the secrecy rate of the proposed scheme in order to evaluate its performance. The numerical results presented in this paper verify that the proposed scheme achieves a better secrecy rate than the conventional scheme at the same effective data rate.
- Published
- 2019
31. Approximate Entropy and Sample Entropy: A Comprehensive Tutorial
- Author
-
Alexander Marshak and Alfonso Delgado-Bonal
- Subjects
Source code ,Theoretical computer science ,Computer science ,media_common.quotation_subject ,Computation ,approximate entropy ,General Physics and Astronomy ,lcsh:Astrophysics ,Review ,02 engineering and technology ,sample entropy ,Information theory ,Approximate entropy ,Chaos theory ,03 medical and health sciences ,0302 clinical medicine ,lcsh:QB460-466 ,0202 electrical engineering, electronic engineering, information engineering ,lcsh:Science ,media_common ,information theory ,lcsh:QC1-999 ,Sample entropy ,020201 artificial intelligence & image processing ,chaos theory ,lcsh:Q ,030217 neurology & neurosurgery ,lcsh:Physics - Abstract
Approximate Entropy and Sample Entropy are two algorithms for determining the regularity of a data series based on the existence of patterns. Despite their similarities, the theoretical ideas behind the two techniques are different but are usually ignored. This paper aims to be a complete guideline to the theory and application of the algorithms, explaining their characteristics in detail to researchers from different fields. While initially developed for physiological applications, both algorithms have since been used in fields such as medicine, telecommunications, economics, and Earth sciences. In this paper, we explain the theoretical aspects involving Information Theory and Chaos Theory, provide simple source code for their computation, and illustrate the techniques with a step-by-step example of how to use the algorithms properly. This paper is not intended to be an exhaustive review of previous applications of the algorithms but rather a comprehensive tutorial in which no previous knowledge is required to understand the methodology.
- Published
- 2019
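As a companion to the tutorial described above, Sample Entropy can be implemented directly from its definition. This is a straightforward sketch following the standard Richman and Moorman formulation, not the paper's published source code:

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B): B counts pairs of length-m templates within
    tolerance r (Chebyshev distance), A does the same for length m+1;
    self-matches are excluded."""
    n = len(series)

    def matches(length):
        # use the same number of templates (n - m) for both lengths,
        # following the usual Richman-Moorman convention
        templates = [series[i:i + length] for i in range(n - m)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    B = matches(m)
    A = matches(m + 1)
    return -math.log(A / B) if A > 0 and B > 0 else float("inf")
```

A perfectly periodic series gives SampEn = 0 (every length-m match extends to length m+1), while any unmatched extension raises the value above zero.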
32. An Algorithm of Image Encryption Using Logistic and Two-Dimensional Chaotic Economic Maps
- Author
-
A. A. Karawia, Fatemah S. Al-Ammar, A. Al-khedhairi, and Sameh S. Askar
- Subjects
Security analysis ,Computer science ,Chaotic ,General Physics and Astronomy ,68U10 ,Cryptography ,lcsh:Astrophysics ,02 engineering and technology ,Encryption ,Article ,logistic map ,lcsh:QB460-466 ,0202 electrical engineering, electronic engineering, information engineering ,Entropy (information theory) ,lcsh:Science ,security analysis ,chaotic economic map ,image encryption ,image decryption ,Computer Science::Cryptography and Security ,Pixel ,business.industry ,Key space ,020206 networking & telecommunications ,68P25 ,lcsh:QC1-999 ,020201 artificial intelligence & image processing ,lcsh:Q ,Logistic map ,business ,Algorithm ,94A60 ,lcsh:Physics - Abstract
In the literature, many image encryption algorithms have been constructed based on different chaotic maps. Those algorithms perform well in the cryptographic process, but some developments are still needed to enhance their security level. This paper introduces a new cryptographic algorithm that depends on a logistic map and a two-dimensional chaotic economic map. The robustness of the introduced algorithm is shown by applying it to several types of images. The implementation of the algorithm and its security are analyzed using statistical tests such as key-space sensitivity, pixel correlation, entropy, and contrast analysis. The results given in this paper and the comparisons performed lead us to conclude that the introduced algorithm is characterized by a large key space, sensitivity to the secret key, small correlation coefficients, high contrast, and acceptable entropy values. In addition, the experimental results show that our proposed algorithm resists statistical, differential, brute-force, and noise attacks.
- Published
- 2019
33. Optimized Adaptive Local Iterative Filtering Algorithm Based on Permutation Entropy for Rolling Bearing Fault Diagnosis
- Author
-
Yi Zhang, Cancan Yi, and Yong Lv
- Subjects
Computer science ,Feature extraction ,General Physics and Astronomy ,lcsh:Astrophysics ,02 engineering and technology ,Fault (power engineering) ,01 natural sciences ,Article ,law.invention ,law ,Aliasing ,0103 physical sciences ,lcsh:QB460-466 ,0202 electrical engineering, electronic engineering, information engineering ,permutation entropy ,lcsh:Science ,010301 acoustics ,Bearing (mechanical) ,particle swarm optimization ,020208 electrical & electronic engineering ,adaptive local iterative filtering ,Particle swarm optimization ,Filter (signal processing) ,fault diagnosis ,lcsh:QC1-999 ,Feature (computer vision) ,lcsh:Q ,Decomposition method (constraint satisfaction) ,Algorithm ,lcsh:Physics - Abstract
The early fault signatures of rolling bearings are weak, which makes feature extraction difficult. In order to diagnose and identify fault features in the bearing vibration signal, an adaptive local iterative filtering (ALIF) decomposition method based on permutation entropy is proposed in this paper. As a new time-frequency analysis method, adaptive local iterative filtering overcomes two main problems of traditional mode decomposition methods: modal aliasing and an uncertain number of components. However, ALIF still requires choices of its threshold parameters and the number of components. This paper therefore proposes an improved ALIF algorithm based on particle swarm optimization and permutation entropy. Firstly, particle swarm optimization is applied to select the threshold parameters and the number of components in ALIF. Then, permutation entropy is used to select the desired mode components. The effectiveness of the proposed method is verified on both numerical simulations and experimental bearing-failure data.
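Permutation entropy, the index used here to rank mode components, can be computed as in the sketch below (after Bandt and Pompe); the order m and delay tau are free parameters, and the ALIF/PSO machinery is not reproduced.

```python
# Minimal permutation-entropy sketch: Shannon entropy of the ordinal
# patterns of length m found in the series, normalized to [0, 1].
from collections import Counter
import math
import random

def permutation_entropy(x, m=3, tau=1, normalize=True):
    """Entropy of ordinal patterns of order m (delay tau) in series x."""
    patterns = Counter()
    for i in range(len(x) - (m - 1) * tau):
        window = tuple(x[i + j * tau] for j in range(m))
        # ordinal pattern: the ranks of the values inside the window
        patterns[tuple(sorted(range(m), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(m)) if normalize else h

regular = [i % 2 for i in range(200)]          # alternating, low complexity
random.seed(0)
noisy = [random.random() for _ in range(200)]  # white noise, high complexity
assert permutation_entropy(regular) < permutation_entropy(noisy)
```

A periodic signal concentrates on a few ordinal patterns and scores low, while noise spreads over all m! patterns and scores near 1, which is why the measure can separate informative modes from noise-dominated ones.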
- Published
- 2018
34. On the Classical Capacity of General Quantum Gaussian Measurement
- Author
-
Alexander S. Holevo
- Subjects
Gaussian maximizer ,Computer science ,Gaussian ,Concatenation ,Structure (category theory) ,General Physics and Astronomy ,Gaussian measurement channel ,lcsh:Astrophysics ,02 engineering and technology ,Quantum channel ,01 natural sciences ,Article ,010305 fluids & plasmas ,Classical capacity ,symbols.namesake ,lcsh:QB460-466 ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Statistical physics ,classical capacity ,lcsh:Science ,Quantum ,Computer Science::Information Theory ,accessible information ,Mode (statistics) ,lcsh:QC1-999 ,symbols ,lcsh:Q ,020201 artificial intelligence & image processing ,lcsh:Physics ,Gaussian ensemble ,Communication channel - Abstract
In this paper, we consider the classical capacity problem for Gaussian measurement channels. We establish Gaussianity of the average state of the optimal ensemble in the general case and discuss the Hypothesis of Gaussian Maximizers concerning the structure of the ensemble. Then, we consider the case of one mode in detail, including the dual problem of accessible information of a Gaussian ensemble. Our findings are relevant to practical situations in quantum communications where the receiver is Gaussian (say, a general-dyne detection) and concatenation of the Gaussian channel and the receiver can be considered as one Gaussian measurement channel. Our efforts in this and preceding papers are then aimed at establishing full Gaussianity of the optimal ensemble (usually taken as an assumption) in such schemes.
- Published
- 2021
35. Integrate Candidate Answer Extraction with Re-Ranking for Chinese Machine Reading Comprehension
- Author
-
Junjie Zeng, Sun Xiaoya, Li Xinmeng, and Qi Zhang
- Subjects
Computer science ,extraction-based machine reading comprehension ,General Physics and Astronomy ,lcsh:Astrophysics ,answer re-ranking ,02 engineering and technology ,computer.software_genre ,Article ,Field (computer science) ,Task (project management) ,03 medical and health sciences ,0302 clinical medicine ,lcsh:QB460-466 ,pre-training language model ,0202 electrical engineering, electronic engineering, information engineering ,Polysemy ,lcsh:Science ,self-attention ,business.industry ,Pipeline (software) ,lcsh:QC1-999 ,Comprehension ,Reading comprehension ,030221 ophthalmology & optometry ,lcsh:Q ,020201 artificial intelligence & image processing ,Artificial intelligence ,Language model ,business ,computer ,lcsh:Physics ,Word (computer architecture) ,Natural language processing - Abstract
Machine Reading Comprehension (MRC) research concerns how to endow machines with the ability to understand given passages and answer questions, which is a challenging problem in the field of natural language processing. To solve the Chinese MRC task efficiently, this paper proposes an Improved Extraction-based Reading Comprehension method with Answer Re-ranking (IERC-AR), consisting of a candidate answer extraction module and a re-ranking module. The candidate answer extraction module uses an improved pre-trained language model, RoBERTa-WWM, to generate precise word representations, which alleviates polysemy and captures Chinese word-level features well. The re-ranking module re-evaluates candidate answers based on a self-attention mechanism, which improves the accuracy of answer prediction. Traditional machine reading methods generally integrate different modules into a pipeline system, which leads to re-encoding problems and inconsistent data distributions between the training and testing phases; this paper therefore proposes an end-to-end model architecture for IERC-AR that reasonably integrates the candidate answer extraction and re-ranking modules. The experimental results on the Les MMRC dataset show that IERC-AR outperforms state-of-the-art MRC approaches.
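The self-attention step at the heart of the re-ranking module can be illustrated with a toy scaled dot-product layer; the embeddings, dimensions, and final scoring head below are invented for the demo and are not the paper's trained model.

```python
# Toy scaled dot-product self-attention over candidate-answer embeddings:
# each candidate attends to all candidates before being scored, which is
# the mechanism (not the trained weights) behind re-ranking.
import numpy as np

def self_attention(X):
    """X: (n, d) candidate embeddings -> attended (n, d) representations."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return weights @ X                               # convex mix of candidates

rng = np.random.default_rng(0)
candidates = rng.normal(size=(4, 8))   # 4 candidate answers, dimension 8
attended = self_attention(candidates)
rank_scores = attended.sum(axis=1)     # stand-in for a learned scoring head
best = int(np.argmax(rank_scores))     # index of the re-ranked top answer
```

Because each output row is a softmax-weighted mixture of the inputs, candidates that agree with many others are reinforced, which is the intuition for why mutual attention helps re-ranking.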
- Published
- 2021
36. An Extended FMEA Model Based on Cumulative Prospect Theory and Type-2 Intuitionistic Fuzzy VIKOR for the Railway Train Risk Prioritization
- Author
-
Yong Fu, Xinwang Liu, Yong Qin, Limin Jia, and Weizhong Wang
- Subjects
cumulative prospect theory ,Risk analysis ,Computer science ,0211 other engineering and technologies ,General Physics and Astronomy ,railway train operation ,lcsh:Astrophysics ,02 engineering and technology ,Article ,Bogie ,lcsh:QB460-466 ,0202 electrical engineering, electronic engineering, information engineering ,Entropy (information theory) ,Fuzzy number ,lcsh:Science ,Risk management ,021103 operations research ,Cumulative prospect theory ,business.industry ,lcsh:QC1-999 ,Reliability engineering ,lcsh:Q ,020201 artificial intelligence & image processing ,Train ,business ,Failure mode and effects analysis ,risk prioritization ,type-2 IFNs ,VIKOR ,lcsh:Physics - Abstract
This paper aims to overcome the limitations of traditional failure mode and effect analysis (FMEA) and to identify the crucial failure modes and components for railway train operation. To overcome the drawbacks of current FMEA, this paper proposes a novel risk prioritization method based on cumulative prospect theory and a type-2 intuitionistic fuzzy VIKOR approach. The type-2 intuitionistic fuzzy VIKOR combines the risk factors with their entropy weights. Triangular fuzzy number intuitionistic fuzzy numbers (TFNIFNs), applied as type-2 intuitionistic fuzzy numbers (type-2 IFNs), are adopted to depict the uncertainty in the risk analysis. Then, cumulative prospect theory is employed to model the FMEA team members' risk sensitivity and decision-making psychological behavior. Finally, a numerical example of the railway train bogie system is selected to illustrate the application and feasibility of the proposed extended FMEA model, and a comparison study is also performed to validate the practicability and effectiveness of the novel FMEA model. On this basis, this study can provide guidance for the risk prioritization of railway trains and indicate a direction for further research on risk management in rail traffic.
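The entropy-weight step mentioned in the abstract (combining risk factors with their entropy weights) can be sketched on crisp scores; the paper actually works with type-2 intuitionistic fuzzy numbers, which this toy version ignores, and the O/S/D scores below are illustrative.

```python
# Entropy weight method sketch: a risk factor whose scores vary a lot
# across failure modes is more discriminating and so earns more weight.
import math

def entropy_weights(matrix):
    """Rows = failure modes, columns = risk factors; returns factor weights."""
    m, n = len(matrix), len(matrix[0])
    weights = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [v / s for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        weights.append(1.0 - e)          # divergence of factor j
    total = sum(weights)
    return [w / total for w in weights]

# illustrative Occurrence / Severity / Detection scores for 4 failure modes
scores = [[7, 3, 5], [6, 4, 5], [2, 9, 5], [8, 2, 5]]
w = entropy_weights(scores)
assert abs(sum(w) - 1.0) < 1e-9
assert w[2] < w[0] and w[2] < w[1]  # constant factor carries least weight
```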
- Published
- 2020
37. Segmentation of High Dimensional Time-Series Data Using Mixture of Sparse Principal Component Regression Model with Information Complexity
- Author
-
Hamparsum Bozdogan and Yaojin Sun
- Subjects
Computer science ,information complexity criteria ,General Physics and Astronomy ,lcsh:Astrophysics ,high dimensional time-series ,Article ,Component (UML) ,lcsh:QB460-466 ,entropy-based robust EM ,Segmentation ,Time series ,lcsh:Science ,Fitness function ,business.industry ,Dimensionality reduction ,segmentation ,Sparse PCA ,sparse PCA ,Pattern recognition ,lcsh:QC1-999 ,mixture regression ,Variable (computer science) ,Principal component regression ,lcsh:Q ,Artificial intelligence ,business ,lcsh:Physics - Abstract
This paper presents a novel hybrid modeling method for the segmentation of high dimensional time-series data using the mixture of sparse principal components regression (MIX-SPCR) model with the information complexity (ICOMP) criterion as the fitness function. Our approach encompasses dimension reduction in high dimensional time-series data and, at the same time, determines the number of component clusters (i.e., the number of segments across the time series) and selects the best subset of predictors. A large-scale Monte Carlo simulation is performed to show that the MIX-SPCR model can successfully identify the correct structure of the time-series data. The MIX-SPCR model is also applied to high dimensional Standard & Poor's 500 (S&P 500) index data to uncover the time series' hidden structure and identify the structural change points. The approach presented in this paper determines both the relationships among the predictor variables and how various predictor variables contribute to the explanatory power of the response variable through cluster-wise sparsity settings.
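As a loose stand-in for the segmentation idea, the sketch below chooses the number of segments of a series by penalized model fit, using BIC in place of the paper's ICOMP criterion and piecewise-constant means in place of mixture sparse-PCR components; it only illustrates how an information criterion can select the segment count.

```python
# Choose the number of segments by minimizing a penalized fit criterion.
# BIC here is a hedged stand-in for ICOMP; real MIX-SPCR fits sparse PCR
# mixtures per segment rather than segment means.
import math

def bic_for_segments(x, k):
    """Split x into k equal segments, model each by its mean, score BIC."""
    n = len(x)
    rss = 0.0
    bounds = [round(i * n / k) for i in range(k + 1)]
    for a, b in zip(bounds, bounds[1:]):
        seg = x[a:b]
        mu = sum(seg) / len(seg)
        rss += sum((v - mu) ** 2 for v in seg)
    return n * math.log(rss / n + 1e-12) + k * math.log(n)

series = [0.0] * 50 + [5.0] * 50   # one obvious structural change point
best_k = min(range(1, 6), key=lambda k: bic_for_segments(series, k))
assert best_k == 2                 # the criterion recovers two segments
```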
- Published
- 2020
38. A Labeling Method for Financial Time Series Prediction Based on Trends
- Author
-
Dingming Wu, Xiaolong Wang, Buzhou Tang, Jingyong Su, and Shaocong Wu
- Subjects
Normalization (statistics) ,Standardization ,Computer science ,financial time series ,General Physics and Astronomy ,lcsh:Astrophysics ,02 engineering and technology ,computer.software_genre ,01 natural sciences ,Article ,stock prediction ,010305 fluids & plasmas ,lcsh:QB460-466 ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Preprocessor ,labeling method ,Time series ,lcsh:Science ,Financial services ,Randomness ,business.industry ,Deep learning ,deep learning ,lcsh:QC1-999 ,machine learning ,lcsh:Q ,020201 artificial intelligence & image processing ,Artificial intelligence ,Data mining ,Composite index ,business ,computer ,lcsh:Physics - Abstract
Time series prediction has been widely applied in the finance industry, in applications such as stock market price and commodity price forecasting. Machine learning methods have been widely used in financial time series prediction in recent years. How to label financial time series data, which determines the prediction accuracy of machine learning models and subsequently the final investment returns, is a hot topic. Existing labeling methods for financial time series mainly label data by comparing the current data with those of a short period in the future. However, financial time series data are typically non-linear with obvious short-term randomness. Therefore, these labeling methods fail to capture the continuous trend features of financial time series data, leading to a difference between their labeling results and real market trends. In this paper, a new labeling method called "continuous trend labeling" is proposed to address this problem. In the feature preprocessing stage, the paper proposes a method that avoids the look-ahead bias inherent in traditional data standardization or normalization. Then, with a detailed logical explanation, a definition of continuous trend labeling is proposed and an automatic labeling algorithm is given to extract the continuous trend features of financial time series data. Experiments on the Shanghai Composite Index, the Shenzhen Component Index, and several Chinese stocks show that the proposed labeling method outperforms state-of-the-art labeling methods in classification accuracy and other classification evaluation metrics. The results also show that deep learning models such as LSTM and GRU are more suitable for the prediction of financial time series data.
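One way to realize trend-based labels is a threshold ("zigzag") scheme like the sketch below: each point is labeled 1 or 0 according to the direction of the confirmed trend leg containing it. The threshold omega and the exact confirmation rules are our illustrative assumptions, not the paper's published algorithm.

```python
# Illustrative trend labeling: a reversal is confirmed once price moves
# against the running extreme by more than a fraction omega; every point
# on a leg then inherits that leg's direction (1 = up, 0 = down).
def trend_labels(prices, omega=0.05):
    labels = [0] * len(prices)
    up = None                       # trend direction unknown at the start
    ext, ext_i, start = prices[0], 0, 0
    for i in range(1, len(prices)):
        p = prices[i]
        if up is None:              # wait for the first confirmed move
            if p >= prices[0] * (1 + omega):
                up, ext, ext_i = True, p, i
            elif p <= prices[0] * (1 - omega):
                up, ext, ext_i = False, p, i
            continue
        if up:
            if p > ext:
                ext, ext_i = p, i   # extend the up-leg
            elif p <= ext * (1 - omega):
                for j in range(start, ext_i + 1):
                    labels[j] = 1   # confirm the finished up-leg
                start, up, ext, ext_i = ext_i, False, p, i
        else:
            if p < ext:
                ext, ext_i = p, i   # extend the down-leg
            elif p >= ext * (1 + omega):
                for j in range(start, ext_i + 1):
                    labels[j] = 0   # confirm the finished down-leg
                start, up, ext, ext_i = ext_i, True, p, i
    if up is not None:              # label the final, unconfirmed leg
        for j in range(start, len(prices)):
            labels[j] = 1 if up else 0
    return labels

prices = [1.0, 1.02, 1.1, 1.15, 1.05, 0.98, 0.95, 1.0, 1.08]
labels = trend_labels(prices)       # rise, reversal down, then up again
```

Unlike a fixed-horizon comparison, the label of each point depends on the whole leg it belongs to, which is the "continuous trend" intuition.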
- Published
- 2020
39. Method for Measuring the Information Content of Terrain from Digital Elevation Models
- Author
-
Chunhua Zheng, Lujin Hu, Jiping Liu, and Zongyi He
- Subjects
Measurement method ,Geospatial analysis ,Generalization ,Computer science ,General Physics and Astronomy ,Terrain ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,lcsh:Astrophysics ,Terrain rendering ,computer.software_genre ,information content ,GeneralLiterature_MISCELLANEOUS ,lcsh:QC1-999 ,Content (measure theory) ,terrain ,lcsh:QB460-466 ,lcsh:Q ,Digital elevation model ,Representation (mathematics) ,lcsh:Science ,computer ,lcsh:Physics ,Remote sensing ,ComputingMethodologies_COMPUTERGRAPHICS ,digital elevation models - Abstract
As digital terrain models are indispensable for visualizing and modeling geographic processes, terrain information content is useful for terrain generalization and representation. For terrain generalization, if the terrain information is considered, the generalized terrain may be of higher fidelity. In other words, the richer the terrain information at the terrain surface, the smaller the degree of terrain simplification. Terrain information content is also important for evaluating the quality of rendered terrain, e.g., the rendered web terrain tile service in Google Maps (Google Inc., Mountain View, CA, USA). However, a unified definition and measures for terrain information content have not been established. Therefore, in this paper, a definition and measures for terrain information content from Digital Elevation Model (DEM, i.e., a digital model or 3D representation of a terrain's surface) data are proposed, building on the theory of map information content, remote sensing image information content, and other geospatial information content. Information entropy is taken as the measure of terrain information content. Two experiments were carried out to verify the measurement method. One analyzes terrain information content across different geomorphic types; the results showed that the more complex the geomorphic type, the richer the terrain information content. The other analyzes terrain information content at different resolutions; the results showed that the finer the resolution, the richer the terrain information. Both experiments verified the reliability of the proposed measures of terrain information content.
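The entropy measure described above can be sketched for a toy DEM grid; the histogram binning of elevations below is our choice, not the paper's exact measure.

```python
# Terrain information content as Shannon entropy of the binned elevation
# distribution of a DEM grid: flat terrain carries no information, rugged
# terrain spreads elevations over many bins and scores higher.
import math
from collections import Counter

def terrain_entropy(dem, bins=8):
    """dem: 2-D list of elevations; entropy of the binned histogram (bits)."""
    values = [v for row in dem for v in row]
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0      # guard against a perfectly flat DEM
    hist = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in hist.values())

flat = [[100.0] * 8 for _ in range(8)]                      # featureless plain
rugged = [[(i * 7 + j * 13) % 50 for j in range(8)] for i in range(8)]
assert terrain_entropy(flat) == 0.0
assert terrain_entropy(rugged) > terrain_entropy(flat)
```

The same computation at coarser grid resolutions would use fewer cells and typically yield lower entropy, matching the paper's resolution experiment.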
- Published
- 2015
40. On the Application of Entropy Measures with Sliding Window for Intrusion Detection in Automotive In-Vehicle Networks
- Author
-
Gianmarco Baldini
- Subjects
Spoofing attack ,cybersecurity ,Computer science ,General Physics and Astronomy ,lcsh:Astrophysics ,Denial-of-service attack ,02 engineering and technology ,Intrusion detection system ,computer.software_genre ,Approximate entropy ,Article ,Rényi entropy ,Sliding window protocol ,lcsh:QB460-466 ,0202 electrical engineering, electronic engineering, information engineering ,Entropy (information theory) ,lcsh:Science ,Controller Area Network (CAN) ,information entropy ,020206 networking & telecommunications ,Intrusion Detection System (IDS) ,lcsh:QC1-999 ,Sample entropy ,Controller Area Network(CAN) ,lcsh:Q ,020201 artificial intelligence & image processing ,Data mining ,in-vehicle network ,computer ,lcsh:Physics - Abstract
The evolution of modern automobiles toward higher levels of connectivity and automation has also increased the need to mitigate potential cybersecurity risks. Researchers have proven in recent years that attacks on the in-vehicle networks of automotive vehicles are possible, and the research community has investigated various cybersecurity mitigation techniques and intrusion detection systems that can be adopted in the automotive sector. In comparison to conventional intrusion detection systems in large fixed networks and ICT infrastructures in general, in-vehicle systems have limited computing capabilities and other constraints related to data transfer and the management of cryptographic systems. In addition, it is important that attacks are detected within a short time frame, as cybersecurity attacks on vehicles can lead to safety hazards. This paper proposes an approach for intrusion detection of cybersecurity attacks in in-vehicle networks that takes the constraints listed above into consideration. The approach applies an information entropy-based method over a sliding window, which is time-efficient, does not require the implementation of complex cryptographic systems, and still provides very high detection accuracy. Different entropy measures are used in the evaluation: Shannon Entropy, Renyi Entropy, Sample Entropy, Approximate Entropy, Permutation Entropy, Dispersion Entropy, and Fuzzy Entropy. This paper evaluates the impact of the hyperparameters present in the definition of these entropy measures on a very large public data set of CAN-bus traffic with millions of CAN-bus messages and four different types of attacks: denial of service, fuzzy attack, and two spoofing attacks related to RPM and gear information. The sliding window approach in combination with entropy measures can detect attacks in a time-efficient way and with great accuracy for specific choices of the hyperparameters and entropy measures.
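The sliding-window entropy idea can be sketched for CAN arbitration IDs: a flooding attack that repeats a single ID collapses the window's Shannon entropy. The IDs, window size, and threshold below are illustrative, not values from the paper's data set.

```python
# Sliding-window Shannon entropy over CAN arbitration IDs: normal traffic
# mixes many IDs (high entropy); a DoS burst repeating one ID drives the
# window entropy toward zero, which is flagged as an intrusion.
import math
from collections import Counter

def window_entropy(ids):
    n = len(ids)
    return -sum((c / n) * math.log2(c / n) for c in Counter(ids).values())

def detect(traffic, window=64, threshold=1.0):
    """Flag window start indices whose ID entropy falls below threshold."""
    return [i for i in range(0, len(traffic) - window + 1, window)
            if window_entropy(traffic[i:i + window]) < threshold]

# deterministic round-robin of four IDs stands in for normal traffic
normal = [[0x110, 0x120, 0x244, 0x316][i % 4] for i in range(128)]
attack = [0x000] * 64                  # DoS burst of one repeated ID
alerts = detect(normal + attack)
assert alerts == [128]                 # only the attack window is flagged
```

In practice the threshold (and any Renyi/sample/permutation-entropy variant) would be tuned on attack-free traffic, which is the hyperparameter study the paper performs.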
- Published
- 2020
41. The Convergence of a Cooperation Markov Decision Process System
- Author
-
Xiaoling Mo, Zufeng Fu, and Daoyun Xu
- Subjects
reinforcement learning ,Mathematical optimization ,Computer science ,multi-agent ,media_common.quotation_subject ,General Physics and Astronomy ,lcsh:Astrophysics ,02 engineering and technology ,Article ,Bellman equation ,lcsh:QB460-466 ,0502 economics and business ,Convergence (routing) ,cooperation markov decision process ,0202 electrical engineering, electronic engineering, information engineering ,Reinforcement learning ,Initial value problem ,lcsh:Science ,Function (engineering) ,media_common ,050210 logistics & transportation ,optimal pair of strategies ,Markov chain ,05 social sciences ,lcsh:QC1-999 ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,lcsh:Q ,020201 artificial intelligence & image processing ,Markov decision process ,Value (mathematics) ,lcsh:Physics - Abstract
In a general Markov decision process system, only a single agent's learning evolution is considered. However, considering the learning evolution of a single agent has limitations in many problems, and more and more applications involve multiple agents. Multi-agent environments are of two types: cooperation and game. Therefore, this paper introduces a Cooperation Markov Decision Process (CMDP) system with two agents, which is suitable for the learning evolution of cooperative decisions between two agents. It is further found that the value function in the CMDP system converges, and that the convergence value is independent of the choice of the initial value function. This paper presents an algorithm for finding the optimal strategy pair (πk0, πk1) in the CMDP system, whose fundamental task is to find an optimal strategy pair and form the evolutionary system CMDP(πk1). Finally, an example is given to support the theoretical results.
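The convergence claim can be illustrated with a toy two-agent cooperative MDP in which both agents share one reward and a joint Bellman backup is iterated to a fixed point; the states, dynamics, and rewards below are invented for the demo, and the CMDP formalism itself is richer.

```python
# Toy two-agent cooperative MDP: value iteration over joint actions.
# The backup is a gamma-contraction, so it converges to a unique fixed
# point regardless of the initial value function, mirroring the paper's
# convergence result.
STATES = [0, 1]
ACTIONS = [(a0, a1) for a0 in (0, 1) for a1 in (0, 1)]  # joint actions
GAMMA = 0.9

def step(s, joint):
    """Deterministic toy dynamics: cooperation (matching actions) pays."""
    a0, a1 = joint
    reward = 1.0 if a0 == a1 else 0.0
    return (s + a0 + a1) % 2, reward

def value_iteration(tol=1e-10):
    v = {s: 0.0 for s in STATES}        # any initial value function works
    while True:
        new = {}
        for s in STATES:
            new[s] = max(r + GAMMA * v[s2]
                         for s2, r in (step(s, ja) for ja in ACTIONS))
        if max(abs(new[s] - v[s]) for s in STATES) < tol:
            return new
        v = new

v = value_iteration()
# matched actions earn reward 1 forever, so v* = 1 / (1 - GAMMA) = 10
assert abs(v[0] - 10.0) < 1e-6 and abs(v[1] - 10.0) < 1e-6
```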
- Published
- 2020
42. Towards a More Realistic Citation Model: The Key Role of Research Team Sizes
- Author
-
Staša Milojević
- Subjects
FOS: Computer and information sciences ,preferential attachment ,Computer science ,FOS: Physical sciences ,General Physics and Astronomy ,lcsh:Astrophysics ,citation model ,050905 science studies ,Preferential attachment ,Computer Science::Digital Libraries ,Article ,Interpretation (model theory) ,cumulative advantage ,team science ,lcsh:QB460-466 ,Digital Libraries (cs.DL) ,lcsh:Science ,Instrumentation and Methods for Astrophysics (astro-ph.IM) ,GeneralLiterature_REFERENCE(e.g.,dictionaries,encyclopedias,glossaries) ,Social and Information Networks (cs.SI) ,05 social sciences ,Visibility (geometry) ,Computer Science - Digital Libraries ,Computer Science - Social and Information Networks ,Degree (music) ,lcsh:QC1-999 ,Field (geography) ,Team science ,Key (cryptography) ,lcsh:Q ,0509 other social sciences ,Astrophysics - Instrumentation and Methods for Astrophysics ,050904 information & library sciences ,Citation ,Mathematical economics ,lcsh:Physics - Abstract
We propose a new citation model which builds on the existing models that explicitly or implicitly include "direct" and "indirect" (learning about a cited paper's existence from references in another paper) citation mechanisms. Our model departs from the usual, unrealistic assumption of uniform probability of direct citation, in which initial differences in citation arise purely randomly. Instead, we demonstrate that a two-mechanism model in which the probability of direct citation is proportional to the number of authors on a paper (team size) is able to reproduce the empirical citation distributions of articles published in the field of astronomy remarkably well, and at different points in time. Interpretation of our model is that the intrinsic citation capacity, and hence the initial visibility of a paper, will be enhanced when more people are intimately familiar with some work, favoring papers from larger teams. While the intrinsic citation capacity cannot depend only on the team size, our model demonstrates that it must be to some degree correlated with it, and distributed in a similar way, i.e., having a power-law tail. Consequently, our team-size model qualitatively explains the existence of a correlation between the number of citations and the number of authors on a paper. Published in the journal Entropy; open access article available at https://www.mdpi.com/journal/entropy
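The two-mechanism model can be mimicked in a few lines: each paper's intrinsic citation capacity is proportional to its team size, and citations arrive either "directly" (capacity-proportional) or "indirectly" (preferential attachment on existing citations). The mixture weight and team-size distribution below are illustrative, not the fitted values.

```python
# Toy simulation of the team-size citation model: direct citations are
# drawn proportionally to team size, indirect ones by preferential
# attachment on accumulated citations (with +1 smoothing).
import random
random.seed(42)

N_PAPERS, N_EVENTS = 300, 5000
team_sizes = [random.choice([1, 2, 3, 5, 8, 13]) for _ in range(N_PAPERS)]
citations = [0] * N_PAPERS

for _ in range(N_EVENTS):
    if random.random() < 0.7:
        # direct: proportional to intrinsic capacity (team size)
        i = random.choices(range(N_PAPERS), weights=team_sizes)[0]
    else:
        # indirect: preferential attachment on existing citations
        i = random.choices(range(N_PAPERS),
                           weights=[c + 1 for c in citations])[0]
    citations[i] += 1

big = [c for c, t in zip(citations, team_sizes) if t >= 8]
small = [c for c, t in zip(citations, team_sizes) if t <= 2]
# large teams end up with more citations on average, the correlation
# the model is built to explain
assert sum(big) / len(big) > sum(small) / len(small)
```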
- Published
- 2020
43. Enhancing Edge Attack Strategy via an OWA Operator-Based Ensemble Design in Real-World Networks
- Author
-
Hongfu Liu, Yuan Feng, Yuyuan Yang, Baoan Ren, and Chengyi Zeng
- Subjects
Attack strategy ,Computer science ,media_common.quotation_subject ,General Physics and Astronomy ,lcsh:Astrophysics ,Complex network ,lcsh:QC1-999 ,Article ,Adaptability ,OWA operator ,Important research ,network disintegration ,lcsh:QB460-466 ,lcsh:Q ,edge attack strategy ,structural similarity index ,lcsh:Science ,Algorithm ,lcsh:Physics ,media_common - Abstract
Network disintegration has long been an important research hotspot in complex networks. From the perspective of node attack, researchers have devoted considerable effort to this field and carried out numerous works. In contrast, research on edge attack strategies is insufficient. This paper comprehensively evaluates the disintegration effect of each structural similarity index when applied in the weighted-edge attack model. Experimental results show that an edge attack strategy based on a single similarity index exhibits limited stability and adaptability. Thus, motivated by obtaining a stable disintegration effect, this paper designs an edge attack strategy based on the ordered weighted averaging (OWA) operator. The final experimental results show that the edge attack strategy proposed in this paper not only achieves a more stable disintegration effect on eight real-world networks, but also significantly improves the disintegration effect when applied to a single network in comparison with the original similarity indices.
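The OWA operator underlying the proposed ensemble applies its weights to the sorted scores rather than to particular inputs, as the sketch below shows; the example weights and scores are arbitrary.

```python
# Ordered weighted averaging (OWA): weights attach to sorted positions,
# not to specific similarity indices, so the aggregate depends only on
# the multiset of scores.
def owa(scores, weights):
    """OWA aggregation of index scores with position weights (sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9 and len(scores) == len(weights)
    return sum(w * s for w, s in zip(weights, sorted(scores, reverse=True)))

# three structural-similarity indices scoring one edge, combined by OWA
edge_scores = [0.9, 0.2, 0.4]
w = [0.5, 0.3, 0.2]
value = owa(edge_scores, w)
assert abs(value - (0.5 * 0.9 + 0.3 * 0.4 + 0.2 * 0.2)) < 1e-12
# permuting the inputs does not change the OWA aggregate
assert owa([0.2, 0.9, 0.4], w) == value
```

In the attack strategy, each edge would receive such an aggregate score, and edges would then be removed in decreasing order of it.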
- Published
- 2020
44. A Denoising Method for Fiber Optic Gyroscope Based on Variational Mode Decomposition and Beetle Swarm Antenna Search Algorithm
- Author
-
Menghao Wu, Chao Qin, Fan Zhang, Pengfei Wang, Guangchun Li, and Yanbin Gao
- Subjects
signal denoising ,fiber optic gyroscope ,Computer science ,Noise reduction ,variational mode decomposition ,General Physics and Astronomy ,lcsh:Astrophysics ,Probability density function ,02 engineering and technology ,01 natural sciences ,Signal ,Article ,beetle swarm antenna search algorithm ,Search algorithm ,lcsh:QB460-466 ,permutation entropy ,0202 electrical engineering, electronic engineering, information engineering ,lcsh:Science ,Inertial navigation system ,Noise (signal processing) ,010401 analytical chemistry ,020206 networking & telecommunications ,Fibre optic gyroscope ,lcsh:QC1-999 ,0104 chemical sciences ,Hausdorff distance ,lcsh:Q ,Algorithm ,lcsh:Physics - Abstract
The fiber optic gyroscope (FOG) is one of the important components of inertial navigation systems (INS). In order to improve the accuracy of the INS, it is necessary to suppress the random error of the FOG signal. In this paper, a variational mode decomposition (VMD) denoising method based on the beetle swarm antenna search (BSAS) algorithm is proposed to reduce the noise in the FOG signal. Firstly, the BSAS algorithm is introduced in detail. Then, the permutation entropy of the band-limited intrinsic mode functions (BLIMFs) is taken as the optimization index, and two key parameters of the VMD algorithm, the decomposition mode number K and the quadratic penalty factor α, are optimized using the BSAS algorithm. Next, a new method based on the Hausdorff distance (HD) between the probability density function (PDF) of each BLIMF and that of the original signal is proposed to determine the relevant modes. Finally, the selected BLIMF components are reconstructed to obtain the denoised signal. The simulation results show that the proposed scheme outperforms existing schemes in noise reduction performance, and two experiments further demonstrate its superiority in FOG noise reduction compared with other schemes.
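The HD-based mode-selection step can be sketched as follows: estimate each component's probability density with a histogram and compare it to the original signal's density via the Hausdorff distance between the two curves viewed as point sets. The binning and the Gaussian test signals are our assumptions, not the paper's data.

```python
# Mode relevance via Hausdorff distance between histogram-estimated PDFs:
# a component distributed like the original signal has a small distance,
# a shifted/different component a large one.
import math
import random

def histogram_pdf(x, bins=16, lo=-4.0, hi=4.0):
    """Return the PDF estimate as a point set [(bin_center, density)]."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in x:
        k = min(max(int((v - lo) / width), 0), bins - 1)
        counts[k] += 1
    n = len(x)
    return [(lo + (k + 0.5) * width, c / (n * width))
            for k, c in enumerate(counts)]

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    def d(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    def h(s, t):
        return max(min(d(p, q) for q in t) for p in s)
    return max(h(a, b), h(b, a))

random.seed(3)
signal = [random.gauss(0, 1) for _ in range(2000)]
similar = [random.gauss(0, 1) for _ in range(2000)]   # same distribution
shifted = [random.gauss(2, 1) for _ in range(2000)]   # different "mode"
pdf = histogram_pdf(signal)
assert hausdorff(pdf, histogram_pdf(similar)) < hausdorff(pdf, histogram_pdf(shifted))
```

Components below a distance threshold would be kept and summed to reconstruct the denoised signal.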
- Published
- 2020
45. A Novel Hybrid Secure Image Encryption Based on the Shuffle Algorithm and the Hidden Attractor Chaos System
- Author
-
Xintao Duan, Xin Jin, Hang Jin, and Yuanyuan Ma
- Subjects
Differential cryptanalysis ,Computer science ,chaotic system ,DNA sequence ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Chaotic ,General Physics and Astronomy ,lcsh:Astrophysics ,02 engineering and technology ,Encryption ,01 natural sciences ,Article ,lcsh:QB460-466 ,Computer Science::Multimedia ,0103 physical sciences ,Attractor ,0202 electrical engineering, electronic engineering, information engineering ,lcsh:Science ,010301 acoustics ,security analysis ,Computer Science::Cryptography and Security ,Shuffling ,business.industry ,Key space ,020206 networking & telecommunications ,Plaintext ,image encryption ,lcsh:QC1-999 ,Computer Science::Computer Vision and Pattern Recognition ,Known-plaintext attack ,lcsh:Q ,shuffle algorithm ,business ,Algorithm ,lcsh:Physics - Abstract
Aiming at the problems of small key space, insecure encryption structure, and easy cracking in existing image encryption algorithms that combine chaotic systems and DNA sequences, this paper proposes an image encryption algorithm based on a hidden attractor chaotic system and a shuffling algorithm. Firstly, the chaotic sequence generated by the hidden attractor chaotic system is used to encrypt the image. The shuffling algorithm is then used to scramble the image, and finally, DNA sequence operations are used to diffuse the pixel values of the image. Experimental results show that the key space of the scheme reaches 2^327 and is very sensitive to keys. The histogram of encrypted images is evenly distributed. The correlation coefficient of adjacent pixels is close to 0. The entropy values of encrypted images are all close to eight, and the unified average changing intensity (UACI) and number of pixels change rate (NPCR) values are close to their ideal values. All-white and all-black image experiments meet the requirements. Experimental results show that the encryption scheme in this paper can effectively resist exhaustive attacks, statistical attacks, differential cryptanalysis, known-plaintext and chosen-plaintext attacks, and noise attacks. These results show that the system has good encryption performance and that the proposed scheme is useful and practical for communication and can be applied to the field of image encryption.
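The NPCR and UACI metrics quoted above can be computed directly; the two random byte lists below stand in for ciphertext images and are only meant to show the ideal-value ballpark (about 99.6% and 33.46% for independent uniform bytes).

```python
# NPCR (number of pixels change rate) and UACI (unified average changing
# intensity) for two equal-size grayscale images given as 0-255 values.
import random

def npcr(c1, c2):
    """Fraction of positions whose pixel values differ (ideal ~99.6%)."""
    return sum(a != b for a, b in zip(c1, c2)) / len(c1)

def uaci(c1, c2):
    """Mean absolute intensity change normalized by 255 (ideal ~33.46%)."""
    return sum(abs(a - b) for a, b in zip(c1, c2)) / (255 * len(c1))

random.seed(7)
img1 = [random.randrange(256) for _ in range(4096)]
img2 = [random.randrange(256) for _ in range(4096)]  # stand-in ciphertexts
assert 0.95 < npcr(img1, img2) <= 1.0
assert 0.30 < uaci(img1, img2) < 0.37
```

A strong cipher should reach these values when the two ciphertexts come from plaintexts differing in a single pixel, which is what the differential tests in the paper measure.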
- Published
- 2020
46. Straggler-Aware Distributed Learning: Communication–Computation Latency Trade-Off
- Author
-
Sennur Ulukus, Deniz Gunduz, Emre Ozfatura, and Commission of the European Communities
- Subjects
Signal Processing (eess.SP) ,FOS: Computer and information sciences ,Coded computation ,Distributed computation ,Gradient coding ,Gradient descent ,Machine learning ,Parallel computing ,Polynomial codes ,Computer Science - Machine Learning ,Computer science ,Fluids & Plasmas ,Computer Science - Information Theory ,Computation ,General Physics and Astronomy ,lcsh:Astrophysics ,02 engineering and technology ,010501 environmental sciences ,01 natural sciences ,Article ,gradient coding ,Machine Learning (cs.LG) ,Server ,lcsh:QB460-466 ,FOS: Electrical engineering, electronic engineering, information engineering ,0202 electrical engineering, electronic engineering, information engineering ,Electrical Engineering and Systems Science - Signal Processing ,lcsh:Science ,01 Mathematical Sciences ,gradient descent ,0105 earth and related environmental sciences ,02 Physical Sciences ,parallel computing ,Information Theory (cs.IT) ,distributed computation ,020206 networking & telecommunications ,lcsh:QC1-999 ,polynomial codes ,machine learning ,Computer Science - Distributed, Parallel, and Cluster Computing ,Computer engineering ,coded computation ,lcsh:Q ,Distributed, Parallel, and Cluster Computing (cs.DC) ,Distributed learning ,lcsh:Physics ,Coding (social sciences) - Abstract
When gradient descent (GD) is scaled to many parallel workers for large scale machine learning problems, its per-iteration computation time is limited by the straggling workers. Straggling workers can be tolerated by assigning redundant computations and coding across data and computations, but in most existing schemes, each non-straggling worker transmits one message per iteration to the parameter server (PS) after completing all its computations. Imposing such a limitation results in two main drawbacks: over-computation due to inaccurate prediction of the straggling behaviour, and under-utilization due to treating workers as straggler/non-straggler and discarding partial computations carried out by stragglers. In this paper, to overcome these drawbacks, we consider multi-message communication (MMC) by allowing multiple computations to be conveyed from each worker per iteration, and design straggler avoidance techniques accordingly. Then, we analyze how the proposed designs can be employed efficiently to seek a balance between the computation and communication latency to minimize the overall latency. Furthermore, through extensive simulations, both model-based and real implementation on Amazon EC2 servers, we identify the advantages and disadvantages of these designs in different settings, and demonstrate that MMC can help improve upon existing straggler avoidance schemes. This paper was presented in part at the 2019 IEEE International Symposium on Information Theory (ISIT) in Paris, France, and at the 2019 IEEE Data Science Workshop in Minneapolis, USA
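The computation-communication trade-off can be illustrated with a toy latency model: an uncoded scheme must wait for the slowest of n workers, while a coded scheme that assigns n/(n-s) units of redundant work per worker only needs the fastest n-s to finish. The delay distribution and redundancy rule below are illustrative assumptions, not the paper's model, and the sketch ignores the multi-message refinement.

```python
# Toy straggler model: per-worker time = work units + heavy-tailed delay.
# Uncoded GD waits for all n workers; a coded scheme does redundant work
# but can stop after the fastest n - s workers report.
import random
random.seed(0)

def worker_times(n, work):
    """Completion times with an illustrative heavy-tailed straggling delay."""
    return [work * (1.0 + random.expovariate(2.0) ** 2) for _ in range(n)]

def uncoded_latency(n=20):
    # each worker does 1 unit; the PS must wait for the slowest worker
    return max(worker_times(n, 1.0))

def coded_latency(n=20, s=4):
    # each worker does n/(n-s) units of redundant work; the PS only needs
    # the fastest n - s workers to finish
    times = sorted(worker_times(n, n / (n - s)))
    return times[n - s - 1]

trials = 200
avg_uncoded = sum(uncoded_latency() for _ in range(trials)) / trials
avg_coded = sum(coded_latency() for _ in range(trials)) / trials
assert avg_coded < avg_uncoded  # tolerating stragglers beats waiting for all
```

MMC sits between these extremes: workers stream partial results, so even stragglers' completed work contributes instead of being discarded.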
- Published
- 2020
47. Cross-Domain Recommendation Based on Sentiment Analysis and Latent Feature Mapping
- Author
-
Yongpeng Wang, Hong Yu, Guoyin Wang, and Yongfang Xie
- Subjects
Computer science ,Process (engineering) ,media_common.quotation_subject ,InformationSystems_INFORMATIONSTORAGEANDRETRIEVAL ,General Physics and Astronomy ,lcsh:Astrophysics ,02 engineering and technology ,Recommender system ,Latent Dirichlet allocation ,Article ,Domain (software engineering) ,non-linear mapping ,symbols.namesake ,020204 information systems ,lcsh:QB460-466 ,0202 electrical engineering, electronic engineering, information engineering ,lcsh:Science ,Function (engineering) ,latent sentiment review feature ,media_common ,Information retrieval ,Orientation (computer vision) ,Sentiment analysis ,lcsh:QC1-999 ,sentiment analysis ,Multilayer perceptron ,symbols ,lcsh:Q ,020201 artificial intelligence & image processing ,cross-domain recommendation ,lcsh:Physics - Abstract
Cross-domain recommendation is a promising approach in recommender systems: relatively rich information from the source domain is used to improve recommendation accuracy in the target domain. Most existing methods consider users' rating information in different domains, the label information of users and items, and users' reviews of items. However, they do not effectively use the latent sentiment information to find an accurate mapping of latent review features between domains. User reviews usually include the user's subjective views, which reflect the user's preferences and sentiment tendencies toward the various attributes of an item. Therefore, to solve the cold-start problem in the recommendation process, this paper proposes a cross-domain recommendation algorithm based on sentiment analysis and latent feature mapping (CDR-SAFM), which combines the sentiment information implicit in user reviews across domains. Unlike previous sentiment research, this paper divides sentiment into three categories, namely positive, negative, and neutral, based on three-way decision ideas, by conducting sentiment analysis on user review information. Furthermore, Latent Dirichlet Allocation (LDA) is used to model the user's semantic orientation and generate the latent sentiment review features. Moreover, a Multilayer Perceptron (MLP) is used to obtain the cross-domain non-linear mapping function that transfers the user's sentiment review features. Finally, this paper demonstrates the effectiveness of the proposed CDR-SAFM framework by comparing it with existing recommendation algorithms in a cross-domain scenario on the Amazon dataset.
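Two of the ingredients described above can be sketched in miniature: a three-way decision split of a sentiment score, and a learned mapping from source-domain to target-domain latent features. The thresholds, the 1-D linear form (a stand-in for the paper's MLP), and the training pairs are all illustrative assumptions:

```python
def three_way_sentiment(score, pos_threshold=0.6, neg_threshold=0.4):
    """Three-way decision on a sentiment score in [0, 1]:
    positive / negative / neutral (thresholds are assumed, not the paper's)."""
    if score >= pos_threshold:
        return "positive"
    if score <= neg_threshold:
        return "negative"
    return "neutral"

def fit_feature_mapping(src, tgt, lr=0.05, epochs=2000):
    """Toy 1-D stand-in for the MLP mapping: learn y ~ w*x + b from
    overlapping users' (source, target) latent-feature pairs by
    gradient descent on squared error."""
    w, b = 0.0, 0.0
    n = len(src)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(src, tgt):
            err = w * x + b - y
            gw += 2.0 * err * x / n
            gb += 2.0 * err / n
        w -= lr * gw
        b -= lr * gb
    return w, b
```

Once fitted, the mapping lets a cold-start user's source-domain feature be projected into the target domain, where standard recommendation can proceed.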
- Published
- 2020
48. A Low Complexity Near-Optimal Iterative Linear Detector for Massive MIMO in Realistic Radio Channels of 5G Communication Systems
- Author
-
Mohammed H. Alsharif, Mahmoud A. M. Albreem, and Sunghwan Kim
- Subjects
Computer science ,Iterative method ,Computation ,MIMO ,detection ,General Physics and Astronomy ,lcsh:Astrophysics ,02 engineering and technology ,Communications system ,Article ,Base station ,lcsh:QB460-466 ,massive MIMO ,0202 electrical engineering, electronic engineering, information engineering ,lcsh:Science ,iterative matrix inversion methods ,020208 electrical & electronic engineering ,Detector ,020206 networking & telecommunications ,Inversion (meteorology) ,lcsh:QC1-999 ,QuaDRiGa ,Computer Science::Programming Languages ,lcsh:Q ,Algorithm ,lcsh:Physics ,5G ,Communication channel - Abstract
Massive multiple-input multiple-output (M-MIMO) is a substantial pillar of fifth-generation (5G) mobile communication systems. Although the maximum likelihood (ML) detector attains optimal performance, it has exponential complexity. Linear detectors are one of the alternatives and are comparatively simple to implement. Unfortunately, they sustain a considerable performance loss in highly loaded systems. They also involve a matrix inversion, which is not hardware-friendly. In addition, if the channel matrix is singular or nearly singular, the system is classified as ill-conditioned and the signal cannot be equalized. To defeat the inherent noise enhancement, iterative matrix inversion methods are used in the detectors' design, where an approximate matrix inversion replaces the exact computation. In this paper, we study a linear detector based on iterative matrix inversion methods in realistic radio channels generated by the QUAsi Deterministic RadIo channel GenerAtor (QuaDRiGa) package. Numerical results illustrate that the conjugate-gradient (CG) method is numerically robust and obtains the best performance with the lowest number of multiplications. In the QuaDRiGa environment, iterative methods require a large number of iterations n to obtain satisfactory performance. This paper also shows that when the ratio between the number of user antennas and base station (BS) antennas (β) is close to 1, iterative matrix inversion methods do not attain good detector performance.
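A minimal sketch of the CG step at the heart of such a detector: solving the symmetric positive-definite linear system A x = b (for MMSE detection, A = HᵀH + σ²I and b = Hᵀy) without forming A⁻¹ explicitly. The plain-Python linear algebra and the 2×2 example are illustrative, not the paper's QuaDRiGa setup:

```python
def matvec(A, x):
    """Matrix-vector product for a list-of-rows matrix."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def conjugate_gradient(A, b, iters=20, tol=1e-12):
    """Solve A x = b for symmetric positive-definite A, e.g. the MMSE
    filtering matrix H^T H + sigma^2 I, avoiding explicit inversion."""
    n = len(b)
    x = [0.0] * n
    r = b[:]          # residual b - A x, with x = 0
    p = r[:]          # search direction
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = matvec(A, p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        beta = rs_new / rs
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

Each CG iteration costs one matrix-vector product, which is why truncating the iteration count trades detection accuracy against the multiplication count discussed in the abstract.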
- Published
- 2020
49. Fast and Efficient Image Encryption Algorithm Based on Modular Addition and SPD
- Author
-
Sohaib Manzoor, Sajid Khan, Khushbu Khalid Butt, and Guo Hui Li
- Subjects
modular addition ,scrambling plus diffusion (SPD) ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,General Physics and Astronomy ,Binary number ,lcsh:Astrophysics ,security ,02 engineering and technology ,Encryption ,01 natural sciences ,Article ,Scrambling ,010309 optics ,SHA-512 ,Histogram ,lcsh:QB460-466 ,Computer Science::Multimedia ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Entropy (information theory) ,lcsh:Science ,Computer Science::Cryptography and Security ,Pixel ,business.industry ,Plain text ,Hexadecimal ,020207 software engineering ,image encryption ,computer.file_format ,lcsh:QC1-999 ,Computer Science::Computer Vision and Pattern Recognition ,lcsh:Q ,entropy ,business ,Algorithm ,computer ,lcsh:Physics - Abstract
Bit-level and pixel-level methods are two classes of image encryption operations, describing the smallest processing elements manipulated in diffusion and permutation, respectively. Most pixel-level permutation methods merely alter the positions of pixels, resulting in similar histograms for the original and permuted images. Bit-level permutation methods, by contrast, can change the histogram of the image, but they are usually avoided because bit-level computation makes them more time-consuming than other permutation techniques. In this paper, we introduce a new image encryption algorithm that uses binary bit-plane scrambling and a scrambling-plus-diffusion (SPD) technique on the bit-planes of a plain image, based on a card game trick. Integer values of the hexadecimal SHA-512 key are also used, along with adaptive block-based modular addition of pixels, to encrypt the images. To prove the first-rate encryption performance of our proposed algorithm, security analyses are provided in this paper. Simulations and other results confirm the robustness of the proposed image encryption algorithm against many well-known attacks, in particular brute-force attacks, known/chosen plain-text attacks, occlusion attacks, differential attacks, and gray-value difference attacks, among others.
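The modular-addition diffusion idea can be illustrated with a toy 1-D chain over 8-bit pixels. The key schedule and chaining rule below are simplified assumptions, not the paper's adaptive block-based scheme:

```python
def modular_add_diffuse(pixels, key_bytes):
    """Toy forward diffusion: each ciphertext pixel is the modular sum of
    the plain pixel, a key byte, and the previous ciphertext pixel, so a
    one-pixel change in the plain image propagates down the chain."""
    out, prev = [], 0
    for i, p in enumerate(pixels):
        c = (p + key_bytes[i % len(key_bytes)] + prev) % 256
        out.append(c)
        prev = c
    return out

def modular_add_restore(cipher, key_bytes):
    """Exact inverse of modular_add_diffuse (modular subtraction)."""
    out, prev = [], 0
    for i, c in enumerate(cipher):
        p = (c - key_bytes[i % len(key_bytes)] - prev) % 256
        out.append(p)
        prev = c  # chain on the *ciphertext*, matching the forward pass
    return out
```

Because decryption chains on the received ciphertext, the inverse is exact; in the paper's setting the key bytes would be derived from the SHA-512 hash rather than supplied directly.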
- Published
- 2020
50. Identification of Functional Bioprocess Model for Recombinant E. Coli Cultivation Process
- Author
-
Urniežius, Renaldas and Survyla, Arnas
- Subjects
0106 biological sciences ,Gray box testing ,Kullback–Leibler divergence ,Artificial neural network ,Computer science ,Estimation theory ,relative entropy ,System identification ,General Physics and Astronomy ,02 engineering and technology ,01 natural sciences ,Article ,stoichiometry ,numerical convex optimization ,Identification (information) ,Robustness (computer science) ,010608 biotechnology ,0202 electrical engineering, electronic engineering, information engineering ,gray box ,microbial cultivation ,020201 artificial intelligence & image processing ,Bioprocess ,parameter estimation ,Biological system - Abstract
The purpose of this study is to introduce an improved Luedeking–Piret model that represents a structurally simple biomass concentration approach. The developed routine provides acceptable accuracy when fitting experimental data that incorporate the target protein concentration of Escherichia coli culture BL21 (DE3) pET28a in fed-batch processes. This paper presents system identification, biomass, and product parameter fitting routines, starting from their roots of origin to the entropy-related development, characterized by robustness and simplicity. A single tuning coefficient allows for the selection of an optimization criterion that serves equally well for higher and lower biomass concentrations. The idea of the paper is to demonstrate that the use of fundamental knowledge can make the general model more common for technological use compared to a sophisticated artificial neural network. Experimental validation of the proposed model involved data analysis of six cultivation experiments compared to 19 experiments used for model fitting and parameter estimation.
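For context, the classic Luedeking-Piret product-formation law that the paper's improved model builds on, dP/dt = α·dX/dt + β·X, can be integrated with forward Euler over a sampled biomass trajectory. The parameters and trajectory below are illustrative, not fitted values from the paper:

```python
def luedeking_piret(alpha, beta, biomass, dt=0.1, p0=0.0):
    """Integrate the textbook Luedeking-Piret law
        dP/dt = alpha * dX/dt + beta * X
    with forward Euler, where `biomass` is the sampled biomass
    concentration X(t) at spacing dt. Returns the product trajectory P."""
    P = [p0]
    for k in range(1, len(biomass)):
        dX = (biomass[k] - biomass[k - 1]) / dt   # growth-associated term
        dP = alpha * dX + beta * biomass[k - 1]   # plus non-growth-associated
        P.append(P[-1] + dP * dt)
    return P
```

With a constant biomass the growth-associated term vanishes and product accumulates at the rate β·X, which is a quick sanity check on any implementation.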
- Published
- 2019