14,110 results
Search Results
52. Text Matching in Insurance Question-Answering Community Based on an Integrated BiLSTM-TextCNN Model Fusing Multi-Feature.
- Author
-
Li, Zhaohui, Yang, Xueru, Zhou, Luli, Jia, Hongyu, and Li, Wenli
- Subjects
COMMUNITIES, QUESTION answering systems, NOUNS, ARTIFICIAL intelligence, KNOWLEDGE representation (Information theory), PARTS of speech
- Abstract
With the explosive popularity of ChatGPT, artificial intelligence question-answering systems have been pushed to a climax. Intelligent question-answering enables computers to simulate human habits of understanding a corpus through machine learning, so as to answer questions in professional fields. Obtaining more accurate answers to personalized questions in professional fields is the core task of intelligent question-answering research. As one of the key technologies of intelligent question-answering, the accuracy of text matching is tied to the development of the intelligent question-answering community. To address the polysemy of text, the Enhanced Representation through Knowledge Integration (ERNIE) model is used to obtain the word vector representation of the text, making up for the lack of prior knowledge in traditional word vector models. Additionally, because Chinese also poses problems of homophones and polyphones, this paper introduces the phonetic character sequence of the text to distinguish them. Furthermore, because the insurance field contains many proper nouns that are difficult to identify, proper nouns are given specially defined parts of speech after conventional part-of-speech tagging. After these three text-based semantic feature extensions, the Bi-directional Long Short-Term Memory (BiLSTM) and TextCNN models are used to extract the global and local features of the text, respectively, yielding a more comprehensive feature representation. Thus, a text-matching model integrating BiLSTM and TextCNN and fusing multiple features (MFBT) is proposed for the insurance question-answering community. The MFBT model aims to solve the problems that affect answer selection in the insurance question-answering community, such as proper nouns, nonstandard sentences, and sparse features.
Taking the question-and-answer data of the insurance library as the sample, the MFBT text-matching model is compared and evaluated against other models. The experimental results show that it achieves higher evaluation index values, including accuracy, recall, and F1, than the other models. A model trained on historical search data can better help users in the insurance question-answering community obtain the answers they need and improve their satisfaction. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
53. Automatic P-Phase-Onset-Time-Picking Method of Microseismic Monitoring Signal of Underground Mine Based on Noise Reduction and Multiple Detection Indexes.
- Author
-
Dai, Rui, Wang, Yibo, Zhang, Da, and Ji, Hu
- Subjects
MINES & mineral resources, MICROSEISMS, NOISE control, ELECTRONIC data processing, KURTOSIS, DATA analysis, TUNGSTEN
- Abstract
The underground pressure disaster caused by the exploitation of deep mineral resources has become a major hidden danger restricting the safe production of mines. Microseismic monitoring technology is a universally recognized means of underground pressure monitoring and early warning. In this paper, the wavelet-coefficient threshold denoising method in the time–frequency domain, the STA/LTA method, the AIC method, and the skew-and-kurtosis method are studied, and an automatic P-phase-onset-time-picking model based on noise reduction and multiple detection indexes is established. Analysis of microseismic signals collected by the monitoring system of the Coral Tungsten Mine in Guangxi shows that automatic P-phase onset time picking is realized and verifies the reliability of the proposed method. Picking accuracy is still guaranteed under the severe interference of background noise, power-frequency interference, and human activity in the underground mine, which is of great significance for the data processing and analysis of microseismic monitoring. [ABSTRACT FROM AUTHOR]
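Of the detection indexes listed above, the STA/LTA ratio is the simplest to illustrate on its own. The sketch below is a minimal stand-in for that single index, not the authors' full pipeline (which also combines wavelet denoising, AIC, and skew/kurtosis criteria); the window lengths, threshold, and synthetic trace are illustrative assumptions.

```python
def sta_lta_pick(signal, sta_len=5, lta_len=50, threshold=3.0):
    """Return the first index where the short-term/long-term average
    energy ratio exceeds `threshold`, or None if it never does."""
    energy = [s * s for s in signal]
    for i in range(lta_len, len(signal) - sta_len):
        lta = sum(energy[i - lta_len:i]) / lta_len   # long-term (background) level
        sta = sum(energy[i:i + sta_len]) / sta_len   # short-term (current) level
        if lta > 0 and sta / lta >= threshold:
            return i
    return None

# Synthetic trace: low-level noise for 200 samples, then a strong arrival.
noise = [0.01 * (-1) ** k for k in range(200)]
event = [1.0] * 100
pick = sta_lta_pick(noise + event)  # fires once the STA window touches the arrival
```

Note that the trigger fires a few samples before the nominal onset (here at index 196 for an arrival at sample 200) because the short-term window looks ahead; practical pickers then refine the onset, for example with an AIC criterion, after the STA/LTA trigger.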
- Published
- 2023
- Full Text
- View/download PDF
54. DAT-MT Accelerated Graph Fusion Dependency Parsing Model for Small Samples in Professional Fields.
- Author
-
Li, Rui, Shu, Shili, Wang, Shunli, Liu, Yang, Li, Yanhao, and Peng, Mingjun
- Subjects
DEEP learning, PARSING (Computer grammar), INFORMATION technology, TIME complexity, KNOWLEDGE graphs, INFORMATION overload, FEATURE extraction
- Abstract
The rapid development of information technology has made the amount of information in massive texts far exceed human intuitive cognition, and dependency parsing can effectively deal with information overload. Against the background of domain specialization, the migration and application of syntactic treebanks and speed improvements in syntactic analysis models become the key to efficient syntactic analysis. To realize domain migration of syntactic treebanks and improve the speed of text parsing, this paper proposes a novel approach: the Double-Array Trie and Multi-threading (DAT-MT) accelerated graph fusion dependency parsing model. It effectively combines the specialized syntactic features from a small-scale professional-field corpus with the generalized syntactic features from a large-scale news corpus, which improves the accuracy of syntactic relation recognition. To address the high space and time complexity brought by the graph fusion model, the DAT-MT method is proposed. It realizes the rapid mapping of massive Chinese character features to the model's prior parameters and the parallel processing of calculation, thereby improving the parsing speed. The experimental results show that the unlabeled attachment score (UAS) and the labeled attachment score (LAS) of the model are improved by 13.34% and 14.82% compared with the model trained only on the professional-field corpus, and by 3.14% and 3.40% compared with the model trained only on the news corpus; both indicators are better than those of the deep-learning-based DDParser and LTP 4 methods. Additionally, the method achieves a speedup of about 3.7 times compared to a method with a red-black-tree index and a single thread. Efficient and accurate syntactic analysis methods will benefit the real-time processing of massive texts in professional fields, such as multi-dimensional semantic correlation, professional feature extraction, and domain knowledge graph construction. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
55. Research on Three-Phase Asynchronous Motor Fault Diagnosis Based on Multiscale Weibull Dispersion Entropy.
- Author
-
Xie, Fengyun, Sun, Enguang, Zhou, Shengtong, Shang, Jiandong, Wang, Yang, and Fan, Qiuyang
- Subjects
FAULT diagnosis, ENTROPY, PARTICLE swarm optimization, FEATURE extraction, SUPPORT vector machines, WEIBULL distribution
- Abstract
Three-phase asynchronous motors have a wide range of applications in the machinery industry, and fault diagnosis aids in the healthy operation of a motor. In order to improve the accuracy and generalization of fault diagnosis in three-phase asynchronous motors, this paper proposes a fault diagnosis method based on the combination of multiscale Weibull dispersion entropy (WB-MDE) and particle swarm optimization–support vector machine (PSO-SVM). Firstly, the Weibull distribution (WB) is used to linearize and smooth the vibration signals to obtain sharper information about the motor state. Secondly, quantitative features of the regularity and orderliness of a given sequence are extracted using multiscale dispersion entropy (MDE). Then, a classifier is constructed with a support vector machine (SVM), its parameters are optimized via the particle swarm optimization (PSO) algorithm, and the extracted feature vectors are fed into the optimized SVM model for classification and recognition. Finally, the accuracy and generalization of the proposed model are tested by adding Gaussian white noise with different signal-to-noise ratios to the raw data and by using the CHIST-ERA SOON public dataset. An experimental platform for three-phase asynchronous motor vibration signals is built, and data for four motor states are collected with a piezoelectric acceleration sensor to verify the effectiveness of the proposed method. On the collected data, feature extraction with WB-MDE followed by fault classification with the PSO-optimized SVM achieves an accuracy of 100%. Finally, the superiority of the method is verified through experiments as well as noise-immunity and generalization tests. [ABSTRACT FROM AUTHOR]
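As a rough illustration of the dispersion-entropy building block of WB-MDE, here is a plain-Python sketch of single-scale dispersion entropy with the usual normal-CDF class mapping. The embedding dimension `m`, class count `c`, and test signal are illustrative assumptions, and the Weibull smoothing and multiscale coarse-graining steps of the paper are omitted.

```python
import math
from collections import Counter

def dispersion_entropy(x, m=2, c=3):
    """Normalized (0..1) dispersion entropy of sequence x with embedding
    dimension m and c classes, using a normal-CDF sample-to-class mapping."""
    mu = sum(x) / len(x)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in x) / len(x))
    ncdf = lambda v: 0.5 * (1.0 + math.erf((v - mu) / (sigma * math.sqrt(2))))
    # Map each sample to a class 1..c, then count patterns of length m.
    classes = [min(c, max(1, round(c * ncdf(v) + 0.5))) for v in x]
    patterns = Counter(tuple(classes[i:i + m]) for i in range(len(classes) - m + 1))
    total = sum(patterns.values())
    h = -sum((n / total) * math.log(n / total) for n in patterns.values())
    return h / math.log(c ** m)  # normalize by the number of possible patterns

# An alternating signal uses only two of the c**m possible dispersion
# patterns, so its entropy is close to log(2)/log(c**m) ≈ 0.315.
de = dispersion_entropy([0.0, 1.0] * 50)
```

A more irregular signal activates more of the `c**m` patterns and drives the normalized value toward 1, which is what makes the measure useful as a regularity feature.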
- Published
- 2023
- Full Text
- View/download PDF
56. A Joint Extraction Model for Entity Relationships Based on Span and Cascaded Dual Decoding.
- Author
-
Liao, Tao, Sun, Haojie, and Zhang, Shunxiang
- Subjects
LANGUAGE models
- Abstract
The entity–relationship joint extraction model plays a significant role in entity relationship extraction, yet existing models cannot effectively identify entity–relationship triples with overlapping relations. This paper proposes a new joint entity–relationship extraction model based on spans and cascaded dual decoding. The model includes a Bidirectional Encoder Representations from Transformers (BERT) encoding layer, a relation decoding layer, and an entity decoding layer. The model first feeds the input text into the BERT pretrained language model to obtain word vectors. Then, it divides the word vectors into spans to form a span sequence and decodes the relations between spans to obtain the relation types present in the span sequence. Finally, the entity decoding layer fuses the span sequence with the relation types obtained by relation decoding and uses a bi-directional long short-term memory (Bi-LSTM) neural network to obtain the head entity and tail entity in the span sequence. Through the combination of span division and cascaded dual decoding, overlapping relations in the text can be effectively identified. Experiments show that, compared with other baseline models, the F1 value of the model is effectively improved on the NYT and WebNLG datasets. [ABSTRACT FROM AUTHOR]
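The span-division step described above can be sketched independently of the model: candidate spans are enumerated up to a maximum width and later scored by the decoders. A minimal sketch, in which the sentence and the width limit are illustrative assumptions:

```python
def enumerate_spans(tokens, max_width=3):
    """All candidate spans (start, end inclusive, text) up to max_width
    tokens — the usual first stage of span-based joint extraction."""
    spans = []
    for start in range(len(tokens)):
        for end in range(start, min(start + max_width, len(tokens))):
            spans.append((start, end, " ".join(tokens[start:end + 1])))
    return spans

# 6 tokens with max_width=2 yield 11 candidate spans, including "Marie Curie".
spans = enumerate_spans(["Marie", "Curie", "was", "born", "in", "Warsaw"], max_width=2)
```

Because multi-token entities appear as whole spans, two relation triples that share an entity (the overlapping-relation case) simply reuse the same span, which is why span-based decoding handles overlap more gracefully than per-token tagging.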
- Published
- 2023
- Full Text
- View/download PDF
57. Physical-Layer Security, Quantum Key Distribution, and Post-Quantum Cryptography.
- Author
-
Djordjevic, Ivan B.
- Subjects
QUANTUM cryptography, CRYPTOGRAPHY, FREE-space optical technology, QUANTUM computers, LINEAR network coding
- Abstract
To solve these problems, various schemes providing perfect/unconditional security have been proposed, including physical-layer security (PLS), quantum key distribution (QKD), and post-quantum cryptography. The authors introduce the unambiguous state discrimination measurement and the photon-number-splitting attack against PM-QKD with imperfect phase randomization, demonstrating the rigorous security of decoy-state PM-QKD with a discrete-phase randomization protocol. The topics addressed in this Special Issue include physical-layer security [[2]], quantum key distribution (QKD) [[2]], post-quantum cryptography [[6]], quantum-enhanced cryptography [[7]], stealth communication [[2]], and covert communication [[8]]. In the third article [[13]], the authors introduce an open-destination MDI QKD network that provides security against untrusted relays and all detector side-channel attacks, in which all users are capable of distributing keys with the help of other users. [Extracted from the article]
- Published
- 2022
- Full Text
- View/download PDF
58. Non-Thermal Solar Wind Electron Velocity Distribution Function.
- Author
-
Yoon, Peter H., López, Rodrigo A., Salem, Chadi S., Bonnell, John W., and Kim, Sunjung
- Subjects
DISTRIBUTION (Probability theory), ELECTRON distribution, SOLAR wind, WIND speed, PLASMA turbulence, SPACE environment
- Abstract
The quiet-time solar wind electrons feature non-thermal characteristics when viewed from the perspective of their velocity distribution functions. They typically have the appearance of being composed of a denser thermal "core" population plus a tenuous energetic "halo" population. At first, such a feature was empirically fitted with the kappa velocity-space distribution function, but ever since the ground-breaking work by Tsallis, the space physics community has embraced the potential implication of the kappa distribution as reflecting the non-extensive nature of the space plasma. From the viewpoint of microscopic plasma theory, the formation of the non-thermal electron velocity distribution function can be interpreted in terms of the plasma being in a state of turbulent quasi-equilibrium. Such a finding brings forth the possible existence of a profound inter-relationship between the non-extensive statistical state and the turbulent quasi-equilibrium state. The present paper further develops the idea of solar wind electrons being in turbulent quasi-equilibrium, but, unlike the previous model, which involves the electrostatic turbulence near the plasma oscillation frequency (i.e., Langmuir turbulence), the present paper considers the impact of transverse electromagnetic turbulence, particularly turbulence in the whistler-mode frequency range. It is found that the coupling of spontaneously emitted thermal fluctuations and the background turbulence leads to the formation of a non-thermal electron velocity distribution function of the type observed in the solar wind during quiet times. This demonstrates that the whistler-range turbulence represents an alternative mechanism for producing the kappa-like non-thermal distribution, especially close to the Sun and in the near-Earth space environment. [ABSTRACT FROM AUTHOR]
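For orientation, the kappa distribution referred to in the abstract is commonly written in its standard isotropic form (n is the electron density, θ an effective thermal speed, Γ the gamma function; this is the textbook form, not a formula quoted from this paper):

```latex
f_\kappa(v) \;=\; \frac{n}{(\pi \kappa \theta^2)^{3/2}}
\,\frac{\Gamma(\kappa+1)}{\Gamma\!\left(\kappa-\tfrac{1}{2}\right)}
\left(1 + \frac{v^2}{\kappa\theta^2}\right)^{-(\kappa+1)}
```

As κ → ∞ this reduces to a Maxwellian, while finite κ produces the enhanced power-law "halo" tail described above.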
- Published
- 2024
- Full Text
- View/download PDF
59. Why Does Cross-Sectional Analyst Coverage Incorporate Market-Wide Information?
- Author
-
Hou, Yunfei and Hu, Changsheng
- Subjects
MAXIMUM entropy method, DISTRIBUTION (Probability theory), SECURITIES analysts, EARNINGS forecasting
- Abstract
This paper shows that the empirical distribution of cross-sectional analyst coverage in China's stock markets follows an exponential law in any given month from 2011 to 2020. The findings hold in both the emerging market (Shanghai) and the developed market (Hong Kong). Moreover, the unique distribution parameter (i.e., the mean) is directly related to the amount of market-wide information. Average analyst coverage exhibits significant negative predictive power for stock-market uncertainty, highlighting the role of security analysts in diminishing total uncertainty. The exponential law can be derived from the maximum entropy principle (MEP): when analysts, constrained by average ability in generating information (i.e., the first-order moment), strive to maximize the amount of market-wide information, this objective yields the exponential distribution. Contrary to the conventional wisdom that security analysts specialize in generating firm-specific information, empirical findings for 25 countries suggest that analysts primarily produce market-wide information. Nevertheless, it has remained unclear why cross-sectional analyst coverage reflects market-wide information; this paper provides an entropy-based explanation. [ABSTRACT FROM AUTHOR]
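The maximum-entropy step invoked above can be stated compactly: maximizing differential entropy subject to normalization and a fixed mean μ (the first-order-moment constraint) yields the exponential law. This is the standard MEP result, written out here for completeness:

```latex
\max_{p}\; -\!\int_0^\infty p(x)\ln p(x)\,dx
\quad\text{s.t.}\quad \int_0^\infty p(x)\,dx = 1,\qquad
\int_0^\infty x\,p(x)\,dx = \mu
\;\;\Longrightarrow\;\; p(x) = \frac{1}{\mu}\,e^{-x/\mu}
```

The Lagrangian stationarity condition, $-\ln p(x) - 1 - \lambda_0 - \lambda_1 x = 0$, forces $p$ to be exponential in $x$, and the two constraints fix the rate at $1/\mu$.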
- Published
- 2024
- Full Text
- View/download PDF
60. A Kalman Filtering Algorithm for Measurement Interruption Based on Polynomial Interpolation and Taylor Expansion.
- Author
-
Cheng, Jianhua, Wang, Zili, Qi, Bing, and Wang, He
- Subjects
KALMAN filtering, TAYLOR'S series, INERTIAL navigation systems, ADAPTIVE filters, ALGORITHMS, ANGLES, LOCALIZATION (Mathematics)
- Abstract
Combined SINS/GPS navigation systems have been widely used. However, when such a system travels between tall buildings, under tree cover, or through tunnels, frequent signal blocking interrupts the GPS signal, and the combined SINS/GPS navigation method degenerates into pure inertial navigation, leading to the accumulation of navigation errors. In this paper, an adaptive Kalman filtering algorithm based on polynomial fitting and a Taylor expansion is proposed. Using the navigation information output by the inertial navigation system, the polynomial interpolation method is used to construct the velocity and position equations of the carrier, and the Taylor expansion is then used to construct a virtual measurement at the moment of GPS signal interruption, which compensates for the missing measurement information in the combined SINS/GPS navigation system when the GPS signal is interrupted. The results of computer simulation experiments and road measurement tests based on a loosely coupled SINS/GPS navigation system show that, when the carrier faces a GPS signal interruption, the proposed algorithm achieves higher accuracy in attitude angle estimation, velocity estimation, and position localization than a combined SINS/GPS navigation algorithm that takes no rescue measures, and the system possesses higher stability. [ABSTRACT FROM AUTHOR]
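The virtual-measurement idea can be sketched with a Lagrange quadratic through the last three position fixes; extrapolating it to the outage instant plays the role of the missing GPS measurement. This is a minimal stand-in, not the paper's exact interpolation/Taylor construction; the times and positions are illustrative assumptions.

```python
def quadratic_extrapolate(t0, t1, t2, p0, p1, p2, t):
    """Lagrange quadratic through (t0,p0), (t1,p1), (t2,p2), evaluated at t.
    During a GPS outage, the extrapolated position can serve as a virtual
    measurement in the Kalman filter update step."""
    return (p0 * (t - t1) * (t - t2) / ((t0 - t1) * (t0 - t2))
            + p1 * (t - t0) * (t - t2) / ((t1 - t0) * (t1 - t2))
            + p2 * (t - t0) * (t - t1) / ((t2 - t0) * (t2 - t1)))

# Position samples taken from p(t) = 2 + 3t + 0.5 t^2 at t = 0, 1, 2;
# extrapolating to t = 3 recovers the true value 15.5 exactly.
virtual = quadratic_extrapolate(0, 1, 2, 2.0, 5.5, 10.0, 3.0)
```

A quadratic reproduces constant-acceleration motion exactly; for longer outages the extrapolation error grows, which is why such schemes only bridge short interruptions.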
- Published
- 2024
- Full Text
- View/download PDF
61. It Ain't Necessarily So: Ludwig Boltzmann's Darwinian Notion of Entropy.
- Author
-
Gimbel, Steven
- Subjects
SECOND law of thermodynamics, HISTORY of physics, PHYSICAL laws, TOPOLOGICAL entropy
- Abstract
Ludwig Boltzmann's move in his seminal paper of 1877, introducing a statistical understanding of entropy, was a watershed moment in the history of physics. The work not only introduced quantization and provided a new understanding of entropy, but also challenged the understanding of what a law of nature could be. Traditionally, nomological necessity, that is, specifying the way in which a system must develop, was considered an essential element of proposed physical laws. Yet here was a new understanding of the Second Law of Thermodynamics that no longer possessed this property. While it was a new direction in physics, similar approaches were being taken in other important scientific discourses of the time, specifically Huttonian geology and Darwinian evolution, in which a system's development followed principles, but did so in a way that both provided a direction of time and allowed for non-deterministic, though rule-based, time evolution. Boltzmann referred to both of these theories, especially the work of Darwin, frequently. The possibility that Darwin influenced Boltzmann's thought in physics can be seen as being supported by Boltzmann's later writings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
62. Fault Diagnosis Method for Rolling Bearings Based on Grey Relation Degree.
- Author
-
Mao, Yulin, Xin, Jianghui, Zang, Liguo, Jiao, Jing, and Xue, Cheng
- Subjects
ROLLER bearings, FAULT diagnosis, HILBERT-Huang transform, DIAGNOSIS methods, LIFE cycles (Biology), ENTROPY
- Abstract
To address the difficulty of extracting fault characteristics and the low accuracy of fault diagnosis over the full life cycle of rolling bearings, a fault diagnosis method based on grey relation degree is proposed in this paper. Firstly, the subtraction-average-based optimizer is used to optimize the parameters of the variational mode decomposition algorithm. Secondly, the vibration signals of the bearings are decomposed using the optimized parameters, and the feature vector of the intrinsic mode function component corresponding to the minimum envelope entropy is extracted. Finally, the grey proximity and similarity relation degrees based on standard distance entropy are weighted to calculate the grey comprehensive relation degree between the feature vector of the vibration signals and each standard state. By comparing the results, different fault states and degrees of rolling bearings are diagnosed. The XJTU-SY dataset was used for experimentation, and the results show that the proposed method achieves a diagnostic accuracy of 95.24% and better diagnostic performance than various other algorithms. It provides a reference for the fault diagnosis of rolling bearings throughout the full life cycle. [ABSTRACT FROM AUTHOR]
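Deng's grey relational coefficient, the building block behind proximity/similarity relation degrees like those above, can be sketched as follows. This is the textbook single-candidate form with distinguishing coefficient ρ = 0.5, not the paper's entropy-weighted combination, and the vectors are illustrative assumptions.

```python
def grey_relational_grade(reference, candidate, rho=0.5):
    """Deng's grey relational grade between a reference feature vector and
    a candidate (standard-state) vector; rho is the distinguishing coefficient."""
    deltas = [abs(r - c) for r, c in zip(reference, candidate)]
    d_min, d_max = min(deltas), max(deltas)
    if d_max == 0:  # identical sequences: maximal relation
        return 1.0
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)

ref = [0.2, 0.5, 0.9]
same = grey_relational_grade(ref, ref)             # identical state -> 1.0
far = grey_relational_grade(ref, [0.9, 0.1, 0.2])  # dissimilar state -> < 1
```

Diagnosis then amounts to computing the grade between a measured feature vector and each standard fault state and picking the state with the largest grade.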
- Published
- 2024
- Full Text
- View/download PDF
63. Opinion Models, Election Data, and Political Theory.
- Author
-
Gsänger, Matthias, Hösel, Volker, Mohamad-Klotzbach, Christoph, and Müller, Johannes
- Subjects
POLITICAL science, ELECTIONS, ELECTION forecasting, STATISTICAL physics, STOCHASTIC models, STATISTICAL models
- Abstract
A unifying setup for opinion models originating in statistical physics and stochastic opinion dynamics is developed and used to analyze election data. The results are interpreted in the light of political theory. We investigate the connection between Potts (Curie–Weiss) models and stochastic opinion models through the Boltzmann distribution and stochastic Glauber dynamics. We particularly find that the q-voter model can be considered a natural extension of the Zealot model, adapted by Lagrangian parameters. We also discuss weak-effects and strong-effects (also called extensive and nonextensive) continuum limits for the models. The results are used to compare the Curie–Weiss model, two q-voter models (weak and strong effects), and a reinforcement model (weak effects) in explaining electoral outcomes in four western democracies (United States, Great Britain, France, and Germany). We find that particularly the weak-effects models are able to fit the data (Kolmogorov–Smirnov test), with the weak-effects reinforcement model performing best (AIC). Additionally, we show how the institutional structure shapes the process of opinion formation. By focusing on the dynamics of opinion formation preceding the act of voting, the models discussed in this paper give insights both into the empirical explanation of elections as such and into important aspects of the theory of democracy. This paper thus shows the usefulness of an interdisciplinary approach in studying real-world political outcomes with mathematical models. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
64. Genetic Algorithm for Feature Selection Applied to Financial Time Series Monotonicity Prediction: Experimental Cases in Cryptocurrencies and Brazilian Assets.
- Author
-
Contreras, Rodrigo Colnago, Xavier da Silva, Vitor Trevelin, Xavier da Silva, Igor Trevelin, Viana, Monique Simplicio, Santos, Francisco Lledo dos, Zanin, Rodrigo Bruno, Martins, Erico Fernandes Oliveira, and Guido, Rodrigo Capobianco
- Subjects
MACHINE learning, GENETIC algorithms, TIME series analysis, CRYPTOCURRENCIES, FEATURE selection, INVESTORS, ASSETS (Accounting)
- Abstract
Since financial assets on stock exchanges were created, investors have sought to predict their future values. Currently, cryptocurrencies are also seen as assets. Machine learning is increasingly adopted to assist and automate investments. The main objective of this paper is to make daily predictions about the movement direction of financial time series through classification models, financial time series preprocessing methods, and feature selection with genetic algorithms. The target time series are Bitcoin, Ibovespa, and Vale. The methodology of this paper includes the following steps: collecting time series of financial assets; data preprocessing; feature selection with genetic algorithms; and the training and testing of machine learning models. The results were obtained by evaluating the models with the area under the ROC curve metric. For the best prediction models for Bitcoin, Ibovespa, and Vale, values of 0.61, 0.62, and 0.58 were obtained, respectively. In conclusion, the feature selection allowed the improvement of performance in most models, and the input series in the form of percentage variation obtained a good performance, although it was composed of fewer attributes in relation to the other sets tested. [ABSTRACT FROM AUTHOR]
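The genetic-algorithm feature-selection step can be sketched with a toy objective; real use would plug in a classifier's validation score as the fitness. The informative-feature set, penalty, and GA hyperparameters below are illustrative assumptions, not values from the paper.

```python
import random

def fitness(mask, informative={0, 1, 2}, penalty=0.1):
    """Toy objective: reward selecting informative features, penalize noise ones."""
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & informative) - penalty * len(chosen - informative)

def ga_select(n_features=10, pop_size=20, generations=40, p_mut=0.1, seed=1):
    """Elitist GA over feature bitmasks: truncation selection,
    one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]              # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_features)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga_select()
best_fit = fitness(best)
```

Because the elite half is carried forward unchanged, the best fitness never decreases across generations; that monotonicity is the usual argument for elitism in this setting.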
- Published
- 2024
- Full Text
- View/download PDF
65. Federated Learning Backdoor Attack Based on Frequency Domain Injection.
- Author
-
Liu, Jiawang, Peng, Changgen, Tan, Weijie, and Shi, Chenghui
- Subjects
FEDERATED learning, MACHINE learning, IMAGE recognition (Computer vision), FOURIER transforms
- Abstract
Federated learning (FL) is a distributed machine learning framework that enables scattered participants to collaboratively train machine learning models without revealing information to other participants. Due to its distributed nature, FL is susceptible to being manipulated by malicious clients. These malicious clients can launch backdoor attacks by contaminating local data or tampering with local model gradients, thereby damaging the global model. However, existing backdoor attacks in distributed scenarios have several vulnerabilities. For example, (1) the triggers in distributed backdoor attacks are mostly visible and easily perceivable by humans; (2) these triggers are mostly applied in the spatial domain, inevitably corrupting the semantic information of the contaminated pixels. To address these issues, this paper introduces a frequency-domain injection-based backdoor attack in FL. Specifically, by performing a Fourier transform, the trigger and the clean image are linearly mixed in the frequency domain, injecting the low-frequency information of the trigger into the clean image while preserving its semantic information. Experiments on multiple image classification datasets demonstrate that the attack method proposed in this paper is stealthier and more effective in FL scenarios compared to existing attack methods. [ABSTRACT FROM AUTHOR]
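The frequency-domain injection can be illustrated in one dimension with a plain DFT. The paper operates on 2-D image spectra; the signals, blend ratio α, and band width here are illustrative assumptions.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def inject_low_freq(clean, trigger, alpha=0.1, band=2):
    """Linearly blend the lowest `band` frequency bins of `trigger` into
    `clean`, leaving the rest of the (semantic) spectrum untouched."""
    C, T = dft(clean), dft(trigger)
    n = len(clean)
    mixed = list(C)
    for k in list(range(band)) + list(range(n - band + 1, n)):  # low bins + conjugates
        mixed[k] = (1 - alpha) * C[k] + alpha * T[k]
    return idft(mixed)

clean = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
trigger = [1.0] * 8
poisoned = inject_low_freq(clean, trigger, alpha=0.1)
```

With a constant trigger only the DC bin changes, so the poisoned signal is the clean one shifted by α times the trigger mean; setting α = 0 returns the input unchanged, a quick sanity check on the transform pair.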
- Published
- 2024
- Full Text
- View/download PDF
66. Entropy of the Canonical Occupancy (Macro) State in the Quantum Measurement Theory.
- Author
-
Spalvieri, Arnaldo
- Subjects
QUANTUM theory, QUANTUM measurement, UNCERTAINTY (Information theory), QUANTUM states, MULTINOMIAL distribution, ENTROPY, STATISTICAL mechanics
- Abstract
The paper analyzes the probability distribution of the occupancy numbers and the entropy of a system at equilibrium composed of an arbitrary number of non-interacting bosons. The probability distribution is obtained through two approaches: one involves tracing out the environment from a bosonic eigenstate of the combined environment and system of interest (the empirical approach), while the other involves tracing out the environment from the mixed state of the combined environment and system of interest (the Bayesian approach). In the thermodynamic limit, the two coincide and are equal to the multinomial distribution. Furthermore, the paper proposes identifying the physical entropy of the bosonic system with the Shannon entropy of the occupancy numbers, fixing certain contradictions that arise in the classical analysis of thermodynamic entropy. Finally, by leveraging an information-theoretic inequality between the entropy of the multinomial distribution and the entropy of the multivariate hypergeometric distribution, the Bayesianism of information theory and the empiricism of statistical mechanics are integrated into a common "infomechanical" framework. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
67. Artificial Intelligence and Computational Issues in Engineering Applications.
- Author
-
Grabowska, Karolina, Krzywanski, Jaroslaw, Sosnowski, Marcin, and Skrobek, Dorian
- Subjects
COMPUTATIONAL intelligence, ARTIFICIAL intelligence, DEEP learning, ENGINEERING, REINFORCEMENT learning, FLUIDIZED-bed combustion, CURVE fitting, MASS transfer
- Abstract
The experimental results presented in the paper, achieved using real datasets from Shanghai Telecom, indicate that DQN-ESPA outperforms state-of-the-art algorithms such as the simulated annealing placement algorithm, Top-K placement algorithm, K-Means placement algorithm, and random placement algorithm. High-performance supercomputers and emerging computing clusters created in research and development centres are rapidly increasing the available computing power, which scientists are eager to use to implement increasingly advanced computing methods [[1]]. Thus, computationally demanding artificial intelligence algorithms and computational fluid dynamics methods are used more widely to address complex engineering issues and to verify and provide new information on entropy and information-theory concepts [[2]]. As can be seen above, original research articles, as well as review articles, focused on optimization with artificial intelligence (AI) algorithms and on computational and entropy issues have been submitted to the Special Issue. [Extracted from the article]
- Published
- 2023
- Full Text
- View/download PDF
68. Transient GI/MSP/1/N Queue.
- Author
-
Chydzinski, Andrzej
- Subjects
TRANSIENT analysis, STANDARD deviations
- Abstract
A non-zero correlation between service times can be encountered in many real queueing systems. An attractive model for correlated service times is the Markovian service process, because it offers powerful fitting capabilities combined with analytical tractability. In this paper, a transient study of the queue length in a model with MSP services and a general distribution of interarrival times is performed. In particular, two theorems are proven: one on the queue length distribution at a particular time t, where t can be arbitrarily small or large, and another on the mean queue length at t. In addition to the theorems, multiple numerical examples are provided. They illustrate the development over time of the mean queue length and the standard deviation, along with the complete distribution, depending on the service correlation strength, initial system conditions, and the interarrival time variance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
69. Motor Fault Diagnosis Based on Convolutional Block Attention Module-Xception Lightweight Neural Network.
- Author
-
Xie, Fengyun, Fan, Qiuyang, Li, Gang, Wang, Yang, Sun, Enguang, and Zhou, Shengtong
- Subjects
CONVOLUTIONAL neural networks, FAULT diagnosis, DEEP learning, MOTOR learning, GRAYSCALE model
- Abstract
Electric motors play a crucial role in self-driving vehicles; therefore, fault diagnosis in motors is important for ensuring the safety and reliability of vehicles. In order to improve fault detection performance, this paper proposes a motor fault diagnosis method based on vibration signals. Firstly, the vibration signals of each operating state of the motor at different frequencies are measured with vibration sensors. Secondly, Gramian Angular Field image coding is used to encode the time-domain information, transforming the one-dimensional vibration signals into grayscale images that highlight their features. Finally, the lightweight neural network Xception is chosen as the main tool, and the Convolutional Block Attention Module (CBAM) attention mechanism is introduced into the model to emphasize the important characteristic information of motor faults and realize their accurate identification. Xception is a type of convolutional neural network; its lightweight design maintains excellent performance while significantly reducing the model's size. Without affecting the computational complexity or accuracy of the network, the CBAM attention mechanism is added, and the Gramian Angular Field encoding is combined with the improved lightweight neural network. The experimental results show that this model achieves better recognition and faster iteration compared with the traditional Convolutional Neural Network (CNN), ResNet, and Xception networks. [ABSTRACT FROM AUTHOR]
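The Gramian Angular Field encoding that turns a 1-D signal into an image can be sketched directly (summation-field variant; the input series is an illustrative assumption, and real use would rasterize the matrix to a grayscale image):

```python
import math

def gasf(series):
    """Gramian Angular Summation Field: rescale the series to [-1, 1],
    map each value to an angle phi = arccos(x), build cos(phi_i + phi_j)."""
    lo, hi = min(series), max(series)
    scaled = [2.0 * (v - lo) / (hi - lo) - 1.0 for v in series]
    phi = [math.acos(v) for v in scaled]
    return [[math.cos(a + b) for b in phi] for a in phi], scaled

G, scaled = gasf([0.0, 0.5, 1.0, 0.5])
```

The matrix is symmetric and its diagonal equals 2x_i² − 1 (from cos 2φ = 2cos²φ − 1), a quick identity for checking an implementation.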
- Published
- 2024
- Full Text
- View/download PDF
70. Some Results for Double Cyclic Codes over F_q + vF_q + v^2 F_q.
- Author
-
Deng, Tenghui and Yang, Jing
- Subjects
FINITE rings ,CODE generators ,FINITE fields ,POLYNOMIAL rings ,TWO-dimensional bar codes ,CYCLIC codes - Abstract
Let F_q be a finite field with an odd characteristic. In this paper, we present a new result about double cyclic codes over a finite non-chain ring. Specifically, we study the double cyclic code over F_q + vF_q + v^2 F_q with v^3 = v, which is isomorphic to F_q × F_q × F_q. This study mainly involves generator polynomials and generator matrices. The generating polynomial of the dual code is also obtained. We show the relationship between the generating polynomials of the double cyclic codes and those of their dual codes. Finally, as an application of these results, we construct some optimal codes over F_3. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
71. Robustness of Entanglement for Dicke-W and Greenberger-Horne-Zeilinger Mixed States.
- Author
-
Zhu, Ling-Hui, Zhu, Zhen, Lv, Guo-Lin, Ye, Chong-Qiang, and Chen, Xiao-Yu
- Subjects
QUANTUM entanglement ,QUANTUM states ,QUANTUM mechanics ,PROBABILITY theory ,WITNESSES - Abstract
Quantum entanglement is a fundamental characteristic of quantum mechanics, and understanding the robustness of entanglement across different mixed states is crucial for comprehending the entanglement properties of general quantum states. In this paper, the robustness of entanglement of Dicke–W and Greenberger–Horne–Zeilinger (GHZ) mixed states under different mixing ratios is calculated using the entanglement witness method. The robustness of entanglement of the Dicke–W and GHZ mixed states differs according to whether the probability ratio of the Dicke state to the W state is greater or less than 3/2. When this ratio is greater than or equal to 3/2, we study the robustness of entanglement of the Dicke and GHZ mixed states and analyze and calculate its upper and lower bounds. When the ratio is less than 3/2, we take an equal probability ratio of the Dicke and W states as an example and calculate and analyze the upper and lower bounds of the robustness of entanglement in detail. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
72. Golf Club Selection with AI-Based Game Planning.
- Author
-
Khazaeli, Mehdi and Javadpour, Leili
- Subjects
ARTIFICIAL intelligence ,ATHLETIC clubs ,DATA analytics ,GOLF ,GOLFERS - Abstract
In the dynamic realm of golf, where every swing can make the difference between victory and defeat, the strategic selection of golf clubs has become a crucial factor in determining the outcome of a game. Advancements in artificial intelligence have opened new avenues for enhancing the decision-making process, empowering golfers to achieve optimal performance on the course. In this paper, we introduce an AI-based game planning system that assists players in selecting the best club for a given scenario. The system considers factors such as distance, terrain, wind strength and direction, and quality of lie. A rule-based model provides the four best club options based on the player's maximum shot data for each club. The player picks a club, shot, and target, and a probabilistic classification model identifies whether the shot represents a birdie opportunity, par zone, bogey zone, or worse. The results of our model show that taking into account factors such as terrain and atmospheric features increases the likelihood of a better shot outcome. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
73. Nonparametric Expectile Shortfall Regression for Complex Functional Structure.
- Author
-
Alamari, Mohammed B., Almulhim, Fatimah A., Kaid, Zoulikha, and Laksaci, Ali
- Subjects
FINANCIAL risk ,COMPARATIVE studies - Abstract
This paper treats the problem of risk management through a new conditional expected shortfall function. The new risk metric is defined by the expectile as the shortfall threshold. A nonparametric estimator based on the Nadaraya–Watson approach is constructed. The asymptotic property of the constructed estimator is established using a functional time-series structure. We adopt some concentration inequalities to fit this complex structure and to precisely determine the convergence rate of the estimator. The easy implementation of the new risk metric is shown through real and simulated data. Specifically, we show the feasibility of the new model as a risk tool by examining its sensitivity to fluctuations in financial time-series data. Finally, a comparative study between the new shortfall and the standard one is conducted using real data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
74. Enhancing Security of Telemedicine Data: A Multi-Scroll Chaotic System for ECG Signal Encryption and RF Transmission.
- Author
-
Cárdenas-Valdez, José Ricardo, Ramírez-Villalobos, Ramón, Ramirez-Ubieta, Catherine, and Inzunza-Gonzalez, Everardo
- Subjects
ORTHOGONAL frequency division multiplexing ,DATA transmission systems ,DATA encryption ,AMPLITUDE modulation ,POWER amplifiers ,TELEMEDICINE ,EMAIL security - Abstract
Protecting sensitive patient data, such as electrocardiogram (ECG) signals, during RF wireless transmission is essential due to the increasing demand for secure telemedicine communications. This paper presents an innovative chaotic-based encryption system designed to enhance the security and integrity of telemedicine data transmission. The proposed system utilizes a multi-scroll chaotic system for ECG signal encryption based on master–slave synchronization. The ECG signal is encrypted by a master system and securely transmitted to a remote location, where it is decrypted by a slave system using an extended state observer. Synchronization between the master and slave is achieved through the Lyapunov criteria, which ensures system stability. The system also supports Orthogonal Frequency Division Multiplexing (OFDM) and adaptive n-quadrature amplitude modulation (n-QAM) schemes to optimize signal discretization. Experimental validations with a custom transceiver scheme confirmed the system's effectiveness in preventing channel overlap during 2.5 GHz transmissions. Additionally, a commercial RF Power Amplifier (RF-PA) for LTE applications and a development board were integrated to monitor transmission quality. The proposed encryption system ensures robust and efficient RF transmission of ECG data, addressing critical challenges in the wireless communication of sensitive medical information. This approach demonstrates the potential for broader applications in modern telemedicine environments, providing a reliable and efficient solution for the secure transmission of healthcare data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
75. An MLWE-Based Cut-and-Choose Oblivious Transfer Protocol.
- Author
-
Tang, Yongli, Guo, Menghao, Huo, Yachao, Zhao, Zongqu, Yu, Jinxia, and Qin, Baodong
- Subjects
POLYNOMIAL rings ,COMPUTATIONAL complexity ,MULTIPLICATION ,POLYNOMIALS - Abstract
The existing lattice-based cut-and-choose oblivious transfer protocol is constructed from the learning-with-errors (LWE) problem and is generally inefficient. An efficient cut-and-choose oblivious transfer protocol is proposed based on the hard module-learning-with-errors (MLWE) problem. Compression and decompression techniques are introduced into the LWE-based dual-mode encryption system to improve it into an MLWE-based dual-mode encryption framework, which is applied to the protocol as an intermediate scheme. Subsequently, the security and efficiency of the protocol are analysed; the security of the protocol can be reduced to the shortest independent vector problem (SIVP) on lattices, which is resistant to quantum attacks. Since the whole protocol performs its operations over a polynomial ring, the efficiency of polynomial modular multiplication can be improved by using the fast Fourier transform (FFT). Finally, this paper compares the protocol with an LWE-based protocol in terms of computational and communication complexities. The analysis results show that the protocol reduces the computation and communication overheads by at least a factor of n while maintaining the optimal number of communication rounds under malicious adversary attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
76. Sampled-Data Exponential Synchronization of Complex Dynamical Networks with Saturating Actuators.
- Author
-
Guo, Runan and Lv, Wenshun
- Subjects
OPTIMIZATION algorithms ,DYNAMICAL systems ,SYNCHRONIZATION ,ACTUATORS ,MEMORY - Abstract
This paper investigates the problem of exponential synchronization control for complex dynamical networks (CDNs) with input saturation. Considering the effects of transmission delay, a memory sampled-data controller is designed. A modified two-sided looped functional is constructed that takes into account the entire sampling period, including both current and delayed state information. This functional only needs to be positive definite at the sampling instants. Sufficient criteria and the controller design method are provided to ensure the exponential synchronization of CDNs with input saturation under transmission delay, as well as an estimate of the basin of attraction. Additionally, an optimization algorithm for enlarging the region of attraction is proposed. Finally, a numerical example is presented to verify the effectiveness of the conclusions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
77. Information Thermodynamics: From Physics to Neuroscience.
- Author
-
Karbowski, Jan
- Subjects
STATISTICAL physics ,NONEQUILIBRIUM thermodynamics ,COMPUTATIONAL neuroscience ,PARTICLE motion ,INFORMATION theory - Abstract
This paper provides a perspective on applying the concepts of information thermodynamics, developed recently in non-equilibrium statistical physics, to problems in theoretical neuroscience. Historically, information and energy in neuroscience have been treated separately, in contrast to physics approaches, where the relationship of entropy production with heat is a central idea. It is argued here that in neural systems, too, information and energy can be considered within the same theoretical framework. Starting from basic ideas of thermodynamics and information theory applied to a classic Brownian particle, it is shown how noisy neural networks can infer the particle's probabilistic motion. Neurons decode the particle's motion with some accuracy and at some energy cost, and both can be determined using information thermodynamics. In a similar fashion, we also discuss from a physical point of view how neural networks in the brain can learn the particle's velocity and maintain that information in the weights of plastic synapses. Generally, it is shown how the framework of stochastic and information thermodynamics can be used in practice to study neural inference, learning, and information storage. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
78. Design of Low-Latency Layered Normalized Minimum Sum Low-Density Parity-Check Decoding Based on Entropy Feature for NAND Flash-Memory Channel.
- Author
-
Li, Yingge and Hu, Haihua
- Subjects
ITERATIVE decoding ,DECODING algorithms ,ENTROPY ,INFORMATION processing ,MOTIVATION (Psychology) ,OPTICAL disks - Abstract
As high-speed big-data communications impose new requirements on storage latency, low-density parity-check (LDPC) codes have become a widely used technology in flash-memory channels. However, the iterative LDPC decoding algorithm suffers from high decoding latency due to its iterative message-passing mechanism. Motivated by the unbalanced bit reliability of the codeword, this paper proposes two techniques, i.e., serial entropy feature-based layered normalized min-sum (S-EFB-LNMS) decoding and parallel entropy feature-based layered normalized min-sum (P-EFB-LNMS) decoding. First, we construct an entropy feature vector that reflects the real-time bit reliability of the codeword. Then, the reliability of the output information of the layered processing unit (LPU) is evaluated by analyzing the similarity between the check matrix and the entropy feature vector. Based on this evaluation, we can dynamically allocate and schedule LPUs during the decoding iteration process, thereby optimizing the entire decoding process. Experimental results show that these techniques can significantly reduce decoding latency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
79. The Application of Tsallis Entropy Based Self-Adaptive Algorithm for Multi-Threshold Image Segmentation.
- Author
-
Zhang, Kailong, He, Mingyue, Dong, Lijie, and Ou, Congjie
- Subjects
UNCERTAINTY (Information theory) ,IMAGE segmentation ,INFRARED imaging ,REMOTE sensing ,COMPUTED tomography - Abstract
Tsallis entropy has been widely used in image thresholding because of its non-extensive properties. The non-extensive parameter q contained in this entropy plays an important role in various adaptive algorithms and has been successfully applied in bi-level image thresholding. In this paper, the relationships between parameter q and pixels' long-range correlations have been further studied within multi-threshold image segmentation. It is found that the pixels' correlations are remarkable and stable for images generated by a known physical principle, such as infrared images, medical CT images, and color satellite remote sensing images. The corresponding non-extensive parameter q can be evaluated by using the self-adaptive Tsallis entropy algorithm. The results of this algorithm are compared with those of the Shannon entropy algorithm and the original Tsallis entropy algorithm in terms of quantitative image quality evaluation metrics PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity). Furthermore, we observed that for image series with the same background, the q values determined by the adaptive algorithm are consistently kept in a narrow range. Therefore, similar or identical scenes during imaging would produce similar strength of long-range correlations, which provides potential applications for unsupervised image processing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
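The bi-level Tsallis-entropy criterion that the adaptive algorithm above builds on can be sketched as follows. This is an illustrative, non-adaptive version with a fixed q, following the classic Tsallis-thresholding formulation (maximize S_q(A) + S_q(B) + (1 − q)·S_q(A)·S_q(B) over candidate thresholds); the function name and the fixed q are our own choices, not the paper's self-adaptive algorithm:

```python
import numpy as np

def tsallis_threshold(hist, q=0.8):
    """Bi-level Tsallis-entropy threshold for a 256-bin grayscale histogram.

    Maximizes S_q(A) + S_q(B) + (1 - q) * S_q(A) * S_q(B) over thresholds t,
    where A and B are the background/foreground histogram partitions and
    S_q(p) = (1 - sum_i p_i^q) / (q - 1) is the Tsallis entropy.
    """
    p = hist / hist.sum()
    best_t, best_val = 0, -np.inf
    for t in range(1, 255):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue  # skip degenerate partitions
        a, b = p[:t] / pa, p[t:] / pb  # normalized class distributions
        sa = (1.0 - np.sum(a ** q)) / (q - 1.0)
        sb = (1.0 - np.sum(b ** q)) / (q - 1.0)
        val = sa + sb + (1.0 - q) * sa * sb
        if val > best_val:
            best_t, best_val = t, val
    return best_t
```

On a well-separated bimodal histogram, the selected threshold falls in the gap between the two modes; the self-adaptive version described in the abstract would additionally tune q from the pixels' long-range correlations.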
80. Probing Asymmetric Interactions with Time-Separated Mutual Information: A Case Study Using Golden Shiners.
- Author
-
Daftari, Katherine, Mayo, Michael L., Lemasson, Bertrand H., Biedenbach, James M., and Pilkiewicz, Kevin R.
- Subjects
INFORMATION theory ,CYPRINIDAE ,COLLECTIVE behavior ,TIME series analysis ,ANIMAL behavior - Abstract
Leader–follower modalities and other asymmetric interactions that drive the collective motion of organisms are often quantified using information-theoretic metrics like transfer or causation entropy. These metrics are difficult to evaluate accurately without far more data than are typically available from a time series of animal trajectories collected in the field or from experiments. In this paper, we use a generalized leader–follower model to argue that the time-separated mutual information between two organism positions can serve as an alternative metric for capturing asymmetric correlations, one that is much less data intensive and more accurately estimated by popular k-nearest-neighbor algorithms than transfer entropy. Our model predicts a local maximum of this mutual information at a time-separation value corresponding to the fundamental reaction timescale of the follower organism. We confirm this prediction by analyzing time-series trajectories recorded for a pair of golden shiner fish circling an annular tank. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
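The time-separated mutual information metric discussed above can be sketched with a simple binned (histogram) estimator. The abstract's analysis uses k-nearest-neighbor estimators; the binned estimator here is an illustrative stand-in chosen for brevity, and all names are ours:

```python
import numpy as np

def lagged_mi(x, y, tau, bins=16):
    """Time-separated mutual information I(x_t ; y_{t+tau}) between two
    1D trajectories, via a binned estimator of the joint distribution."""
    a, b = (x[:-tau], y[tau:]) if tau > 0 else (x, y)
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = pxy / pxy.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)         # marginal of x
    py = pxy.sum(axis=0, keepdims=True)         # marginal of y
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))
```

For a synthetic follower that copies the leader with a delay d (y[t] ≈ x[t − d]), scanning tau and locating the maximum of `lagged_mi` recovers d, mirroring the local maximum at the follower's reaction timescale predicted by the model.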
81. Bifurcation Diagrams of Nonlinear Oscillatory Dynamical Systems: A Brief Review in 1D, 2D and 3D.
- Author
-
Marszalek, Wieslaw and Walczak, Maciej
- Subjects
ELECTRIC arc ,BIFURCATION diagrams ,DYNAMICAL systems ,MATHEMATICAL models ,STEADY-state responses ,NONLINEAR dynamical systems ,LYAPUNOV exponents - Abstract
We discuss 1D, 2D and 3D bifurcation diagrams of two nonlinear dynamical systems: an electric arc system having both chaotic and periodic steady-state responses and a cytosolic calcium system with both periodic/chaotic and constant steady-state outputs. The diagrams are mostly obtained by using the 0–1 test for chaos, but other types of diagrams are also mentioned; for example, typical 1D diagrams with local maximum values of oscillatory responses (periodic and chaotic), the entropy method and the largest Lyapunov exponent approach. Important features and properties of each of the three classes of diagrams with one, two and three varying parameters in the 1D, 2D and 3D cases, respectively, are presented and illustrated via certain diagrams of the K values, −1 ≤ K ≤ 1, from the 0–1 test and the sample entropy values SampEn > 0. The K values close to 0 indicate periodic and quasi-periodic responses, while those close to 1 are for chaotic ones. The sample entropy 3D diagrams for an electric arc system are also provided to illustrate the variety of possible bifurcation diagrams available. We also provide a comparative study of the diagrams obtained using different methods, with the goal of obtaining diagrams that appear similar (or close to each other) for the same dynamical system. Three examples of such comparisons are provided, in the 1D, 2D and 3D cases. Additionally, this paper serves as a brief review of the many possible types of diagrams one can employ to identify and classify time series obtained either as numerical solutions of models of nonlinear dynamical systems or recorded in a laboratory environment when a mathematical model is unknown. In the concluding section, we present a brief overview of the advantages and disadvantages of using the 1D, 2D and 3D diagrams. Several illustrative examples are included. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
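The K statistic from the 0–1 test used for most of the diagrams above can be sketched as follows. This is the basic single-frequency version of the Gottwald–Melbourne test (in practice K is usually taken as the median over many random frequencies c in (0, π)); the function name and parameter choices are ours:

```python
import numpy as np

def zero_one_test_K(phi, c, n_cut=None):
    """Gottwald-Melbourne 0-1 test for chaos (single frequency c).

    Translates the series phi into the (p, q) plane, computes the mean
    square displacement M(n), and returns K = corr(n, M(n)).
    K near 0 indicates regular dynamics; K near 1 indicates chaos.
    """
    N = len(phi)
    j = np.arange(1, N + 1)
    p = np.cumsum(phi * np.cos(j * c))
    q = np.cumsum(phi * np.sin(j * c))
    n_cut = n_cut or N // 10          # only use n << N for M(n)
    M = np.array([np.mean((p[n:] - p[:-n]) ** 2 + (q[n:] - q[:-n]) ** 2)
                  for n in range(1, n_cut + 1)])
    n = np.arange(1, n_cut + 1)
    return float(np.corrcoef(n, M)[0, 1])
```

A chaotic series (e.g., the logistic map at r = 4) yields diffusive (p, q) dynamics and K close to 1, while a periodic series yields bounded (p, q) orbits and K close to 0, which is exactly the distinction the K-value diagrams exploit.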
82. Optimum Achievable Rates in Two Random Number Generation Problems with f -Divergences Using Smooth Rényi Entropy †.
- Author
-
Nomura, Ryo and Yagi, Hideki
- Subjects
RENYI'S entropy ,INFORMATION theory ,CONVEX functions ,DISTRIBUTION (Probability theory) - Abstract
Two typical fixed-length random number generation problems in information theory are considered for general sources. One is the source resolvability problem and the other is the intrinsic randomness problem. In each of these problems, the optimum achievable rate with respect to the given approximation measure is one of our main concerns and has been characterized using two different information quantities: the information spectrum and the smooth Rényi entropy. Recently, optimum achievable rates with respect to f-divergences have been characterized using the information spectrum quantity. The f-divergence is a general non-negative measure between two probability distributions on the basis of a convex function f. The class of f-divergences includes several important measures such as the variational distance, the KL divergence, the Hellinger distance and so on. Hence, it is meaningful to consider the random number generation problems with respect to f-divergences. However, optimum achievable rates with respect to f-divergences using the smooth Rényi entropy have not been clarified yet in both problems. In this paper, we try to analyze the optimum achievable rates using the smooth Rényi entropy and to extend the class of f-divergence. To do so, we first derive general formulas of the first-order optimum achievable rates with respect to f-divergences in both problems under the same conditions as imposed by previous studies. Next, we relax the conditions on f-divergence and generalize the obtained general formulas. Then, we particularize our general formulas to several specified functions f. As a result, we reveal that it is easy to derive optimum achievable rates for several important measures from our general formulas. Furthermore, a kind of duality between the resolvability and the intrinsic randomness is revealed in terms of the smooth Rényi entropy. Second-order optimum achievable rates and optimistic achievable rates are also investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
83. Lattice Boltzmann Simulation of Spatial Fractional Convection–Diffusion Equation.
- Author
-
Bi, Xiaohua and Wang, Huimin
- Subjects
LATTICE Boltzmann methods ,FRACTIONAL differential equations ,PARTIAL differential equations ,ADVECTION-diffusion equations ,COMPUTER simulation ,EQUATIONS ,TRANSPORT equation - Abstract
The space fractional advection–diffusion equation is an important type of fractional partial differential equation, widely used for its ability to describe natural phenomena more accurately. Because analytical approaches are complex, this paper focuses on its numerical investigation. A lattice Boltzmann model for the space fractional convection–diffusion equation is developed, and an error analysis is carried out. The equation is solved for several examples, and the validity of the model is confirmed by comparing its numerical solutions with those obtained from other methods. The results demonstrate that the lattice Boltzmann method is an effective tool for solving the space fractional convection–diffusion equation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
84. Quantum State Combinatorics.
- Author
-
Scholes, Gregory D.
- Subjects
QUANTUM states ,QUANTUM entanglement ,RANDOM graphs ,COMBINATORICS - Abstract
This paper concerns the analysis of large quantum states. Quantifying the separability of quantum states is a notoriously difficult problem, and for large quantum states it is infeasible. Here we posit that when quantum states are large, we can deduce reasonable expectations for the complex structure of non-classical multipartite correlations with surprisingly little information about the state. We show, with pedagogical examples, how known results from combinatorics can be used to reveal the expected structure of various correlations hidden in the ensemble described by a state. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
85. Random Transitions of a Binary Star in the Canonical Ensemble.
- Author
-
Chavanis, Pierre-Henri
- Subjects
METASTABLE states ,FIRST-order phase transitions ,CANONICAL ensemble ,GRAVITATIONAL interactions ,THERMODYNAMIC potentials ,FOKKER-Planck equation - Abstract
After reviewing the peculiar thermodynamics and statistical mechanics of self-gravitating systems, we consider the case of a "binary star" consisting of two particles of size a in gravitational interaction in a box of radius R. The caloric curve of this system displays a region of negative specific heat in the microcanonical ensemble, which is replaced by a first-order phase transition in the canonical ensemble. The free energy viewed as a thermodynamic potential exhibits two local minima that correspond to two metastable states separated by an unstable maximum forming a barrier of potential. By introducing a Langevin equation to model the interaction of the particles with the thermal bath, we study the random transitions of the system between a "dilute" state, where the particles are well separated, and a "condensed" state, where the particles are bound together. We show that the evolution of the system is given by a Fokker–Planck equation in energy space and that the lifetime of a metastable state is given by the Kramers formula involving the barrier of free energy. This is a particular case of the theory developed in a previous paper (Chavanis, 2005) for N Brownian particles in gravitational interaction associated with the canonical ensemble. In the case of a binary star ( N = 2 ), all the quantities can be calculated exactly analytically. We compare these results with those obtained in the mean field limit N → + ∞ . [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
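The Kramers lifetime invoked in the abstract has the standard Arrhenius-like form (a textbook result stated here for orientation, with ΔF the free-energy barrier between the metastable minimum and the unstable maximum of the free energy, and t₀ a prefactor set by the dynamics):

```latex
t_{\mathrm{life}} \sim t_{0}\, \exp\!\left(\frac{\Delta F}{k_{B} T}\right)
```

The exponential dependence on the barrier height is what makes the random transitions between the "dilute" and "condensed" states rare events at low temperature.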
86. Two Levels of Integrated Information Theory: From Autonomous Systems to Conscious Life.
- Author
-
Ruan, Zenan and Li, Hengwei
- Subjects
INFORMATION theory ,PSEUDOSCIENCE ,CONSCIOUSNESS ,CRITICISM - Abstract
Integrated Information Theory (IIT) is one of the most prominent candidates for a theory of consciousness, although it has received much criticism as it tries to live up to expectations. Based on the relevance of three issues generalized from the development of IIT, we summarize its main ideas into two levels. At the second level, IIT claims to be strictly anchoring consciousness, but the first level on which it is based is more about autonomous systems, or systems that have reached some other critical complexity. In this paper, we argue that the clear gap between these two levels of explanation in IIT has led to these criticisms and that its panpsychist tendency plays a crucial role in this. We suggest that these problems are far from making IIT a "pseudoscience", and that by adding the necessary elements when the first level is combined with the second, IIT can genuinely move toward an appropriate theory of consciousness that provides necessary and sufficient interpretations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
87. Testing the Pauli Exclusion Principle across the Periodic Table with the VIP-3 Experiment.
- Author
-
Manti, Simone, Bazzi, Massimiliano, Bortolotti, Nicola, Capoccia, Cesidio, Cargnelli, Michael, Clozza, Alberto, De Paolis, Luca, Fiorini, Carlo, Guaraldo, Carlo, Iliescu, Mihail, Laubenstein, Matthias, Marton, Johann, Napolitano, Fabrizio, Piscicchia, Kristian, Porcelli, Alessio, Scordo, Alessandro, Sgaramella, Francesco, Sirghi, Diana Laura, Sirghi, Florin, and Doce, Oton Vazquez
- Subjects
SILICON detectors ,ATOMIC transitions ,QUANTUM mechanics ,QUANTUM states ,ZIRCONIUM - Abstract
The Pauli exclusion principle (PEP), a cornerstone of quantum mechanics and of science as a whole, states that two fermions in a system cannot simultaneously occupy the same quantum state. Several experimental tests have been performed to place increasingly stringent bounds on the validity of the PEP. Among these, the series of VIP experiments, performed at the Gran Sasso Underground National Laboratory of INFN, is searching for PEP-violating atomic X-ray transitions in copper. In this paper, the upgraded VIP-3 setup is described, designed to extend these investigations to higher-Z elements such as zirconium, silver, palladium, and tin. We detail the enhanced design of this setup, including the implementation of cutting-edge, 1 mm thick silicon drift detectors, which significantly improve the measurement sensitivity at higher energies. Additionally, we present calculations of the expected PEP-violating energy shifts in the characteristic lines of these elements, performed from first principles using the multi-configurational Dirac–Fock method. The realization of VIP-3 will contribute to ongoing research into PEP violation for different elements, offering new insights and directions for future studies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
88. Construction of Optimal Two-Dimensional Optical Orthogonal Codes with at Most One Pulse per Wavelength.
- Author
-
Shao, Minfeng and Niu, Xianhua
- Subjects
CODE division multiple access ,ORTHOGONAL codes - Abstract
Two-dimensional optical orthogonal codes have important applications in optical code division multiple access networks. In this paper, a generic construction of two-dimensional optical orthogonal codes with at most one pulse per wavelength (AM-OPPW 2D OOCs) is proposed. As a result, some optimal AM-OPPW 2D OOCs with new parameters can be yielded. The new AM-OPPW 2D OOC may support more subscribers and heavier asynchronous traffic compared with known constructions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
89. The Evaluation of Climate Change Competitiveness via DEA Models and Shannon Entropy: EU Regions.
- Author
-
Karman, Agnieszka and Banaś, Jarosław
- Subjects
UNCERTAINTY (Information theory) ,DATA envelopment analysis ,CLIMATE change ,ENTROPY - Abstract
The purpose of this paper is to assess the efficiency of climate change competitiveness via a case study on EU regions by using the data envelopment analysis (DEA) model and Shannon entropy. First, on the same premise as similar composite indicators, we develop a DEA model to assess the relative performance of the regions in climate change competitiveness. Then, we extend our calculations with a DEA-like model and Shannon entropy to derive global estimates of a new competitiveness index by using common weights. Results show that the proposed DEA-Entropy model enables the construction of a regional climate change competitiveness index among all regions via a set of common weights. The proposed model's common weight structure demonstrates more discriminative power compared to the weights obtained through pure DEA or DEA-like methods. In order to validate the proposed DEA-Entropy model, it was applied to 120 EU regions. The results are meaningful for the regions to improve their competitiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
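The Shannon-entropy side of the DEA-Entropy construction above can be sketched with the standard entropy-weighting scheme for composite indicators: indicators whose values barely vary across regions (high entropy) receive low weight. This is an illustrative sketch of the generic method, not the paper's exact DEA-Entropy formulation, and all names are ours:

```python
import numpy as np

def entropy_weights(X):
    """Shannon-entropy objective weights for an (n_regions x m_indicators)
    matrix of non-negative indicator scores."""
    P = X / X.sum(axis=0, keepdims=True)      # column-wise shares
    k = 1.0 / np.log(X.shape[0])              # normalizing constant
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -k * np.sum(P * logs, axis=0)         # entropy per indicator
    d = 1.0 - e                               # degree of divergence
    return d / d.sum()                        # common weights, sum to 1
```

A composite score per region is then the weighted sum of its normalized indicators; in the paper, this common-weight structure is what gives the index more discriminative power than pure DEA scores.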
90. Inferring Dealer Networks in the Foreign Exchange Market Using Conditional Transfer Entropy: Analysis of a Central Bank Announcement.
- Author
-
Janczewski, Aleksander, Anagnostou, Ioannis, and Kandhai, Drona
- Subjects
FOREIGN exchange market ,INFORMATION theory ,BID price ,FINANCIAL markets ,INFORMATION networks - Abstract
The foreign exchange (FX) market has evolved into a complex system where locally generated information percolates through the dealer network via high-frequency interactions. Information related to major events, such as economic announcements, spreads rapidly through this network, potentially inducing volatility, liquidity disruptions, and contagion effects across financial markets. Yet, research on the mechanics of information flows in the FX market is limited. In this paper, we introduce a novel approach employing conditional transfer entropy to construct networks of information flows. Leveraging a unique, high-resolution dataset of bid and ask prices, we investigate the impact of an announcement by the European Central Bank on the information transfer within the market. During the announcement, we identify key dealers as information sources, conduits, and sinks, and, through comparison to a baseline, uncover shifts in the network topology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
91. Language Statistics at Different Spatial, Temporal, and Grammatical Scales.
- Author
-
Sánchez-Puig, Fernanda, Lozano-Aranda, Rogelio, Pérez-Méndez, Dante, Colman, Ewan, Morales-Guzmán, Alfredo J., Rivera Torres, Pedro Juan, Pineda, Carlos, and Gershenson, Carlos
- Subjects
LANGUAGE models ,UNIVERSAL language ,LINGUISTIC complexity ,SPANISH language ,ENGLISH language - Abstract
In recent decades, the field of statistical linguistics has made significant strides, which have been fueled by the availability of data. Leveraging Twitter data, this paper explores the English and Spanish languages, investigating their rank diversity across different scales: temporal intervals (ranging from 3 to 96 h), spatial radii (spanning 3 km to over 3000 km), and grammatical word ngrams (ranging from 1-grams to 5-grams). The analysis focuses on word ngrams, examining a time period of 1 year (2014) and eight different countries. Our findings highlight the relevance of all three scales with the most substantial changes observed at the grammatical level. Specifically, at the monogram level, rank diversity curves exhibit remarkable similarity across languages, countries, and temporal or spatial scales. However, as the grammatical scale expands, variations in rank diversity become more pronounced and influenced by temporal, spatial, linguistic, and national factors. Additionally, we investigate the statistical characteristics of Twitter-specific tokens, including emojis, hashtags, and user mentions, revealing a sigmoid pattern in their rank diversity function. These insights contribute to quantifying universal language statistics while also identifying potential sources of variation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
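Rank diversity, the statistic at the core of the abstract above, measures how many distinct words occupy a given rank k across time intervals. A minimal sketch, assuming whitespace-tokenised interval texts and the normalisation d(k) = |distinct words seen at rank k| / |intervals|; tie-breaking by first appearance is an implementation detail:

```python
from collections import Counter

def rank_diversity(interval_texts, max_rank=10):
    """d(k) = (number of distinct words observed at rank k across all
    time intervals) / (number of intervals), for k = 1..max_rank."""
    occupants = {k: set() for k in range(1, max_rank + 1)}
    for text in interval_texts:
        ranked = Counter(text.split()).most_common(max_rank)
        for k, (word, _) in enumerate(ranked, start=1):
            occupants[k].add(word)
    return {k: len(words) / len(interval_texts)
            for k, words in occupants.items()}
```

Low d(k) means the same words dominate rank k in every interval; d(k) grows toward 1 as the occupancy of that rank churns.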
92. A Broken Duet: Multistable Dynamics in Dyadic Interactions.
- Author
-
Medrano, Johan and Sajid, Noor
- Subjects
LORENZ equations ,SPEECH perception ,NATIVE language ,NARRATIVES - Abstract
Misunderstandings in dyadic interactions often persist despite our best efforts, particularly between native and non-native speakers, resembling a broken duet that refuses to harmonise. This paper delves into the computational mechanisms underpinning these misunderstandings through the lens of the broken Lorenz system—a continuous dynamical model. By manipulating a specific parameter regime, we induce bistability within the Lorenz equations, thereby confining trajectories to distinct attractors based on initial conditions. This mirrors the persistence of divergent interpretations that often result in misunderstandings. Our simulations reveal that differing prior beliefs between interlocutors result in misaligned generative models, leading to stable yet divergent states of understanding when exposed to the same percept. Specifically, native speakers equipped with precise (i.e., overconfident) priors expect inputs to align closely with their internal models, thus struggling with unexpected variations. Conversely, non-native speakers with imprecise (i.e., less confident) priors exhibit a greater capacity to adjust and accommodate unforeseen inputs. Our results underscore the important role of generative models in facilitating mutual understanding (i.e., establishing a shared narrative) and highlight the necessity of accounting for multistable dynamics in dyadic interactions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
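The paper's "broken Lorenz system" is a specific modification inducing bistability; as an assumed stand-in illustrating the same phenomenon, the classical Lorenz equations with rho below the homoclinic value (rho = 10 here) have two coexisting stable fixed points C±, and which one a trajectory settles on depends only on the initial condition:

```python
import numpy as np

SIGMA, RHO, BETA, DT = 10.0, 10.0, 8.0 / 3.0, 0.005

def lorenz_step(state):
    """One forward-Euler step of the classical Lorenz equations."""
    x, y, z = state
    return state + DT * np.array([SIGMA * (y - x),
                                  x * (RHO - z) - y,
                                  x * y - BETA * z])

def settle(state, steps=20000):
    """Integrate long enough for the trajectory to reach its attractor."""
    for _ in range(steps):
        state = lorenz_step(state)
    return state
```

For rho = 10 the fixed points are C± = (±sqrt(beta(rho-1)), ±sqrt(beta(rho-1)), rho-1); mirrored initial conditions settle on opposite attractors — the analogue of two interlocutors reaching stable yet divergent interpretations of the same percept.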
93. Estimation of the Impulse Response of the AWGN Channel with ISI within an Iterative Equalization and Decoding System That Uses LDPC Codes.
- Author
-
Cuc, Adriana-Maria, Morgoș, Florin Lucian, Grava, Adriana-Marcela, and Grava, Cristian
- Subjects
ADDITIVE white Gaussian noise ,IMPULSE response ,BIT error rate ,INTERSYMBOL interference ,LOW density parity check codes - Abstract
In this paper, new schemes are proposed for the estimation of the additive white Gaussian noise (AWGN) channel with intersymbol interference (ISI) in an iterative equalization and decoding system using low-density parity-check (LDPC) codes. The article explores the use of the least-squares algorithm in three scenarios: first, the impulse response h of the AWGN channel was estimated from a training sequence alone; second, it was estimated from the training sequence and then re-estimated once using the sequence estimated from the output of the LDPC decoder; and third, it was estimated from the training sequence and re-estimated twice using the decoder output. The performance in these three scenarios was compared with the case in which a perfect estimate of the channel impulse response is assumed, focusing on how the bit error rate (BER) changes with the signal-to-noise ratio. The BER performance comes closest to the perfect-estimate case when the estimate is obtained from the training sequence and then re-estimated twice from the LDPC decoder output. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
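The least-squares channel estimation step on which the abstract's three scenarios are built can be sketched as follows; the tap count L, array layout, and function name are assumptions, and the iterative re-estimation from LDPC decoder output is not reproduced:

```python
import numpy as np

def ls_channel_estimate(train, received, L):
    """Least-squares estimate of an L-tap impulse response h from a known
    training sequence, modelling received[n] = sum_k h[k]*train[n-k] + noise."""
    n = len(received)
    A = np.zeros((n, L))
    for k in range(L):                 # shifted copies of the training sequence
        A[k:, k] = train[:n - k]
    h_hat, *_ = np.linalg.lstsq(A, received, rcond=None)
    return h_hat
```

In the re-estimation scenarios, the sequence estimated at the LDPC decoder output would take the place of `train` for a second (and third) pass.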
94. A Federated Adversarial Fault Diagnosis Method Driven by Fault Information Discrepancy.
- Author
-
Sun, Jiechen, Zhou, Funa, Chen, Jie, Wang, Chaoge, Hu, Xiong, and Wang, Tianzhen
- Subjects
FEDERATED learning ,FAULT diagnosis ,DIAGNOSIS methods - Abstract
Federated learning (FL) facilitates the collaborative optimization of fault diagnosis models across multiple clients. However, the performance of the global model in the federated center is contingent upon the effectiveness of the local models. Low-quality local models participating in the federation can result in negative transfer within the FL framework. Traditional regularization-based FL methods can partially mitigate the performance disparity between local models. Nevertheless, they do not adequately address the inconsistency in model optimization directions caused by variations in fault information distribution under different working conditions, thereby diminishing the applicability of the global model. This paper proposes a federated adversarial fault diagnosis method driven by fault information discrepancy (FedAdv_ID) to address the challenge of constructing an optimal global model under multiple working conditions. A consistency evaluation metric is introduced to quantify the discrepancy between local and global average fault information, guiding the federated adversarial training mechanism between clients and the federated center to minimize feature discrepancy across clients. In addition, an optimal aggregation strategy is developed based on the information discrepancies among different clients, which adaptively learns the aggregation weights and model parameters needed to reduce global feature discrepancy, ultimately yielding an optimal global model. Experiments conducted on benchmark and real-world motor-bearing datasets demonstrate that FedAdv_ID achieves a fault diagnosis accuracy of 93.09% under various motor operating conditions, outperforming model regularization-based FL methods by 17.89%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
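FedAdv_ID learns its aggregation weights adaptively; as a loose, assumed stand-in, a discrepancy-weighted average (softmax of the negative discrepancy) shows the general shape of such an aggregation rule, where clients far from the global average fault information are down-weighted:

```python
import numpy as np

def aggregate(client_params, discrepancies):
    """Discrepancy-weighted model aggregation: clients whose fault-information
    discrepancy from the global average is small receive larger weights."""
    w = np.exp(-np.asarray(discrepancies, dtype=float))
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))
```

With equal discrepancies this reduces to plain FedAvg-style averaging.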
95. Optimizing Distributions for Associated Entropic Vectors via Generative Convolutional Neural Networks.
- Author
-
Zhang, Shuhao, Liu, Nan, Kang, Wei, and Permuter, Haim
- Subjects
CONVOLUTIONAL neural networks ,RANDOM variables ,VECTOR spaces ,NEURAL codes ,ALGORITHMS - Abstract
The complete characterization of the almost-entropic region would yield rate regions for network coding problems; however, this characterization remains a difficult open problem. In this paper, we propose a novel algorithm to determine whether an arbitrary vector in the entropy space is entropic, by parameterizing and generating probability mass functions with neural networks. Given a target vector, the algorithm minimizes the normalized distance between the target vector and the generated entropic vector by training the neural network, thereby revealing the entropic nature of the target vector and recovering the underlying distribution. The proposed algorithm was further implemented with convolutional neural networks, which naturally fit the structure of joint probability mass functions and allow the algorithm to be accelerated on GPUs. Empirical results demonstrate improved normalized distances and convergence performance compared with prior works. We also conducted optimizations of the Ingleton score and the Ingleton violation index, obtaining a new lower bound for the latter. An inner bound of the almost-entropic region with four random variables was constructed with the proposed method, presenting the current best inner bound as measured by volume ratio. The potential of a computer-aided approach to constructing achievable schemes for network coding problems using the proposed method is discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
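A vector in entropy space collects the joint entropies H(X_S) of every nonempty subset S of the random variables; deciding whether a target vector is entropic amounts to searching for a pmf whose entropy vector matches it. Computing the entropy vector of a candidate joint pmf — the quantity the generator network is trained to match — can be sketched as:

```python
import numpy as np
from itertools import combinations

def entropy_vector(pmf):
    """Entropy vector of a joint pmf (an n-dimensional array): the joint
    entropy H(X_S) in bits for every nonempty subset S of the variables."""
    n = pmf.ndim
    vec = {}
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            marginal_axes = tuple(i for i in range(n) if i not in S)
            p = pmf.sum(axis=marginal_axes).ravel()   # marginal over S
            p = p[p > 0]
            vec[S] = float(-(p * np.log2(p)).sum())
    return vec
```

For n variables the result has 2^n - 1 components, one per nonempty subset.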
96. Precise Error Performance of BPSK Modulated Coherent Terahertz Wireless LOS Links with Pointing Errors.
- Author
-
Niu, Mingbo, Ji, Ruihang, Wang, Hucheng, and Liu, Huan
- Subjects
BIT error rate ,TELECOMMUNICATION systems ,ERROR probability ,WIRELESS communications ,ENERGY consumption ,TERAHERTZ technology - Abstract
One of the key advantages of terahertz (THz) communication is its potential for energy efficiency, making it an attractive option for green communication systems. Coherent THz transmission technology has recently been explored in the literature, yet few error performance results exist for wireless links employing it. In this paper, we develop a comprehensive terrestrial channel model for wireless line-of-sight communication at THz frequencies. The performance of coherent THz links is analyzed and found to be notably affected by two factors that arise between the THz transmitter and receiver in terrestrial links: atmospheric turbulence and pointing errors. Exact and asymptotic expressions for the bit error rate and the outage probability are derived for binary phase-shift keying coherent THz systems over log-normal and Gamma–Gamma turbulence channels, and an asymptotic outage probability analysis is also performed. The presented results offer a precise estimation of coherent THz transmission performance and its link budget. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
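The conditional building block behind such derivations is the BPSK error rate over pure AWGN, Pb = Q(sqrt(2 Eb/N0)) = erfc(sqrt(Eb/N0))/2; the turbulence-channel results average this conditional BER over the log-normal or Gamma–Gamma fading distribution (that averaging is not reproduced here):

```python
import math

def bpsk_ber_awgn(ebno_db):
    """Conditional BPSK bit error rate over pure AWGN:
    Pb = Q(sqrt(2*Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))."""
    ebno = 10.0 ** (ebno_db / 10.0)    # dB -> linear Eb/N0
    return 0.5 * math.erfc(math.sqrt(ebno))
```

At 0 dB this gives Pb ≈ 0.0786, and the BER falls steeply as Eb/N0 grows.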
97. Variational Bayesian Approximation (VBA): Implementation and Comparison of Different Optimization Algorithms.
- Author
-
Fallah Mortezanejad, Seyedeh Azadeh and Mohammad-Djafari, Ali
- Subjects
OPTIMIZATION algorithms ,PRACTICAL reason ,MARKOV chain Monte Carlo ,ALGORITHMS ,PERFORMANCE theory ,EXPONENTIAL families (Statistics) - Abstract
In any Bayesian computation, the first step is to derive the joint distribution of all the unknown variables given the observed data. Then, we have to do the computations. There are four general methods for performing them: joint MAP optimization; posterior expectation computations, which require integration methods; sampling-based methods, such as MCMC, slice sampling, nested sampling, etc., which generate samples and numerically compute expectations; and, finally, Variational Bayesian Approximation (VBA). In this last method, which is the focus of this paper, the objective is to approximate the joint posterior with a simpler distribution that allows for analytical computations. The main tool in VBA is the Kullback–Leibler Divergence (KLD), used as the criterion for obtaining that approximation. Even if, theoretically, this can be conducted formally, for practical reasons we consider the case where the joint distribution is in the exponential family, and so is its approximation. In this case, the KLD becomes a function of the usual or the natural parameters of the exponential family, and the problem becomes one of parametric optimization. Thus, we compare four optimization algorithms: general alternate functional optimization; parametric gradient-based optimization with the usual parameters; parametric gradient-based optimization with the natural parameters; and the natural gradient algorithm. We then study their relative performance on three examples to demonstrate the implementation of each algorithm and its efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
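As a toy instance of the parametric gradient-based option, minimising the KLD between a Gaussian q = N(mq, sq²) and a Gaussian target p = N(mp, sp²) by gradient descent on (mq, log sq) uses the closed form KL(q||p) = log(sp/sq) + (sq² + (mq-mp)²)/(2 sp²) - 1/2. A genuine VBA application would replace the Gaussian target with an intractable posterior; names and step sizes here are illustrative assumptions:

```python
import math

def kl_gauss(mq, sq, mp, sp):
    """Closed-form KL(q || p) between 1-D Gaussians q = N(mq, sq^2), p = N(mp, sp^2)."""
    return math.log(sp / sq) + (sq**2 + (mq - mp)**2) / (2 * sp**2) - 0.5

def fit_vba(mp, sp, lr=0.1, steps=500):
    """Gradient descent on KL(q || p) w.r.t. the usual parameters (mq, log sq)."""
    mq, log_sq = 0.0, 0.0
    for _ in range(steps):
        sq = math.exp(log_sq)
        g_m = (mq - mp) / sp**2            # dKL/dmq
        g_ls = sq**2 / sp**2 - 1.0         # dKL/d(log sq)
        mq -= lr * g_m
        log_sq -= lr * g_ls
    return mq, math.exp(log_sq)
```

Because the target is itself Gaussian, the optimum recovers it exactly, which makes the descent easy to verify.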
98. Sample Entropy Computation on Signals with Missing Values.
- Author
-
Manis, George, Platakis, Dimitrios, and Sassi, Roberto
- Subjects
VALUE engineering ,MISSING data (Statistics) ,TIME series analysis ,INTERPOLATION algorithms ,ENTROPY - Abstract
Sample entropy embeds time series into m-dimensional spaces and estimates entropy based on the distances between points in these spaces. However, when samples must be considered missing or invalid, defining distance in the embedding space becomes problematic. Preprocessing techniques, such as deletion or interpolation, can be employed as a solution, producing time series without missing or invalid values: deletion simply discards missing values, while interpolation replaces them with approximations based on neighboring points. This paper proposes a novel approach for the computation of sample entropy when values are considered missing or invalid. The proposed algorithm accommodates such points directly in the m-dimensional space and handles them there. A theoretical and experimental comparison of the proposed algorithm with deletion and interpolation demonstrates several advantages over these two approaches. Notably, the deviation of the expected sample entropy value is consistently the lowest for the proposed methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
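The baseline the paper modifies is standard sample entropy, SampEn(m, r) = -ln(A/B). The sketch below implements that baseline (with r in absolute units; a common convention is r = 0.2 × the series standard deviation) and does not reproduce the paper's handling of missing points in the embedding space:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A / B): B counts template pairs of length m within
    Chebyshev tolerance r, A counts the same pairs extended to length m + 1;
    both use the same N - m templates."""
    x = np.asarray(x, dtype=float)
    n_templ = len(x) - m

    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(n_templ)])
        c = 0
        for i in range(n_templ - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += int(np.sum(d <= r))
        return c

    return -np.log(count(m + 1) / count(m))
```

A perfectly periodic signal gives SampEn near zero (every match extends), while a random signal gives a clearly positive value.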
99. Singular-Value-Decomposition-Based Matrix Surgery.
- Author
-
Ghafuri, Jehan and Jassim, Sabah
- Subjects
SINGULAR value decomposition ,MATRIX inversion ,DEEP learning ,POINT cloud ,IMAGE analysis - Abstract
This paper is motivated by the need to stabilise the impact of deep learning (DL) training for medical image analysis on the conditioning of convolution filters, in relation to model overfitting and robustness. We present a simple strategy to reduce the condition numbers of square matrices and investigate its effect on the spatial distributions of point clouds of well- and ill-conditioned matrices. For a square matrix, the SVD surgery strategy works by (1) computing its singular value decomposition (SVD), (2) changing a few of the smaller singular values relative to the largest one, and (3) reconstructing the matrix by reverse SVD. Applying SVD surgery to CNN convolution filters during training acts as spectral regularisation of the DL model without requiring the learning of extra parameters. The fact that the closer a matrix is to the set of non-invertible matrices, the higher its condition number, suggests that the spatial distributions of square matrices and of their inverses are correlated with their condition number distributions. We examine this assertion empirically by showing that applying various versions of SVD surgery to point clouds of matrices brings their persistence diagrams (PDs) closer to those of the point clouds of their inverses. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
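The three SVD-surgery steps in the abstract can be sketched directly. Capping the condition number by flooring the small singular values at s_max / kappa_max is one assumed instantiation of "changing a few of the smaller singular values relative to the largest one"; the parameter name `kappa_max` is an illustrative choice:

```python
import numpy as np

def svd_surgery(M, kappa_max=100.0):
    """Cap the condition number of a square matrix: decompose, lift every
    singular value below s_max / kappa_max up to that floor, reconstruct."""
    U, s, Vt = np.linalg.svd(M)
    s_capped = np.maximum(s, s[0] / kappa_max)   # s[0] is the largest singular value
    return U @ np.diag(s_capped) @ Vt
```

The reconstructed matrix has condition number at most kappa_max while leaving the dominant singular directions untouched.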
100. A History of Channel Coding in Aeronautical Mobile Telemetry and Deep-Space Telemetry.
- Author
-
Rice, Michael
- Subjects
LOW density parity check codes ,BLOCK codes ,TURBO codes ,CHANNEL coding ,TELEMETRY - Abstract
This paper presents a history of the development of channel codes in deep-space telemetry and aeronautical mobile telemetry. The history emphasizes "firsts" and other remarkable achievements. Because coding was used first in deep-space telemetry, the history begins with the codes used for Mariner and Pioneer. History continues with the international standard for concatenated coding developed for the Voyager program and the remarkable role channel coding played in rescuing the nearly-doomed Galileo mission. The history culminates with the adoption of turbo codes and LDPC codes and the programs that relied on them. The history of coding in aeronautical mobile telemetry is characterized by a number of "near misses" as channel codes were explored, sometimes tested, and rarely adopted. Aeronautical mobile telemetry is characterized by bandwidth constraints that make use of low-rate codes and their accompanying bandwidth expansion, an unattractive option. The emergence of a family of high-rate LDPC codes coupled with a bandwidth-efficient modulation has nudged the aeronautical mobile telemetry community to adopt the codes in their standards. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF