72,694 results
Search Results
2. Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits.
- Author
-
Limaye, Nutan, Srinivasan, Srikanth, and Tavenas, Sébastien
- Subjects
ALGEBRA, POLYNOMIALS, CIRCUIT complexity, ALGORITHMS, DIRECTED acyclic graphs, LOGIC circuits
- Abstract
An Algebraic Circuit for a multivariate polynomial P is a computational model for constructing the polynomial P using only additions and multiplications. It is a syntactic model of computation, as opposed to the Boolean Circuit model, and hence lower bounds for this model are widely expected to be easier to prove than lower bounds for Boolean circuits. Despite this, we do not have superpolynomial lower bounds against general algebraic circuits of depth 3 (except over constant-sized finite fields) and depth 4 (over any field other than F₂), while constant-depth Boolean circuit lower bounds have been known since the early 1980s. In this paper, we prove the first superpolynomial lower bounds against algebraic circuits of all constant depths over all fields of characteristic 0. We also observe that our superpolynomial lower bound for constant-depth circuits implies the first deterministic sub-exponential time algorithm for solving the Polynomial Identity Testing (PIT) problem for all small-depth circuits, using the known connection between algebraic hardness and randomness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
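The hardness-to-randomness connection mentioned in this abstract derandomizes the classical randomized identity test. As background (this is the textbook Schwartz-Zippel test, not the paper's construction), the sketch below checks a black-box polynomial for identity to zero by evaluating it at random points of a large prime field; the field size and trial count are illustrative choices.

```python
import random

def probably_zero(poly, num_vars, field_prime=2**31 - 1, trials=20):
    """Randomized PIT via the Schwartz-Zippel lemma: a nonzero polynomial
    of total degree d vanishes at a uniformly random point of S^n with
    probability at most d/|S|, so repeated random evaluations expose any
    nonzero polynomial with high probability.  `poly` is a black box
    mapping a tuple of integers to an integer."""
    for _ in range(trials):
        point = tuple(random.randrange(field_prime) for _ in range(num_vars))
        if poly(point) % field_prime != 0:
            return False            # witness found: definitely nonzero
    return True                     # identically zero with high probability

# (x + y)^2 - x^2 - 2xy - y^2 is identically zero; xy + 1 is not.
zero_poly = lambda p: (p[0] + p[1]) ** 2 - p[0] ** 2 - 2 * p[0] * p[1] - p[1] ** 2
nonzero_poly = lambda p: p[0] * p[1] + 1
```

The paper's derandomization replaces the random points with a deterministically constructed hitting set, which is what makes the lower bound algorithmically useful.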
3. The Space Complexity of Consensus from Swap.
- Author
-
Ovens, Sean
- Subjects
ALGORITHMS, GENERALIZATION
- Abstract
Nearly thirty years ago, it was shown that Ω(√n) read/write registers are needed to solve randomized wait-free consensus among n processes. This lower bound was improved to n registers in 2018, which exactly matches known algorithms. The Ω(√n) space complexity lower bound actually applies to a class of objects called historyless objects, which includes registers, test-and-set objects, and readable swap objects. However, every known n-process obstruction-free consensus algorithm from historyless objects uses Ω(n) objects. In this paper, we give the first Ω(n) space complexity lower bounds on consensus algorithms for two kinds of historyless objects. First, we show that any obstruction-free consensus algorithm from swap objects uses at least n-1 objects. More generally, we prove that any obstruction-free k-set agreement algorithm from swap objects uses at least ⌈n/k⌉ - 1 objects. The k-set agreement problem is a generalization of consensus in which processes agree on no more than k different output values. This is the first non-constant lower bound on the space complexity of solving k-set agreement with swap objects when k > 1. We also present an obstruction-free k-set agreement algorithm from n-k swap objects, which exactly matches our lower bound when k = 1. Second, we show that any obstruction-free binary consensus algorithm from readable swap objects with domain size b uses at least (n-2)/(3b+1) objects. When b is a constant, this asymptotically matches the best known obstruction-free consensus algorithms from readable swap objects with unbounded domains. Since any historyless object can be simulated by a readable swap object with the same domain, our results imply that any obstruction-free consensus algorithm from historyless objects with domain size b uses at least (n-2)/(3b+1) objects. For b = 2, we show a slightly better lower bound of n-2. There is an obstruction-free binary consensus algorithm using 2n-1 readable swap objects with domain size 2, asymptotically matching our lower bound. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
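To make the object model in this abstract concrete, here is a toy sketch (mine, not the paper's) of a readable swap object, together with the folklore two-process consensus protocol it enables: each process swaps in its proposal and decides the previous value if one exists, otherwise its own. This simple protocol works only for two processes; the paper's bounds concern the much harder n-process obstruction-free case.

```python
class ReadableSwap:
    """A readable swap object: swap(v) atomically writes v and returns the
    previous value; read() returns the current value without changing it.
    It is historyless: the state depends only on the last nontrivial op."""
    def __init__(self, initial=None):
        self._value = initial

    def swap(self, v):
        old, self._value = self._value, v
        return old

    def read(self):
        return self._value


def propose(obj, my_value):
    """Folklore 2-process consensus from a single swap object initialised
    to None: the first swapper wins, the second adopts the winner's value."""
    prev = obj.swap(my_value)
    return my_value if prev is None else prev
```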
4. Digitalized Control Algorithm of Bridgeless Totem-Pole PFC with a Simple Control Structure Based on the Phase Angle.
- Author
-
Lee, Gi-Young, Park, Hae-Chan, Ji, Min-Woo, and Kim, Rae-Young
- Subjects
ELECTRIC current rectifiers, ELECTRONIC paper, PHASE-locked loops, ALGORITHMS, ANGLES, VOLTAGE
- Abstract
Compared to the conventional boost power factor correction (PFC) converter, a totem-pole bridgeless PFC has high efficiency because it does not have an input diode rectifier stage, but a current spike may occur when the polarity of the grid voltage changes. This paper proposes a digital control algorithm for bridgeless totem-pole PFC with a simple control structure based on the phase angle of grid voltage. The proposed algorithm has a PI-based double-loop control structure and performs DC-link voltage and input inductor current control. Rectifying switches operate based on the proposed rectification algorithm using phase angle information calculated through a single-phase phase-locked loop (PLL) to prevent current spikes. The feed-forward duty ratio value is calculated according to the polarity of the grid voltage and added to the double-loop controller to perform appropriate power factor control. The performance and feasibility of the proposed control algorithm are verified through a 3 kW hardware prototype. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
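The polarity-dependent feed-forward term described in this abstract can be sketched in a few lines. This is not the paper's controller: it is an illustrative implementation of the standard boost relation d = 1 - |v_in|/V_dc with the polarity taken from a PLL phase angle, and the 311 V / 400 V figures in the usage below are made-up example values.

```python
import math

def feedforward_duty(theta, v_peak, v_dc):
    """Polarity-aware feed-forward duty ratio for a boost-type totem-pole
    PFC: the PLL phase angle theta gives the instantaneous grid voltage
    and its polarity; the boost relation d = 1 - |v_in|/V_dc gives the
    feed-forward term added to the double-loop controller's output."""
    v_in = v_peak * math.sin(theta)
    duty = 1.0 - abs(v_in) / v_dc
    return max(0.0, min(1.0, duty)), v_in >= 0.0  # (duty, positive half-cycle?)
```

The polarity flag is what drives the rectifying switches; gating it on the PLL angle rather than a raw zero-crossing detector is what suppresses the current spikes at polarity changes.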
5. Special issue "Discrete optimization: Theory, algorithms and new applications".
- Author
-
Werner, Frank
- Subjects
MATHEMATICAL optimization, METAHEURISTIC algorithms, ONLINE algorithms, LINEAR matrix inequalities, ALGORITHMS, ROBUST stability analysis, NONLINEAR integral equations
- Abstract
This document is an editorial for a special issue of the journal AIMS Mathematics on the topic of discrete optimization. The issue includes 21 papers covering a range of subjects, including molecular trees, network systems, variational inequality problems, scheduling, image restoration, spectral clustering, integral equations, convex functions, graph products, optimization algorithms, air quality prediction, humanitarian planning, inertial methods, neural networks, transportation problems, emotion identification, fixed-point problems, structural engineering design, single machine scheduling, and ensemble learning. The papers present new theoretical results, algorithms, and applications in these areas. The guest editor expresses gratitude to the journal staff and reviewers and hopes that readers will find inspiration for their own research. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
6. Taming Algorithmic Priority Inversion in Mission-Critical Perception Pipelines.
- Author
-
Liu, Shengzhong, Yao, Shuochao, Fu, Xinzhe, Tabish, Rohan, Yu, Simon, Bansal, Ayoosh, Yun, Heechul, Sha, Lui, and Abdelzaher, Tarek
- Subjects
ALGORITHMS, SYSTEMS design, CYBER physical systems, COMPUTER scheduling, ARTIFICIAL intelligence, ARTIFICIAL neural networks, FIRST in, first out (Queuing theory)
- Abstract
The paper discusses algorithmic priority inversion in mission-critical machine inference pipelines used in modern neural-network-based perception subsystems and describes a solution to mitigate its effect. In general, priority inversion occurs in computing systems when computations that are "less important" are performed together with or ahead of those that are "more important." Significant priority inversion occurs in existing machine inference pipelines when they do not differentiate between critical and less critical data. We describe a framework to resolve this problem and demonstrate that it improves a perception system's ability to react to critical inputs, while at the same time reducing platform cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
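The core remedy this abstract describes — dispatching critical inputs ahead of less critical ones instead of FIFO — can be illustrated with a toy scheduler (my sketch, not the authors' framework). Lower numbers mean more critical, and arrival order breaks ties so equally critical frames stay FIFO.

```python
import heapq

def schedule(frames):
    """Toy criticality-aware dispatcher.  `frames` is a list of
    (criticality, name) pairs in arrival order; instead of processing
    FIFO, pop the most critical pending frame first, breaking ties by
    arrival order so equally critical frames keep FIFO order."""
    heap = []
    for arrival, (criticality, name) in enumerate(frames):
        heapq.heappush(heap, (criticality, arrival, name))
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order
```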
7. Determining the Moho topography using an improved inversion algorithm: a case study from the South China Sea.
- Author
-
Zhang, Hui, Yu, Hangtao, Xu, Chuang, Li, Rui, Bie, Lu, He, Qingyin, Liu, Yiqi, Lu, Jinsong, Xiao, Yinan, Lyu, Yang, Eldosouky, Ahmed M., and Loureiro, Afonso
- Subjects
MOHOROVICIC discontinuity, OPTIMIZATION algorithms, TOPOGRAPHY, ALGORITHMS
- Abstract
The Parker-Oldenburg method, as a classical frequency-domain algorithm, has been widely used in Moho topographic inversion. The method has two indispensable hyperparameters, which are the Moho density contrast and the average Moho depth. Accurate hyperparameters are important prerequisites for inversion of fine Moho topography. However, limited by the nonlinear terms, the hyperparameters estimated by previous methods have obvious deviations. For this reason, this paper proposes a new method to improve the existing Parker-Oldenburg method by taking advantage of the invasive weed optimization algorithm in estimating hyperparameters. The synthetic test results of the new method show that, compared with the trial-and-error method and the linear regression method, the new method estimates the hyperparameters more accurately and is computationally efficient, which lays the foundation for the inversion of more accurate Moho topography. In practice, the method is applied to the Moho topographic inversion in the South China Sea. With the constraints of available seismic data, the crust-mantle density contrast and the average Moho depth in the South China Sea are determined to be 0.535 g/cm³ and 21.63 km, respectively, and the Moho topography of the South China Sea is inverted on this basis. The results show that the Moho depth in the study area ranges from 5.7 km to 32.3 km, with fairly obvious undulations. The shallowest part of the Moho topography is mainly located in the southern part of the Southwestern sub-basin and the southern part of the Manila Trench, with a depth of about 6 km. Compared with the CRUST 1.0 model and the model calculated by the improved Bott's method, the Moho model in this paper has a smaller RMS difference at the seismic control points, which demonstrates that the method has some advantages in Moho topographic inversion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. A fully-automated paper ECG digitisation algorithm using deep learning.
- Author
-
Wu, Huiyi, Patel, Kiran Haresh Kumar, Li, Xinyang, Zhang, Bowen, Galazis, Christoforos, Bajaj, Nikesh, Sau, Arunashis, Shi, Xili, Sun, Lin, Tao, Yanda, Al-Qaysi, Harith, Tarusan, Lawrence, Yasmin, Najira, Grewal, Natasha, Kapoor, Gaurika, Waks, Jonathan W., Kramer, Daniel B., Peters, Nicholas S., and Ng, Fu Siong
- Subjects
DEEP learning, ELECTROCARDIOGRAPHY, ELECTRONIC paper, ATRIAL fibrillation, ALGORITHMS, HEART failure, HEART rate monitors
- Abstract
There is increasing focus on applying deep learning methods to electrocardiograms (ECGs), with recent studies showing that neural networks (NNs) can predict future heart failure or atrial fibrillation from the ECG alone. However, large numbers of ECGs are needed to train NNs, and many ECGs are currently only in paper format, which is not suitable for NN training. We developed a fully-automated online ECG digitisation tool to convert scanned paper ECGs into digital signals. Using automated horizontal and vertical anchor point detection, the algorithm automatically segments the ECG image into separate images for the 12 leads, and a dynamical morphological algorithm is then applied to extract the signal of interest. We then validated the performance of the algorithm on 515 digital ECGs, of which 45 were printed, scanned and redigitised. The automated digitisation tool achieved 99.0% correlation between the digitised signals and the ground truth ECG (n = 515 standard 3-by-4 ECGs) after excluding ECGs with overlap of lead signals. Without exclusion, the average correlation ranged from 90% to 97% across the leads on all 3-by-4 ECGs. There was a 97% correlation for 12-by-1 and 3-by-1 ECG formats after excluding ECGs with overlap of lead signals. Without exclusion, the average correlation of some leads in 12-by-1 ECGs was 60-70%, and the average correlation for 3-by-1 ECGs reached 80-90%. For ECGs that were printed, scanned, and redigitised, our tool achieved 96% correlation with the original signals. We have developed and validated a fully-automated, user-friendly, online ECG digitisation tool. Unlike other available tools, this does not require any manual segmentation of ECG signals. Our tool can facilitate the rapid and automated digitisation of large repositories of paper ECGs to allow them to be used for deep learning projects. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
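The validation metric quoted throughout this abstract is the Pearson correlation between the digitised trace and its ground-truth signal. For reference, a minimal implementation:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length signals:
    covariance of the samples divided by the product of their standard
    deviations; 1.0 means the digitised trace matches the ground truth
    up to an affine (scale + offset) transform."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Note that correlation is invariant to gain and baseline offset, which is why it is a natural metric for digitisation: paper scaling errors do not penalise an otherwise faithful trace.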
9. Research on 3D point cloud alignment algorithm based on SHOT features.
- Author
-
Fu, Zheng, Zhang, Enzhong, Sun, Ruiyang, Zang, Jiaran, and Zhang, Wei
- Subjects
POINT cloud, ALGORITHMS, FEATURE extraction
- Abstract
To overcome the traditional Iterative Closest Point (ICP) algorithm's reliance on a good initial relative position of the point clouds, in this paper we propose a point cloud registration method based on normal vector and directional histogram (SHOT) features. Firstly, a hybrid filtering method based on the voxel idea is proposed and verified using the measured point cloud data, obtaining noise removal rates of 97.5%, 97.8%, and 93.8%. Secondly, in terms of feature point extraction, the original algorithm is optimized, and the optimized algorithm can better extract features from the incomplete parts of the point cloud. Finally, a fine alignment method based on normal vector and directional histogram (SHOT) features is proposed, and the improved algorithm is compared with the existing algorithm. Taking the Stanford University point cloud data and the self-measured point cloud data as examples, the plotted iteration-error curves show that the improved method reduces the number of iterations by 40.23% and 37.62%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
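At the heart of the fine-alignment stage discussed above sits the classical least-squares rigid-motion step of ICP. As background (this is the standard closed-form solution in 2-D, not the paper's SHOT-based pipeline), the sketch below recovers the rotation and translation that best map one point set onto another given known correspondences:

```python
import math

def best_rigid_2d(src, dst):
    """One least-squares rigid-alignment step (2-D Kabsch): given
    corresponding point lists src and dst, return (theta, tx, ty) such
    that rotating src by theta and translating by (tx, ty) minimises the
    summed squared distance to dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx_, dy_) in zip(src, dst):
        ax, ay = sx - csx, sy - csy          # centred source point
        bx, by = dx_ - cdx, dy_ - cdy        # centred destination point
        num += ax * by - ay * bx             # sum of cross products
        den += ax * bx + ay * by             # sum of dot products
    theta = math.atan2(num, den)             # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)           # translation maps rotated
    ty = cdy - (s * csx + c * csy)           # source centroid onto dst's
    return theta, tx, ty
```

Full ICP alternates this step with nearest-neighbour correspondence search; SHOT features replace that fragile nearest-neighbour guess with descriptor matching, which is what removes the dependence on a good initial pose.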
10. Study on tiered storage algorithm based on heat correlation of astronomical data.
- Author
-
Ye, Xin-Chen, Zhang, Hai-Long, Wang, Jie, Zhang, Ya-Zhou, Du, Xu, Wu, Han, and Riccio, Giuseppe
- Subjects
RADIO telescopes, GEODETIC astronomy, PULSAR detection, ELECTRONIC data processing, ALGORITHMS, CLOUD storage
- Abstract
With the surge in astronomical data volume, modern astronomical research faces significant challenges in data storage, processing, and access. The I/O bottleneck in astronomical data processing is particularly prominent, limiting the efficiency of data processing. To address this issue, this paper proposes a tiered storage algorithm based on the access characteristics of astronomical data. The C4.5 decision tree algorithm is employed as the foundation to implement an astronomical data access correlation algorithm. Additionally, a data copy migration strategy is designed based on tiered storage technology to achieve efficient data access. Preprocessing tests were conducted on 418 GB of NSRT (Nanshan Radio Telescope) formaldehyde spectral line data, showing that tiered storage can reduce data processing time by up to 38.15%. Similarly, in pulsar search data processing tests using 802.2 GB of data from FAST (Five-hundred-meter Aperture Spherical radio Telescope) observations, the tiered storage approach demonstrated a maximum reduction of 29.00% in data processing time. In concurrent testing of data processing workflows, the proposed astronomical data heat correlation algorithm achieved an average reduction of 17.78% in data processing time compared to centralized storage. Furthermore, compared to traditional heat algorithms, it reduced data processing time by 5.15%. The effectiveness of the proposed algorithm is positively correlated with the associativity between the algorithm and the processed data. The tiered storage algorithm based on the characteristics of astronomical data proposed in this paper is poised to provide algorithmic references for large-scale data processing in the field of astronomy in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
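The placement decision behind a heat-correlated tiered store can be reduced to a small rule. The sketch below is a deliberately simplified illustration (my own, not the paper's C4.5-based classifier): data that is hot — frequently or recently accessed — or predicted hot because a correlated dataset is hot, is placed on the fast tier; the thresholds are arbitrary example values.

```python
def choose_tier(access_count, last_access_age_s, correlated_hot=False,
                hot_threshold=10, recent_s=3600):
    """Toy tiered-storage placement rule: a copy goes to the fast tier
    ('ssd') if it is accessed often, was accessed recently, or is
    predicted hot because a correlated dataset is currently hot;
    otherwise it stays on the capacity tier ('hdd')."""
    hot = access_count >= hot_threshold or last_access_age_s <= recent_s
    return 'ssd' if (hot or correlated_hot) else 'hdd'
```

The `correlated_hot` flag is the interesting part: it is the stand-in for the paper's access-correlation prediction, which promotes data *before* its first access in a workflow rather than after.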
11. Research on fabric surface defect detection algorithm based on improved Yolo_v4.
- Author
-
Li, Yuanyuan, Song, Liyuan, Cai, Yin, Fang, Zhijun, and Tang, Ming
- Subjects
SURFACE defects, FEATURE extraction, ALGORITHMS, INDUSTRIAL sites, TEXTILES, PROBLEM solving
- Abstract
In industry, defect classification and defect localization are important parts of a defect detection system. However, existing studies focus on only one of these tasks, and it is difficult to ensure the accuracy of both. This paper proposes a defect detection system based on an improved Yolo_v4, which greatly improves the detection of minor defects. To address the strong subjectivity of the K-Means algorithm in clustering prior anchors, the paper proposes the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm to determine the number of anchors. To solve the problem of the low detection rate of small targets caused by the insufficient reuse rate of low-level features in the CSPDarknet53 feature extraction network, this paper proposes an ECA-DenseNet-BC-121 feature extraction network to improve it. And the Dual Channel Feature Enhancement (DCFE) module is proposed to mitigate the local information loss and gradient propagation obstruction caused by quad chain convolution in PANet networks, improving the robustness of the model. The experimental results on the fabric surface defect detection datasets show that the mAP of the improved Yolo_v4 is 98.97%, which is 7.67% higher than SSD, 3.75% higher than Faster_RCNN, 10.82% higher than Yolo_v4 tiny, and 5.35% higher than Yolo_v4, and the detection speed reaches 39.4 fps. It can meet the real-time monitoring needs of industrial sites. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
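The reason DBSCAN removes the subjectivity of K-Means here is that it discovers the number of clusters itself, and that count fixes the anchor count. A minimal pure-Python DBSCAN (the textbook algorithm, not the paper's exact configuration; `eps` and `min_pts` are still tunable, but there is no K to choose):

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2-D points (e.g. (width, height) of ground-truth
    boxes).  Returns (labels, num_clusters); labels[i] is a cluster id or
    -1 for noise.  num_clusters is the discovered anchor count."""
    def neighbors(i):
        return [j for j in range(len(points))
                if (points[i][0] - points[j][0]) ** 2 +
                   (points[i][1] - points[j][1]) ** 2 <= eps ** 2]

    labels = [None] * len(points)          # None = unvisited, -1 = noise
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1                 # not dense enough: noise
            continue
        labels[i] = cid
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid            # noise reclaimed as border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nb = neighbors(j)
            if len(nb) >= min_pts:
                queue.extend(nb)           # j is a core point: expand
        cid += 1
    return labels, cid
```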
12. Face Verification Algorithms for UAV Applications: An Empirical Comparative Analysis.
- Author
-
Diez-Tomillo, Julio, Alcaraz-Calero, Jose M., and Qi Wang
- Subjects
RESCUE work, ALGORITHMS, PUBLIC safety, COMPUTER vision, PUBLIC administration, DRONE aircraft
- Abstract
Unmanned Aerial Vehicles (UAVs) are revolutionising diverse computer vision use case domains, from public safety surveillance to Search and Rescue (SAR), and other emergency management and disaster relief operations. The growing need for accurate face verification algorithms has prompted an exploration of synergies between UAVs and face verification. This promises cost-effective, wide-area, non-intrusive person verification. Real-world human-centric use cases such as a "Drone Guard Angel" for vulnerable people can contribute to public safety management and offload significant police resources. These scenarios demand efficient face verification to distinguish correctly the end users for authentication, authorisation and customised services. This paper investigates the suitability of existing solutions, and analyses five state-of-the-art candidate face verification algorithms. Informed by the advantages and disadvantages of existing solutions, the paper proposes an extended dataset and a refined face verification pipeline. Subsequently, it conducts empirical evaluation of these algorithms using the proposed pipeline and dataset in terms of inference times and the distribution of the similarity indexes. Furthermore, this paper provides essential guidance for algorithm selection and deployment in UAV-based applications. Two candidate algorithms, ArcFace and FaceNet512, have emerged as the top performers. The choice between them will depend on the specific use case requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
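The "similarity indexes" evaluated above reduce, for embedding models such as ArcFace and FaceNet512, to a cosine-similarity comparison against a tuned acceptance threshold. A minimal sketch of that final decision step (the 0.6 threshold is an arbitrary illustrative value, not one from the paper):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(emb_probe, emb_enrolled, threshold=0.6):
    """Accept the probe identity if its embedding is similar enough to the
    enrolled embedding.  In practice the threshold is tuned per model
    from the similarity distributions of genuine vs impostor pairs."""
    return cosine_similarity(emb_probe, emb_enrolled) >= threshold
```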
13. Combining Improved Meanshift and Adaptive Shi-Tomasi Algorithms for a Photovoltaic Panel Segmentation Strategy.
- Author
-
Huang, Chao, Chao, Xuewei, Zhou, Weiji, and Gong, Lijiao
- Subjects
IMAGE segmentation, ALGORITHMS
- Abstract
To achieve effective and accurate segmentation of photovoltaic panels in various working contexts, this paper proposes a comprehensive image segmentation strategy that integrates an improved Meanshift algorithm and an adaptive Shi-Tomasi algorithm. This approach effectively addresses the challenge of low precision in segmenting target regions and boundary contours in routine photovoltaic panel inspection. Firstly, based on the image information of photovoltaic panels collected under different environments by cameras, an improved Meanshift algorithm based on platform histogram optimization is used for preliminary processing, and images containing target information are cut out; then, the adaptive Shi-Tomasi algorithm is used to extract and screen feature points from the target area; finally, the extracted feature points generate the segmentation contour of the target photovoltaic panel, achieving accurate segmentation of the target area and boundary contour of the photovoltaic panel. Experiments verified that in photovoltaic panel images under different background environments, the method proposed in this paper enhances the accuracy of segmenting the target area and boundary contour of photovoltaic panels. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
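The "adaptive" aspect of Shi-Tomasi feature screening is usually a content-relative threshold: corners are kept only if their minimum-eigenvalue score reaches a fraction of the strongest corner in the image, so the cutoff adapts to each scene. The sketch below shows that standard rule; the paper's specific adaptation may differ.

```python
def shi_tomasi_select(corners, quality_level=0.1):
    """Adaptive Shi-Tomasi screening: `corners` is a list of
    (score, (x, y)) pairs, where score is the Shi-Tomasi minimum
    eigenvalue.  Keep corners scoring at least quality_level times the
    strongest corner, returned strongest-first."""
    if not corners:
        return []
    cut = quality_level * max(score for score, _ in corners)
    kept = [c for c in corners if c[0] >= cut]
    kept.sort(reverse=True)                 # strongest first
    return [pt for _, pt in kept]
```

Because the cutoff scales with the image's own best response, the same `quality_level` works across differently lit photovoltaic-panel scenes without per-image retuning.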
14. Maneuvering Decision Making Based on Cloud Modeling Algorithm for UAV Evasion–Pursuit Game.
- Author
-
Huang, Hanqiao, Weng, Weiye, Zhou, Huan, Jiang, Zijian, and Dong, Yue
- Subjects
MANEUVERING boards, DECISION making, DRONE aircraft, ALGORITHMS
- Abstract
In aerial pursuit games, most current unmanned aerial vehicles (UAVs) have good maneuverability, but it is difficult to make proper use of their overload maneuverability; further, UAVs tend to be costly, and it is often difficult to effectively prevent the enemy from reaching the tailgating position behind the UAV. There is therefore a pressing need for a maneuvering algorithm that allows a UAV in a disadvantageous position to protect itself quickly, and that stably and effectively selects maneuvers that establish an advantage by moving to an advantageous position. To this end, this paper establishes a cloud model-based UAV maneuvering decision-making model built on pursuit-and-evasion game positions. Based on the evaluation of these positions, when the UAV is at a disadvantage, we use the constructed defensive maneuver expert pool to abandon the disadvantageous position; when the UAV is at an advantage, we use cloud model-based pursuit-and-evasion game maneuvering decision making to establish an advantageous position. According to the simulation examples, the maneuvering decision-making method designed in this paper confirms that the UAV can quickly abandon its position and establish an advantage in cases of parity or disadvantage, and that it can stably establish a tail-chasing position when at an advantage. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Fast Extraction Algorithm of Planar Targets Based on Point Cloud Data for Monitoring the Synchronization of Bridge Jacking Displacements.
- Author
-
Liang, Dong, Zhang, Zeyu, Zhang, Qiang, Wu, Erpeng, and Huang, Haibin
- Subjects
POINT cloud, SYNCHRONIZATION, CLOUD storage, ALGORITHMS, BRIDGES, STRUCTURAL health monitoring
- Abstract
Transverse synchronization of vertical displacements of all jacking-up points is an important monitoring indicator to replace bearings in assembled multigirder bridges during the jacking phase. Currently, using target paper to identify the 3D coordinates of control points reduces the complexity of monitoring operations and improves the stability of data precision. However, the existing planar target locating methods have low accuracy, inefficiency, and subjectivity, which seriously hinders the construction process of bearing replacement. Accurately obtaining the center coordinates of multiple targets from a point cloud in a short monitoring period remains a challenge. This study proposes a high-precision automated algorithm to extract target center points in low-density point clouds to quickly calculate real target center points. First, we construct a standard point cloud model of the target papers for scanning, including color and geometric features. Then, we extract the measured point cloud of the typical jacking operation phase based on the reflection intensity and size information. Next, we map the intensity values of the measured point cloud into the color channel and register the measured point cloud with its standard point cloud model using the normal vector estimation and colored ICP algorithms. Finally, we extract the center point of the measured targets. Numerical experiments and engineering test results show that the proposed method converges quickly with high precision and good robustness, which saves 91.4% of the time compared with the traditional method. In general, this research can provide effective technical support for 3D laser scanning in monitoring the operation phase of bridge jacking. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. A lightweight license plate detection algorithm based on deep learning.
- Author
-
Zhu, Shuo, Wang, Yu, and Wang, Zongyang
- Subjects
AUTOMOBILE license plates, DEEP learning, INTELLIGENT transportation systems, TRAFFIC engineering, ALGORITHMS, COMPUTATIONAL complexity
- Abstract
License plate detection is an important task in Intelligent Transportation Systems (ITS) and has a wide range of applications in vehicle management, traffic control, and public safety. In order to improve the accuracy and speed of mobile recognition, an improved lightweight YOLOv5s model is proposed for license plate detection. First, an improved Stemblock network replaces the original Focus layer, preserving strong feature expression capability while removing a large number of parameters to lower the computational complexity. Second, an improved lightweight network, ShuffleNetv2, replaces the backbone network of YOLOv5s, making the model lighter while maintaining detection accuracy. Third, a feature enhancement module is designed to reduce the information loss caused by the rearrangement of the backbone network channels, which facilitates information interaction in the feature fusion process. Finally, the low-, medium- and high-level features in the ShuffleNetv2 network structure are fused to form the final high-level output features. Experimental results on the CCPD dataset show that, compared to other methods, this paper obtains better performance and faster speed in the license plate detection task: the mean average precision reaches 96.6%, the detection speed reaches 43.86 frames/s, and the parameter volume is reduced to 5.07 M. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Utilizing tables, figures, charts and graphs to enhance the readability of a research paper.
- Author
-
Divecha C. A., Tullu M. S., and Karande S.
- Subjects
GRAPHIC arts, READABILITY (Literary style), SERIAL publications, RESEARCH methodology, COPYRIGHT, MEDICAL research, ALGORITHMS
- Abstract
The authors offer observation on utilizing tables, figures, charts and graphs to help understand the research presented in a simple manner but also engage and sustain the reader's interest. Topics discussed include benefits provided by the use of tables/figures/charts/graphs, general methodology of design and submission, and copyright issues of using material from government publications/public domain.
- Published
- 2023
- Full Text
- View/download PDF
18. Fast Decision-Tree-Based Series Partitioning and Mode Prediction Termination Algorithm for H.266/VVC.
- Author
-
Li, Ye, He, Zhihao, and Zhang, Qiuwen
- Subjects
VIDEO compression, VIDEO coding, TECHNOLOGICAL innovations, ALGORITHMS, MULTIMEDIA systems, PARALLEL algorithms, COMPUTATIONAL complexity, DECISION trees, RANDOM forest algorithms
- Abstract
With the advancement of network technology, multimedia videos have emerged as a crucial channel for individuals to access external information, owing to their realistic and intuitive effects. In the presence of high frame rate and high dynamic range videos, the coding efficiency of high-efficiency video coding (HEVC) falls short of meeting the storage and transmission demands of the video content. Therefore, versatile video coding (VVC) introduces a nested quadtree plus multi-type tree (QTMT) segmentation structure based on the HEVC standard, while also expanding the intra-prediction modes from 35 to 67. While the new technology introduced by VVC has enhanced compression performance, it concurrently introduces a higher level of computational complexity. To enhance coding efficiency and diminish computational complexity, this paper explores two key aspects: coding unit (CU) partition decision-making and intra-frame mode selection. Firstly, to address the flexible partitioning structure of QTMT, we propose a decision-tree-based series partitioning decision algorithm for partitioning decisions. Through concatenating the quadtree (QT) partition division decision with the multi-type tree (MT) division decision, a strategy is implemented to determine whether to skip the MT division decision based on texture characteristics. If the MT partition decision is used, four decision tree classifiers are used to judge different partition types. Secondly, for intra-frame mode selection, this paper proposes an ensemble-learning-based algorithm for mode prediction termination. Through the reordering of complete candidate modes and the assessment of prediction accuracy, the termination of redundant candidate modes is accomplished. Experimental results show that compared with the VVC test model (VTM), the algorithm proposed in this paper achieves an average time saving of 54.74%, while the BDBR only increases by 1.61%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Global Maximum Power Point Tracking of Photovoltaic Module Arrays Based on an Improved Intelligent Bat Algorithm.
- Author
-
Chao, Kuei-Hsiang and Bau, Thi Thanh Truc
- Subjects
MAXIMUM power point trackers, ALGORITHMS, CLIMATE change, VOLTAGE
- Abstract
In this paper, a method based on an improved intelligent bat algorithm (IIBA) in cooperation with a voltage and current sensor was applied to maximum power point tracking (MPPT) for a photovoltaic module array (PVMA), enhancing the power generation performance of the PVMA. Due to partial shading of the PVMA from climate changes or the surrounding environment, multiple peak values are generated on the power-voltage (P-V) curve, where conventional MPPT technology can only track a local maximum power point (LMPP), hence reducing the output power of the PVMA. Therefore, the IIBA-based MPPT was proposed in this paper to solve such issues and to ensure the capability of a PVMA in tracking the global maximum power point (GMPP), thereby enhancing its output power. Firstly, the Matlab/Simulink software was used to establish a boost converter model that simulated the actual 4-series-3-parallel PVMA under different shaded conditions, generating P-V curves with 1-peak, 2-peak, 3-peak and 4-peak values. Subsequently, the tracking paces of the conventional bat algorithm (BA) were adjusted according to the gradient of the P-V curve for the PVMA. At the same time, 0.8 times the maximum power point (MPP) voltage V_mp under standard test conditions (STCs) for the PVMA was set as the initial tracking voltage. Lastly, the simulation results proved that under different environmental impacts, the proposed IIBA led to better performance in tracking both dynamic and steady responses. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
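Two ingredients from this abstract — starting the search at 0.8 × V_mp and scaling the tracking pace by the local P-V gradient — can be sketched without the full bat algorithm. The code below is plain gradient-scaled hill climbing on a P-V curve (my simplification: it finds only a local peak; the IIBA's population-based search is what escapes local peaks under partial shading), with arbitrary step and scaling constants.

```python
def mppt_track(power_curve, v_mp_stc, v_step=1.0, iters=200):
    """Gradient-scaled hill climb on a P-V curve.  Tracking starts at
    0.8 * Vmp (the initial voltage the abstract describes) and each step
    moves up-gradient, with the pace shrinking as dP/dV flattens so the
    operating point settles at the peak instead of oscillating."""
    v = 0.8 * v_mp_stc
    for _ in range(iters):
        grad = (power_curve(v + 0.1) - power_curve(v - 0.1)) / 0.2
        direction = 1.0 if grad > 0 else -1.0
        v += v_step * direction * min(1.0, abs(grad) / 10.0)
    return v
```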
20. Partial Discharge Signal Denoising Algorithm Based on Aquila Optimizer–Variational Mode Decomposition and K-Singular Value Decomposition.
- Author
-
Zhong, Jun, Liu, Zhenyu, and Bi, Xiaowen
- Subjects
SIGNAL denoising ,PARTIAL discharges ,HILBERT-Huang transform ,ELECTRIC insulators & insulation ,ALGORITHMS - Abstract
Partial discharge (PD) is a primary factor leading to the deterioration of insulation in electrical equipment. However, it is hard for traditional methods to precisely extract PD signals in increasingly complex engineering environments. This paper proposes a new PD signal denoising method combining Aquila Optimizer–Variational Mode Decomposition (AO-VMD) and K-Singular Value Decomposition (K-SVD) algorithms. Firstly, the AO algorithm optimizes critical parameters of the VMD algorithm. For a PD signal overwhelmed by noise, the AO-VMD algorithm can decompose it and reconstruct it by using kurtosis. In this process, the majority of the noise is removed, and the characteristics of the original signal are revealed. Subsequently, the K-SVD algorithm performs sparse decomposition on the signal after AO-VMD, constructs a learned dictionary, and captures the characteristics of the signal for continuous learning and updating. After the dictionary learning is completed, the best matching atoms from the dictionary are selected to precisely reconstruct the original noiseless signal. Finally, the proposed method is compared with three traditional algorithms, Adaptive Ensemble Empirical Mode Decomposition (AEEMD), SVD-VMD, and the Adaptive Wavelet Multilevel Soft Threshold algorithm, on a simulated signal and an actual engineering signal. Both sets of results demonstrate that the algorithm proposed in this paper has superior noise reduction and signal extraction performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. A Novel IDS with a Dynamic Access Control Algorithm to Detect and Defend Intrusion at IoT Nodes.
- Author
-
Alazab, Moutaz, Awajan, Albara, Alazzam, Hadeel, Wedyan, Mohammad, Alshawi, Bandar, and Alturki, Ryan
- Subjects
INTRUSION detection systems (Computer security) ,ACCESS control ,INTERNET of things ,ALGORITHMS ,FALSE alarms ,MATHEMATICAL analysis - Abstract
The Internet of Things (IoT) is the underlying technology that has enabled connecting daily apparatus to the Internet and enjoying the facilities of smart services. The IoT market is experiencing an impressive 16.7% growth rate and is worth nearly USD 300.3 billion. These eye-catching figures have made it an attractive playground for cybercriminals. IoT devices are built using resource-constrained architecture to offer compact sizes and competitive prices. As a result, integrating sophisticated cybersecurity features is beyond the scope of the computational capabilities of IoT. All of these have contributed to a surge in IoT intrusion. This paper presents an LSTM-based Intrusion Detection System (IDS) with a Dynamic Access Control (DAC) algorithm that not only detects but also defends against intrusion. This novel approach has achieved an impressive 97.16% validation accuracy. Unlike most IDSs, the model of the proposed IDS has been selected and optimized through mathematical analysis. Additionally, it boasts the ability to identify a wider range of threats (14 to be exact) compared to other IDS solutions, translating to enhanced security. Furthermore, it has been fine-tuned to strike a balance between accurately flagging threats and minimizing false alarms. Its impressive performance metrics (precision, recall, and F1 score all hovering around 97%) showcase the potential of this innovative IDS to elevate IoT security. The proposed IDS boasts an impressive detection rate, exceeding 98%. This high accuracy instills confidence in its reliability. Furthermore, its fast response time, averaging under 1.2 s, positions it among the fastest intrusion detection systems available. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. A scalable blockchain based framework for efficient IoT data management using lightweight consensus.
- Author
-
Haque, Ehtisham Ul, Shah, Adil, Iqbal, Jawaid, Ullah, Syed Sajid, Alroobaea, Roobaea, and Hussain, Saddam
- Subjects
DATA management ,INTERNET of things ,NETWORK performance ,BLOCKCHAINS ,SCALABILITY ,ALGORITHMS - Abstract
Recent research has focused on applying blockchain technology to solve security-related problems in Internet of Things (IoT) networks. However, the inherent scalability issues of blockchain technology become apparent in the presence of a vast number of IoT devices and the substantial data generated by these networks. Therefore, in this paper, we use a lightweight consensus algorithm to cater to these problems. We propose a scalable blockchain-based framework for managing IoT data, catering to a large number of devices. This framework utilizes the Delegated Proof of Stake (DPoS) consensus algorithm to ensure enhanced performance and efficiency in resource-constrained IoT networks. DPoS being a lightweight consensus algorithm leverages a selected number of elected delegates to validate and confirm transactions, thus mitigating the performance and efficiency degradation in the blockchain-based IoT networks. In this paper, we implemented an Interplanetary File System (IPFS) for distributed storage, and Docker to evaluate the network performance in terms of throughput, latency, and resource utilization. We divided our analysis into four parts: Latency, throughput, resource utilization, and file upload time and speed in distributed storage evaluation. Our empirical findings demonstrate that our framework exhibits low latency, measuring less than 0.976 ms. The proposed technique outperforms Proof of Stake (PoS), representing a state-of-the-art consensus technique. We also demonstrate that the proposed approach is useful in IoT applications where low latency or resource efficiency is required. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
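As background to the abstract above, the core of Delegated Proof of Stake (DPoS) is a stake-weighted election of a small delegate set that then validates blocks in round-robin order. The sketch below is a generic, minimal illustration of that idea, not the paper's framework; the names and the scheduling rule are illustrative.

```python
# Minimal sketch of the two DPoS steps the abstract relies on:
# (1) token holders vote with their stake and only the top-k candidates
#     become delegates; (2) delegates produce blocks round-robin.

def elect_delegates(stake_votes: dict, k: int) -> list:
    """Return the k candidates with the highest total staked votes."""
    ranked = sorted(stake_votes, key=stake_votes.get, reverse=True)
    return ranked[:k]

def block_producer(delegates: list, block_height: int) -> str:
    """Round-robin schedule: block i is produced by delegate i mod k."""
    return delegates[block_height % len(delegates)]
```

Because only k delegates (rather than every node) confirm transactions, the validation work per block stays constant as the IoT network grows, which is the efficiency argument the abstract makes against heavier consensus schemes.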
23. Image convolution techniques integrated with YOLOv3 algorithm in motion object data filtering and detection.
- Author
-
Cheng, Mai and Liu, Mengyuan
- Subjects
TRACKING algorithms ,FILTERS & filtration ,VIDEO surveillance ,ALGORITHMS ,IMAGE segmentation ,RESEARCH personnel ,JOGGING - Abstract
In order to address the challenges of identifying, detecting, and tracking moving objects in video surveillance, this paper emphasizes image-based dynamic entity detection. It delves into the complexities of numerous moving objects, dense targets, and intricate backgrounds. Leveraging the You Only Look Once (YOLOv3) algorithm framework, this paper proposes improvements in image segmentation and data filtering to address these challenges. These enhancements form a novel multi-object detection algorithm based on an improved YOLOv3 framework, specifically designed for video applications. Experimental validation demonstrates the feasibility of this algorithm, with success rates exceeding 60% for videos such as "jogging", "subway", "video 1", and "video 2". Notably, the detection success rates for "jogging" and "video 1" consistently surpass 80%, indicating outstanding detection performance. Although the accuracy slightly decreases for "Bolt" and "Walking2", success rates still hover around 70%. Comparative analysis with other algorithms reveals that this method's tracking accuracy surpasses that of particle filters, Discriminative Scale Space Tracker (DSST), and Scale Adaptive Multiple Features (SAMF) algorithms, with an accuracy of 0.822. This indicates superior overall performance in target tracking. Therefore, the improved YOLOv3-based multi-object detection and tracking algorithm demonstrates robust filtering and detection capabilities in noise-resistant experiments, making it highly suitable for various detection tasks in practical applications. It can address inherent limitations such as missed detections, false positives, and imprecise localization. These improvements significantly enhance the efficiency and accuracy of target detection, providing valuable insights for researchers in the field of object detection, tracking, and recognition in video surveillance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. A flocking control algorithm of multi-agent systems based on cohesion of the potential function.
- Author
-
Li, Chenyang, Yang, Yonghui, Jiang, Guanjie, and Chen, Xue-Bo
- Subjects
COHESION ,POTENTIAL functions ,MULTIAGENT systems ,SOCIAL distance ,SOCIAL cohesion ,ALGORITHMS ,CHANGE agents - Abstract
Flocking cohesion is critical for maintaining a group's aggregation and integrity. Designing a potential function to maintain flocking cohesion unaffected by social distance is challenging due to the uncertainty of real-world conditions and environments that cause changes in agents' social distance. Previous flocking research based on potential functions has primarily focused on agents' same social distance and the attraction–repulsion of the potential function, ignoring another property affecting flocking cohesion: well depth, as well as the effect of changes in agents' social distance on well depth. This paper investigates the effect of potential function well depths and agent's social distances on the multi-agent flocking cohesion. Through the analysis, proofs, and classification of these potential functions, we have found that the potential function well depth is proportional to the flocking cohesion. Moreover, we observe that the potential function well depth varies with the agents' social distance changes. Therefore, we design a segmentation potential function and combine it with the flocking control algorithm in this paper. It enhances flocking cohesion significantly and has good robustness to ensure the flocking cohesion is unaffected by variations in the agents' social distance. Meanwhile, it reduces the time required for flocking formation. Subsequently, the Lyapunov theorem and the LaSalle invariance principle prove the stability and convergence of the proposed control algorithm. Finally, this paper adopts two subgroups with different potential function well depths and social distances to encounter for simulation verification. The corresponding simulation results demonstrate and verify the effectiveness of the flocking control algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
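The abstract above argues that well depth, not just the attraction-repulsion shape, governs flocking cohesion. A minimal Python illustration of a potential with an explicit well-depth parameter follows; the Lennard-Jones-like form, the symbol epsilon, and the equilibrium at the social distance d are assumptions, not the paper's segmentation potential.

```python
# Illustrative attraction-repulsion potential with an explicit
# well-depth parameter: U(r) = epsilon * ((d/r)^2 - 2*(d/r)).
# U is repulsive for r < d, attractive for r > d, and reaches its
# minimum -epsilon exactly at the social distance r = d, so epsilon
# directly controls the well depth the abstract discusses.

def potential(r: float, d: float, epsilon: float) -> float:
    x = d / r
    return epsilon * (x * x - 2.0 * x)

def well_depth(d: float, epsilon: float) -> float:
    """Depth of the potential well, i.e. -U at the equilibrium r = d."""
    return -potential(d, d, epsilon)
```

With such a form, deepening the well (larger epsilon) strengthens the restoring force around the equilibrium spacing, which matches the abstract's claim that well depth is proportional to flocking cohesion.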
25. Performance analysis of deep learning-based object detection algorithms on COCO benchmark: a comparative study.
- Author
-
Tian, Jiya, Jin, Qiangshan, Wang, Yizong, Yang, Jie, Zhang, Shuping, and Sun, Dengxun
- Subjects
OBJECT recognition (Computer vision) ,DEEP learning ,MACHINE learning ,ALGORITHMS ,SMART cities ,URBAN renewal - Abstract
This paper thoroughly explores the role of object detection in smart cities, specifically focusing on advancements in deep learning-based methods. Deep learning models gain popularity for their autonomous feature learning, surpassing traditional approaches. Despite progress, challenges remain, such as achieving high accuracy in urban scenes and meeting real-time requirements. The study aims to contribute by analyzing state-of-the-art deep learning algorithms, identifying accurate models for smart cities, and evaluating real-time performance using the Average Precision at Medium Intersection over Union (IoU) metric. The reported results showcase various algorithms' performance, with Dynamic Head (DyHead) emerging as the top scorer, excelling in accurately localizing and classifying objects. Its high precision and recall at medium IoU thresholds signify robustness. The paper suggests considering the mean Average Precision (mAP) metric for a comprehensive evaluation across IoU thresholds, if available. Despite this, DyHead stands out as the superior algorithm, particularly at medium IoU thresholds, making it suitable for precise object detection in smart city applications. The performance analysis using Average Precision at Medium IoU is reinforced by the Average Precision at Low IoU (APL), consistently depicting DyHead's superiority. These findings provide valuable insights for researchers and practitioners, guiding them toward employing DyHead for tasks prioritizing accurate object localization and classification in smart cities. Overall, the paper navigates through the complexities of object detection in urban environments, presenting DyHead as a leading solution with robust performance metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. An Improved Evolutionary Multi-Objective Clustering Algorithm Based on Autoencoder.
- Author
-
Qiu, Mingxin, Zhang, Yingyao, Lei, Shuai, and Gu, Miaosong
- Subjects
ALGORITHMS ,EVOLUTIONARY algorithms ,DEEP learning - Abstract
Evolutionary multi-objective clustering (EMOC) algorithms have gained popularity recently, as they can obtain a set of clustering solutions in a single run by optimizing multiple objectives. Particularly, in one type of EMOC algorithm, the number of clusters k is taken as one of the multiple objectives to obtain a set of clustering solutions with different k. However, the numbers of clusters k and other objectives are not always in conflict, so it is impossible to obtain the clustering solutions with all different k in a single run. Therefore, evolutionary multi-objective k-clustering (EMO-KC) has recently been proposed to ensure this conflict. However, EMO-KC could not obtain good clustering accuracy on high-dimensional datasets. Moreover, EMO-KC's validity is not ensured as one of its objectives (SSDexp, which is transformed from the sum of squared distances (SSD)) could not be effectively optimized and it could not avoid invalid solutions in its initialization. In this paper, an improved evolutionary multi-objective clustering algorithm based on autoencoder (AE-IEMOKC) is proposed to improve the accuracy and ensure the validity of EMO-KC. The proposed AE-IEMOKC is established by combining an autoencoder with an improved version of EMO-KC (IEMO-KC) for better accuracy, where IEMO-KC is improved based on EMO-KC by proposing a scaling factor to help effectively optimize the objective of SSDexp and introducing a valid initialization to avoid the invalid solutions. Experimental results on several datasets demonstrate the accuracy and validity of AE-IEMOKC. The results of this paper may provide some useful information for other EMOC algorithms to improve accuracy and convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Study on Relay Contact Bounce Based on the Adaptive Weight Rotation Template Matching Algorithm.
- Author
-
Zhao, Wenze, Yan, Jiaxing, Wang, Xin, Li, Wenhua, Yang, Xinglin, and Wang, Weiming
- Subjects
KINETIC energy ,ROTATIONAL motion ,CONTACT angle ,ALGORITHMS ,IMAGE processing ,ANGLES - Abstract
In order to analyze the relay action process from an imaging perspective and further investigate the bounce phenomenon of relay contacts during the contact process, this paper utilizes a high-speed shooting platform to capture images of relay action. In light of the situation where the stationary contact in the image is inclined and continuously changing, a rotation template matching algorithm based on adaptive weight is proposed. The algorithm identifies and obtains the inclination angle of the stationary contact, enabling the study of the relay contact bounce process. By extracting contact bounce distance data from the images, a bounce process curve is plotted. Combined with the analysis of the contact bounce process, the reasons for the bounce are explored. The results indicate that the proposed rotation template matching algorithm can accurately identify stationary contacts and their inclination angles across different orientations. By analyzing the contact status and bounce process of the relay contacts in conjunction with the relay structure, parameters such as the bounce time, bounce height, and time required to reach the maximum distance can be calculated. Additionally, the main reason for contact bounce in the relay studied in this paper is the limitation imposed on the continued movement of the stationary contact by the presence of the relay brackets when the kinetic energy of the contact is too high. This phenomenon occurs during the first vibration peak in the vibration process after the moving contact contacts the stationary contact. The research results provide a reference for further studying the relay contact bounce process, optimizing relay structure, and suppressing contact bounce. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Algorithms for Liver Segmentation in Computed Tomography Scans: A Historical Perspective.
- Author
-
Niño, Stephanie Batista, Bernardino, Jorge, and Domingues, Inês
- Subjects
COMPUTED tomography ,IMAGE processing ,COMPUTER-assisted image analysis (Medicine) ,ARTIFICIAL intelligence ,ALGORITHMS ,IMAGE reconstruction algorithms - Abstract
Oncology has emerged as a crucial field of study in the domain of medicine. Computed tomography has gained widespread adoption as a radiological modality for the identification and characterisation of pathologies, particularly in oncology, enabling precise identification of affected organs and tissues. However, achieving accurate liver segmentation in computed tomography scans remains a challenge due to the presence of artefacts and the varying densities of soft tissues and adjacent organs. This paper compares artificial intelligence algorithms and traditional medical image processing techniques to assist radiologists in liver segmentation in computed tomography scans and evaluates their accuracy and efficiency. Despite notable progress in the field, the limited availability of public datasets remains a significant barrier to broad participation in research studies and replication of methodologies. Future directions should focus on increasing the accessibility of public datasets, establishing standardised evaluation metrics, and advancing the development of three-dimensional segmentation techniques. In addition, maintaining a collaborative relationship between technological advances and medical expertise is essential to ensure that these innovations not only achieve technical accuracy, but also remain aligned with clinical needs and realities. This synergy ensures their applicability and effectiveness in real-world healthcare environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Dynamic phasor measurement algorithm based on high-precision time synchronization.
- Author
-
Jie Zhang, Fuxin Li, Zhengwei Chang, Chunhua Hu, Chun Liu, and Sihao Tang
- Subjects
PHASOR measurement ,COVARIANCE matrices ,ELECTRIC power ,ELECTRIC power distribution grids ,SYNCHRONIZATION ,ALGORITHMS ,KALMAN filtering - Abstract
Ensuring the swift and precise tracking of power system signal parameters, especially the frequency, is imperative for the secure and stable operation of power grids. In instances of faults within the distribution network, abrupt changes in frequency may occur, presenting a challenge for existing algorithms that struggle to effectively track such signal variations. Addressing the need for enhanced performance in the face of frequency mutations, this paper introduces an innovative approach: the Covariance Reconstruction Extended Kalman Filter (CREKF) algorithm. Initially, the dynamic signal model of electric power is meticulously analyzed, establishing a dynamic signal relationship based on high-precision time source sampling tailored to the signal model's characteristics. Subsequently, the filter gain, covariance matrix, and variance iteration equation are determined based on the signal relationship among three sampling points. In a final step, recognizing the impact of the covariance matrix on algorithmic tracking ability, the paper proposes a covariance matrix reset mechanism utilizing hysteresis induced by output errors. Through extensive verification with simulated signals, the results conclusively demonstrate that the CREKF algorithm exhibits superior measurement accuracy and accelerated tracking speed when confronted with mutating signals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
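The covariance-reset mechanism described in the abstract above can be illustrated generically: when the output error exceeds a high threshold, the error covariance is inflated so the Kalman gain grows and the filter re-acquires a mutated frequency, while a lower threshold provides hysteresis against chattering. The Python below is an assumed, simplified rendering of that reset logic only, not the full CREKF; the thresholds and reset covariance are illustrative.

```python
import numpy as np

# Generic sketch of a hysteresis-triggered covariance reset for an EKF:
# a large output error inflates P (raising the Kalman gain so the filter
# re-tracks after a frequency mutation); the reset re-arms only after
# the error falls below a lower threshold, preventing chattering.

class CovarianceReset:
    def __init__(self, high: float, low: float, p0: np.ndarray):
        self.high, self.low = high, low   # hysteresis thresholds
        self.p0 = p0                      # large "reset" covariance
        self.triggered = False

    def maybe_reset(self, P: np.ndarray, err: float) -> np.ndarray:
        if not self.triggered and abs(err) > self.high:
            self.triggered = True
            return self.p0.copy()         # inflate P -> larger gain
        if self.triggered and abs(err) < self.low:
            self.triggered = False        # re-arm once error settles
        return P
```

In a full filter, `maybe_reset` would run once per measurement update, with `err` being the innovation (measured minus predicted output).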
30. Research on WSN reliable ranging and positioning algorithm for forest environment.
- Author
-
Wu, Peng, Yu, Le, Yi, Xiaomei, Xu, Liang, Liu, LiJuan, Yi, YuTong, Jiang, Tengteng, and Tao, Chunling
- Subjects
WIRELESS sensor networks ,ALGORITHMS - Abstract
Wireless sensor network (WSN) location is a significant research area. In complex environments like forests, inaccurate signal intensity ranging is a major challenge. To address this issue, this paper presents a reliable WSN distance measurement-positioning algorithm for forest environments. The algorithm divides the positioning area into several sub-regions based on the discrete coefficient of the collected signal strength. Then, using the fitting method based on the signal intensity value of each sub-region, the algorithm derives the reference points of the logarithmic distance path loss model and path loss index. Finally, the algorithm locates target nodes using anchor nodes in different regions. Additionally, to enhance the positioning accuracy, weight values are assigned to the positioning result based on the discrete coefficient of the signal intensity in each sub-region. Experimental results demonstrate that the proposed WSN algorithm has high precision in forest environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
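The log-distance path loss model that the abstract above fits per sub-region has a standard closed form, RSSI(d) = RSSI(d0) - 10·n·log10(d/d0), where n is the path loss exponent; inverting it recovers distance from a measured RSSI. A minimal sketch, with illustrative reference values:

```python
import math

# Log-distance path loss model and its inverse. rssi_d0 is the RSSI at
# the reference distance d0, and n is the path loss exponent that the
# paper's algorithm fits per sub-region from discrete-coefficient-
# filtered signal strength samples.

def rssi_at(d: float, rssi_d0: float, n: float, d0: float = 1.0) -> float:
    return rssi_d0 - 10.0 * n * math.log10(d / d0)

def distance_from(rssi: float, rssi_d0: float, n: float,
                  d0: float = 1.0) -> float:
    return d0 * 10.0 ** ((rssi_d0 - rssi) / (10.0 * n))
```

In a forest, n varies strongly between sub-regions, which is why the paper fits it locally rather than using one global exponent.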
31. A Segmented Hybrid Algorithm for Beam Shaping Combining Iterative and Simulated Annealing Approaches.
- Author
-
Zhang, Xiaoyu, Zhang, Qi, and Chen, Genxiang
- Subjects
SIMULATED annealing ,STANDARD deviations ,ALGORITHMS ,OPTICAL communications ,LASER beams - Abstract
In recent years, laser technology has made significant advancements, yet there are specific requirements for the energy concentration and uniformity of lasers in various fields, such as optical communication, laser processing, 3D printing, etc. Beam shaping technology enables the transformation of ordinary Gaussian-distributed laser beams into square or circular flat-top uniform beams. Currently, LCOS-based beam shaping algorithms do not adequately meet these requirements, and most of these algorithms do not simultaneously consider the impact of phase quantization and zero-padding, leading to a decrease in the practicality of phase holograms. To address these issues, this paper proposes a novel segmented beam shaping algorithm that combines iterative and simulated annealing approaches. This paper validated the reliability of the proposed algorithm through numerical simulations. Compared to other algorithms, the proposed algorithm can effectively reduce the root mean square error by an average of nearly 37% and decrease the uniformity error by almost 39% without a significant decrease in diffraction efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
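The two figures of merit quoted in the abstract above (root mean square error and uniformity error of the shaped flat-top beam) can be written down in a few lines. The exact definitions used in the paper may differ, so treat these as common textbook forms:

```python
import numpy as np

# Common definitions of the two beam-shaping metrics: RMSE of the shaped
# intensity against the target profile, and a min-max uniformity error
# over the flat-top region.

def rmse(intensity: np.ndarray, target: np.ndarray) -> float:
    return float(np.sqrt(np.mean((intensity - target) ** 2)))

def uniformity_error(flat_region: np.ndarray) -> float:
    i_max, i_min = flat_region.max(), flat_region.min()
    return float((i_max - i_min) / (i_max + i_min))
```

Both metrics go to zero for a perfect flat top, which is what the reported 37% and 39% average reductions are measured against.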
32. Artificial Intelligence Algorithms for Healthcare.
- Author
-
Chumachenko, Dmytro and Yakovlev, Sergiy
- Subjects
ARTIFICIAL intelligence ,DEEP learning ,ALGORITHMS ,MACHINE learning ,INFORMATION technology ,MEDICAL care ,MOTION capture (Human mechanics) ,MEDICAL technology - Abstract
Artificial intelligence (AI) algorithms are playing a crucial role in transforming healthcare by enhancing the quality, accessibility, and efficiency of medical care, research, and operations. These algorithms enable healthcare providers to offer more accurate diagnoses, predict outcomes, and customize treatments to individual patient needs. AI also improves operational efficiency by automating routine tasks and optimizing resource management. However, there are challenges to adopting AI in healthcare, such as data privacy concerns and potential biases in algorithms. Collaboration among stakeholders is necessary to ensure ethical use of AI and its positive impact on the field. AI also has applications in medical research, preventive medicine, and public health. It is important to recognize that AI should augment, not replace, the expertise and compassionate care provided by healthcare professionals. The ethical implications and societal impact of AI in healthcare must be carefully considered, guided by fairness, transparency, and accountability principles. Several research papers in this special issue explore the application of AI algorithms in various aspects of healthcare, such as gait analysis for Parkinson's disease diagnosis, human activity recognition, heart disease prediction, compliance assessment with clinical protocols, epidemic management, neurological complications identification, fall prevention, leukemia diagnosis, and genetic clinical pathways. These studies demonstrate the potential of AI in improving medical diagnostics, patient monitoring, and personalized care. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
33. Multicore Parallelized Spatial Overlay Analysis Algorithm Using Vector Polygon Shape Complexity Index Optimization.
- Author
-
Fan, Junfu, Zuo, Jiwei, Sun, Guangwei, Shi, Zongwen, Gao, Yu, and Zhang, Yi
- Subjects
PARALLEL algorithms ,DIFFERENCE operators ,POLYGONS ,ALGORITHMS ,PARALLEL programming ,VECTOR data - Abstract
As core algorithms of geographic computing, overlay analysis algorithms typically have computation-intensive and data-intensive characteristics. It is highly important to optimize overlay analysis algorithms by parallelizing the vector polygons after reasonable data division. To address the problem of unbalanced data partitioning in the task decomposition process for parallel polygon overlay analysis and calculation, this paper presents a data partitioning method based on shape complexity index optimization, which achieves data equalization among multicore parallel computing tasks. Taking the intersection operator and difference operator of the Vatti algorithm as examples, six polygon shape indexes are selected to construct the shape complexity model, and the vector data are divided in accordance with the calculated shape complexity results. Finally, multicore parallelism is achieved based on OpenMP. The experimental results show that when a data set with a large amount of data is used, the effect of the multicore parallel execution of the Vatti algorithm's intersection operator and difference operator based on shape complexity division is clearly improved. With 16 threads, compared with the serial algorithm, speedups of 29 times and 32 times can be obtained. Compared with the traditional multicore parallel algorithm based on polygon number division, the speed can be improved by 33% and 29%, and the load balancing index is reduced. For a data set with a small amount of data, the acceleration effect of this method is similar to that of traditional methods involving multicore parallelism. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
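The data-division idea in the abstract above (balance the total shape complexity per thread rather than the polygon count) can be sketched with a greedy longest-task-first assignment. The scalar complexity score below stands in for the paper's six-index model, and the heuristic itself is an assumption:

```python
# Sketch of complexity-balanced task decomposition: each polygon gets a
# precomputed complexity score, and polygons are assigned, largest
# first, to the currently least-loaded thread (the classic LPT
# heuristic), so total complexity rather than count is balanced.

def partition_by_complexity(scores: list, n_threads: int):
    loads = [0.0] * n_threads
    buckets = [[] for _ in range(n_threads)]
    for idx in sorted(range(len(scores)), key=lambda i: -scores[i]):
        t = loads.index(min(loads))   # least-loaded thread so far
        buckets[t].append(idx)
        loads[t] += scores[idx]
    return buckets, loads
```

With equal-count division, one thread can be stuck with all the high-vertex polygons; balancing on the complexity score is what lets the paper's OpenMP version cut the load balancing index and gain roughly 30% over count-based division.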
34. Weather Radar High-Resolution Spectral Moment Estimation Using Bidirectional Extreme Learning Machine.
- Author
-
Zhongyuan Wang, Ling Qiao, Yu Jiang, Mingwei Shen, and Guodong Han
- Subjects
MACHINE learning ,POWER spectra ,RADAR meteorology ,PROBLEM solving ,ALGORITHMS - Abstract
Since the performance of the spectral moment estimation algorithm commonly used in engineering degrades under the conditions of low SNR, this paper introduces the Extreme Learning Machine (ELM) to the spectral moment estimation of weather signals based on the correlation of the signals of adjacent range cells. To solve the problem that the hidden layer nodes of ELM algorithm are difficult to be determined, the Bidirectional Extreme Learning Machine (B-ELM) algorithm is applied to achieve the high resolution of spectral moments. Firstly, to improve the SNR of the training samples, time-domain pulse signals are converted into weather power spectrum by Welch method. Then, the parameters of the B-ELM hidden layer nodes are directly calculated by backpropagation of network residuals. The model parameters are optimized according to the least-squares solution, where the optimal number of hidden layer nodes is determined adaptively. Finally, the optimized B-ELM model is employed for the spectral moment estimation of weather signals. The algorithm is validated to be fast and accurate for spectral moment estimation using the measured IDRA weather radar data and is easy to implement in engineering. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
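The Welch pre-processing step mentioned in the abstract above (converting time-domain pulses to an averaged power spectrum to raise training-sample SNR) can be sketched with NumPy as follows; the segment length, overlap, and Hann window are illustrative choices:

```python
import numpy as np

# Minimal Welch-style averaged periodogram: split the signal into
# overlapping Hann-windowed segments, average their periodograms.
# Averaging over segments reduces the variance of the spectral
# estimate, which is how Welch's method raises the effective SNR of
# the training samples.

def welch_psd(x: np.ndarray, nperseg: int = 64, overlap: float = 0.5):
    step = int(nperseg * (1 - overlap))
    win = np.hanning(nperseg)
    scale = np.sum(win ** 2)
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(win * s)) ** 2 / scale for s in segs],
                  axis=0)
    return np.fft.rfftfreq(nperseg), psd
```

A production version would use `scipy.signal.welch`, which implements the same scheme with more options; the hand-rolled form above just makes the segmentation and averaging explicit.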
35. Research on Microgrid Optimal Dispatching Based on a Multi-Strategy Optimization of Slime Mould Algorithm.
- Author
-
Zhang, Yi and Zhou, Yangkun
- Subjects
MICROGRIDS ,ELECTRIC power distribution grids ,SWARM intelligence ,ENERGY consumption ,WIND power ,ALGORITHMS - Abstract
In order to cope with the problems of energy shortage and environmental pollution, carbon emissions need to be reduced and so the structure of the power grid is constantly being optimized. Traditional centralized power networks are not as capable of controlling and distributing non-renewable energy as distributed power grids. Therefore, the optimal dispatch of microgrids faces increasing challenges. This paper proposes a multi-strategy fusion slime mould algorithm (MFSMA) to tackle the microgrid optimal dispatching problem. Traditional swarm intelligence algorithms suffer from slow convergence, low efficiency, and the risk of falling into local optima. The MFSMA employs reverse learning to enlarge the search space and avoid local optima to overcome these challenges. Furthermore, adaptive parameters ensure a thorough search during the algorithm iterations. The focus is on exploring the solution space in the early stages of the algorithm, while convergence is accelerated during the later stages to ensure efficiency and accuracy. The salp swarm algorithm's search mode is also incorporated to expedite convergence. MFSMA and other algorithms are compared on the benchmark functions, and the test showed that the effect of MFSMA is better. Simulation results demonstrate the superior performance of the MFSMA for function optimization, particularly in solving the 24 h microgrid optimal scheduling problem. This problem considers multiple energy sources such as wind turbines, photovoltaics, and energy storage. A microgrid model based on the MFSMA is established in this paper. Simulation of the proposed algorithm reveals its ability to enhance energy utilization efficiency, reduce total network costs, and minimize environmental pollution. The contributions of this paper are as follows: (1) A comprehensive microgrid dispatch model is proposed. (2) Environmental costs, operation and maintenance costs are taken into consideration. (3) Two modes of grid-tied operation and island operation are considered. (4) This paper uses a multi-strategy optimized slime mould algorithm to optimize scheduling, and the algorithm has excellent results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Survey on Machine Learning Biases and Mitigation Techniques.
- Author
-
Siddique, Sunzida, Haque, Mohd Ariful, George, Roy, Gupta, Kishor Datta, Gupta, Debashis, and Faruk, Md Jobair Hossain
- Subjects
MACHINE learning ,ALGORITHMS ,POLICY sciences ,BIAS (Law) ,MACHINE theory - Abstract
Machine learning (ML) has become increasingly prevalent in various domains. However, ML algorithms sometimes give unfair outcomes and discrimination against certain groups. Thereby, bias occurs when our results produce a decision that is systematically incorrect. At various phases of the ML pipeline, such as data collection, pre-processing, model selection, and evaluation, these biases appear. Bias reduction methods for ML have been suggested using a variety of techniques. By changing the data or the model itself, adding more fairness constraints, or both, these methods try to lessen bias. The best technique relies on the particular context and application because each technique has advantages and disadvantages. Therefore, in this paper, we present a comprehensive survey of bias mitigation techniques in machine learning (ML) with a focus on in-depth exploration of methods, including adversarial training. We examine the diverse types of bias that can afflict ML systems, elucidate current research trends, and address future challenges. Our discussion encompasses a detailed analysis of pre-processing, in-processing, and post-processing methods, including their respective pros and cons. Moreover, we go beyond qualitative assessments by quantifying the strategies for bias reduction and providing empirical evidence and performance metrics. This paper serves as an invaluable resource for researchers, practitioners, and policymakers seeking to navigate the intricate landscape of bias in ML, offering both a profound understanding of the issue and actionable insights for responsible and effective bias mitigation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
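The pre-processing family of mitigation methods surveyed in the abstract above can be illustrated with the classic Kamiran-Calders reweighing technique. This is a generic sketch, not a method from the survey itself, and the toy group/label data below is invented: each example is weighted so that group membership and label become statistically independent in the weighted sample.

```python
import numpy as np

def reweigh(groups, labels):
    """Kamiran-Calders reweighing: weight each (group, label) cell by
    expected/observed frequency so group and label become independent
    in the weighted data. A classic pre-processing bias mitigation."""
    groups, labels = np.asarray(groups), np.asarray(labels)
    w = np.empty(len(labels))
    for g in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == g) & (labels == y)
            expected = (groups == g).mean() * (labels == y).mean()
            w[mask] = expected / mask.mean()   # assumes every cell is non-empty
    return w

# Toy data: group 1 receives the positive label far more often than group 0.
groups = np.array([0] * 50 + [1] * 50)
labels = np.array([1] * 10 + [0] * 40 + [1] * 40 + [0] * 10)
w = reweigh(groups, labels)
```

After reweighing, the weighted positive rate is identical across groups, so a downstream learner trained with these sample weights no longer sees the group-label correlation.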
37. Research on 3D Face Reconstruction Algorithm Based on ResNet and Transformer.
- Author
-
Yaermaimaiti, Yilihamu, Yan, Tianxing, Zhao, Yuhang, and Kari, Tusongjiang
- Subjects
TRANSFORMER models ,ALGORITHMS ,FEATURE extraction - Abstract
In view of the high production cost, scarcity, and lack of diversity of 3D face datasets, this paper designs an end-to-end self-supervised 3D face reconstruction algorithm that takes a single 2D face image as input and uses only 2D face datasets for model training. First, an improved ResNet module is introduced to preprocess the input face image; the deep residual network's strong feature extraction and representation ability provides rich high-level semantic feature maps for the subsequent subnetworks. Then, a Transformer module based entirely on the self-attention mechanism is added to the parameter-prediction subnetwork, letting each parameter branch attend to the feature-map information relevant to it while suppressing interference from irrelevant features, thereby further improving parameter prediction accuracy. Next, training, ablation, and comparison experiments were conducted on the CelebA, BFM, and Photoface datasets, with a combination of pixel loss and perceptual loss selected as the loss function. The experimental results show that, compared with the best previous results for the same network structure, the scale-invariant depth error (SIDE) and mean angle deviation (MAD) improve by 5.9% and 10.8%, respectively, demonstrating the effectiveness of the algorithm. Finally, to verify the practical effect of the 3D face reconstruction algorithm, example images are reconstructed; the generated 3D faces all look realistic, which intuitively and effectively demonstrates the algorithm's advancement. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. A Dynamic Task Scheduling Algorithm for Airborne Device Clouds.
- Author
-
Deng, Bao and Zhai, Zhengjun
- Subjects
ALGORITHMS ,GENETIC algorithms ,SCHEDULING ,GENETIC models ,DATA transmission systems ,CLOUD computing ,WIRELESS Internet - Abstract
The rapid development of the mobile Internet has driven the rise of cloud computing. Mobile terminal devices can greatly expand their service capacity by migrating complex computing tasks to the cloud. However, the data exchange between mobile terminals and cloud computing centers both consumes the terminals' limited battery power and lengthens communication time, which degrades user QoE. A mobile cloud can effectively improve user QoE by shortening the data transmission distance, reducing power consumption and communication time at once. In this paper, we exploit the genetic algorithm's ability to search globally for an optimal solution and construct a dynamic task scheduling model over the device-cloud link. Comparison experiments between the genetic-algorithm-based scheduling model and a random scheduling algorithm show that the genetic-algorithm-based model shortens task assignment time by 11.82% to 48.51% and reduces energy consumption by 22.28% to 47.52% under different load conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
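The genetic-algorithm side of a local-vs-cloud scheduler like the one in the abstract above can be sketched in a few lines. This is a minimal illustration, not the authors' model: the task costs, fitness weights, and GA parameters below are all invented. A chromosome encodes, per task, whether it runs locally (0) or in the cloud (1); fitness combines total time and energy.

```python
import random

# Hypothetical per-task costs: (local_time, local_energy, cloud_time, cloud_energy).
TASKS = [(4.0, 3.0, 1.5, 1.0), (2.0, 1.0, 2.5, 0.8),
         (6.0, 5.0, 2.0, 1.2), (3.0, 2.0, 1.0, 0.9)]

def fitness(chrom):
    """Lower is better: equal-weight sum of total time and total energy."""
    t = sum(TASKS[i][2] if g else TASKS[i][0] for i, g in enumerate(chrom))
    e = sum(TASKS[i][3] if g else TASKS[i][1] for i, g in enumerate(chrom))
    return 0.5 * t + 0.5 * e

def evolve(pop_size=20, gens=50, pmut=0.1, seed=0):
    rng = random.Random(seed)
    n = len(TASKS)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < pmut:            # bit-flip mutation
                i = rng.randrange(n)
                child[i] ^= 1
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
```

Because the elite half is carried over unchanged each generation, the best fitness never worsens, which is what lets such a scheduler reliably beat random assignment.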
39. LM-DeeplabV3+: A Lightweight Image Segmentation Algorithm Based on Multi-Scale Feature Interaction.
- Author
-
Hou, Xinyu, Chen, Peng, and Gu, Haishuo
- Subjects
IMAGE segmentation ,DEEP learning ,COMPUTER vision ,ALGORITHMS - Abstract
Street-view images can help us better understand the city environment and its latent characteristics. With the development of computer vision and deep learning, semantic segmentation algorithms have matured considerably. However, DeeplabV3+, which is commonly used for semantic segmentation, has shortcomings such as a large number of parameters, high computing resource requirements, and easy loss of detailed information. This paper therefore proposes LM-DeeplabV3+, which aims to greatly reduce the parameters and computations of the model while preserving segmentation accuracy. Firstly, the lightweight network MobileNetV2 is selected as the backbone, and the ECA attention mechanism is introduced after MobileNetV2 extracts shallow features to improve feature representation; secondly, the ASPP module is improved, and the EPSA attention mechanism is introduced on top of it to achieve cross-dimensional channel attention and important feature interaction; thirdly, a loss function named CL loss is designed to balance the training offsets of multiple categories and better indicate segmentation quality. Experiments on the Cityscapes dataset show that the mIoU reached 74.9%, an improvement of 3.56% over DeeplabV3+, and the mPA reached 83.01%, an improvement of 2.53% over DeeplabV3+. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. A Remote Sensing Image Target Detection Algorithm Based on Improved YOLOv8.
- Author
-
Wang, Haoyu, Yang, Haitao, Chen, Hang, Wang, Jinyu, Zhou, Xixuan, and Xu, Yifan
- Subjects
REMOTE sensing ,ALGORITHMS ,REMOTE-sensing images - Abstract
To address the characteristics of remote sensing images, such as complex backgrounds, large numbers of small targets, and widely varying target scales, this paper presents a remote sensing image target detection algorithm based on an improved YOLOv8. First, to extract more information about small targets, we add an extra small-target detection layer to the backbone network; second, we propose a C2f-E structure based on the Efficient Multi-Scale Attention Module (EMA) to enhance the network's ability to detect targets of different sizes; and lastly, Wise-IoU replaces the CIoU loss function of the original algorithm to improve the model's robustness. Using the improved algorithm to detect the multiple target categories of the DOTAv1.0 dataset, the mAP@0.5 value is 82.7%, 1.3% higher than that of the original YOLOv8, showing that the proposed algorithm effectively improves target detection accuracy in remote sensing images. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. GS-AGC: An Adaptive Glare Suppression Algorithm Based on Regional Brightness Perception.
- Author
-
Li, Pei, Wei, Wangjuan, Pan, Xiaoying, Wang, Hao, and Mu, Yuanzhen
- Subjects
ALGORITHMS ,IMAGE intensifiers ,IMAGE processing ,PEDESTRIANS ,HUMAN fingerprints - Abstract
Existing algorithms for enhancing low-light images predominantly focus on the low-light region, which leads to over-enhancement of the glare region, and their high complexity makes them difficult to deploy on embedded devices. In this paper, a GS-AGC algorithm based on regional luminance perception is proposed that accounts for the indirect way the human eye perceives luminance. All pixels of similar luminance that satisfy a luminance region are extracted, and adaptive adjustment is performed separately for the different luminance regions of a low-light image. The proposed method was evaluated experimentally on real images, with objective evidence showing that its processing quality surpasses that of comparable methods. Furthermore, the practical value of GS-AGC was demonstrated by applying it to road pedestrian detection and face detection. The algorithm not only effectively suppresses glare but also enhances overall image quality, and it can readily be combined with embedded FPGA hardware for accelerated, real-time image processing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. A Convex Combination–Variable-Step-Size Least Mean p -Norm Algorithm.
- Author
-
Zhu, Boyu, Wang, Biao, Cai, Banggui, and Zhu, Yunan
- Subjects
CHANNEL estimation ,ALGORITHMS ,GAUSSIAN function ,PROBLEM solving ,ADAPTIVE filters - Abstract
Underwater acoustic channels often face interference from impulsive noise, which is usually modeled by an α-stable distribution in simulation experiments. To solve the problem of underwater acoustic channel estimation under impulsive noise, this paper proposes a convex combination-variable-step-size least mean p-norm algorithm. The algorithm incorporates a convex combination into the variable-step-size least mean p-norm algorithm, convexly combining the different convergence domains obtained by changing the parameters of the Gaussian function to further improve performance after convergence. Channel estimation simulations show that the convex combination-variable-step-size least mean p-norm algorithm provides a more stable, robust, and universal solution than the variable-step-size least mean p-norm algorithm alone. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
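The two building blocks named in the abstract above, the least mean p-norm update and the convex combination of two adaptive filters, can be sketched as follows. This is a generic toy, not the authors' variable-step-size scheme: the channel taps, step sizes, noise model, and mixing step size are all invented, and heavy-tailed Student-t noise stands in for the α-stable model.

```python
import numpy as np

def lmp_step(w, x, d, mu, p):
    """One least-mean-p-norm (LMP) update: stochastic gradient on E|e|^p,
    which down-weights impulsive errors when p < 2."""
    e = d - w @ x
    return w + mu * p * np.abs(e) ** (p - 1) * np.sign(e) * x, e

rng = np.random.default_rng(0)
h = np.array([0.8, -0.4, 0.2, 0.1])            # unknown channel to identify
taps, p, N = 4, 1.2, 4000
w_fast, w_slow, a = np.zeros(taps), np.zeros(taps), 0.0

x_sig = rng.standard_normal(N)
for n in range(taps, N):
    x = x_sig[n - taps:n][::-1]                # regressor (most recent first)
    d = h @ x + 0.05 * rng.standard_t(df=1.5)  # heavy-tailed "impulsive" noise
    y_f, y_s = w_fast @ x, w_slow @ x
    lam = 1.0 / (1.0 + np.exp(-a))             # convex mixing weight in (0, 1)
    e_c = d - (lam * y_f + (1 - lam) * y_s)    # combined-filter error
    w_fast, _ = lmp_step(w_fast, x, d, 0.05, p)    # fast convergence, noisier
    w_slow, _ = lmp_step(w_slow, x, d, 0.005, p)   # slow, low steady-state error
    a += 0.5 * e_c * (y_f - y_s) * lam * (1 - lam)  # adapt the mixture
    a = float(np.clip(a, -10, 10))             # keep the sigmoid well-behaved

w_comb = lam * w_fast + (1 - lam) * w_slow     # combined channel estimate
```

The mixing parameter is learned by gradient descent on the combined error, so the combination automatically leans on whichever component filter is currently performing better.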
43. Numeric Validation of the Inversion Model of Electrical Resistivity Imaging Method using the Levenberg-Marquardt Algorithm.
- Author
-
Tuan Anh Nguyen
- Subjects
STRUCTURAL health monitoring ,MODEL validation ,ALGORITHMS ,ELECTRICAL resistivity ,COMPUTER simulation ,MAGNETOTELLURICS - Abstract
This paper introduces a new application of the Electrical Resistivity Imaging (ERI) method within the realm of structural assessment, deviating from its conventional use in geology. The study presents an innovative inversion model that incorporates the Levenberg-Marquardt algorithm, representing a notable leap in seamlessly integrating ERI into structural analysis. Rigorous validation of the inversion methodology is conducted through extensive benchmarking against simulated reference data, focusing on 1D and 2D resistivity distributions within timber specimens. By utilizing known resistivity fields, the paper quantitatively validates the accuracy of reconstructed models obtained through numerical simulations. Notably, both longitudinal and transverse surveys exhibit exceptional outcomes, showing a high correlation with the actual resistivity profiles and converging within just 10 to 13 iterations. This validation process underscores the effectiveness and precision of the proposed inversion approach. Beyond its scientific contribution, this research expands the conventional boundaries of ERI application and establishes it as a valuable tool for structural monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
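The Levenberg-Marquardt iteration at the heart of such an inversion can be sketched generically. This is a toy curve fit, not the paper's resistivity forward model: the damping schedule and the exponential test function are arbitrary choices for illustration.

```python
import numpy as np

def levenberg_marquardt(f, jac, p0, x, y, lam=1e-2, iters=50):
    """Minimal Levenberg-Marquardt loop: the damping factor lam blends
    Gauss-Newton (small lam) with gradient descent (large lam)."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = y - f(x, p)                        # residuals
        J = jac(x, p)
        A = J.T @ J + lam * np.eye(len(p))     # damped normal equations
        step = np.linalg.solve(A, J.T @ r)
        if np.sum((y - f(x, p + step)) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5       # accept: trust Gauss-Newton more
        else:
            lam *= 2.0                         # reject: damp more heavily
    return p

# Toy model y = a * exp(b * x), fitted from a deliberately wrong start.
f = lambda x, p: p[0] * np.exp(p[1] * x)
jac = lambda x, p: np.column_stack([np.exp(p[1] * x),
                                    p[0] * x * np.exp(p[1] * x)])
x = np.linspace(0, 1, 30)
y = f(x, [2.0, -1.5])                          # noiseless synthetic measurements
p_hat = levenberg_marquardt(f, jac, [1.0, 0.0], x, y)
```

The accept/reject rule on the squared residual is what makes the method robust far from the solution while retaining near-quadratic convergence close to it, consistent with the small iteration counts reported in the abstract.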
44. A Study of Entity Relationship Extraction Algorithms Based on Symmetric Interaction between Data, Models, and Inference Algorithms.
- Author
-
Feng, Ping, Su, Nannan, Xing, Jiamian, Bian, Jing, and Ouyang, Dantong
- Subjects
MACHINE learning ,ALGORITHMS ,CHINESE language ,WORD recognition ,SEMANTICS - Abstract
The purpose of this paper is to address the extraction of entities and relationships from unstructured Chinese text, with particular emphasis on the challenges of Named Entity Recognition (NER) and Relation Extraction (RE). We do so by integrating external lexical information and exploiting the rich semantic information available in Chinese. We adopt a pipeline model that handles NER and RE separately, introducing an innovative NER model that integrates Chinese pinyin, characters, and words to enhance recognition. For relation extraction, we incorporate information such as entity distance, sentence length, and part-of-speech tags to improve performance. We also examine the interactions among data, models, and inference algorithms to improve learning efficiency. In comparison to existing methods, our model achieves significant improvements. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Dynamic Positioning Control of Large Ships in Rough Sea Based on an Improved Closed-Loop Gain Shaping Algorithm.
- Author
-
Song, Chunyu, Guo, Teer, Sui, Jianghua, and Zhang, Xianku
- Subjects
OCEAN conditions (Weather) ,WIND pressure ,SHIPS ,ALGORITHMS ,MATHEMATICAL decoupling - Abstract
In order to solve the problem of dynamic positioning control of large ships in rough seas and to meet the needs of fixed-point operations, this paper proposes a dynamic positioning controller that can achieve fixed-point control of large ships in Level 9 sea states (Beaufort force 10 winds). To achieve a better control effect, a large ship's forward motion is decoupled to establish a mathematical model of the headwind stationary state. The closed-loop gain shaping algorithm is then combined with the exact feedback linearization algorithm to design the speed controller and the course-keeping controller. This effectively counters the strong external disturbances acting on the control system in rough seas while guaranteeing the comprehensive robustness index. Three large ships, the "Mariner", "Taian kou", and "Galaxy", are selected as research objects for simulation, and the final positioning error is less than 10 m. The method is shown to be safe, feasible, practical, and effective, providing technical support for the design and development of intelligent marine equipment for rough seas. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. A Novel Zero-Velocity Interval Detection Algorithm for a Pedestrian Navigation System with Foot-Mounted Inertial Sensors.
- Author
-
Wang, Xiaotao, Li, Jiacheng, Xu, Guangfei, and Wang, Xingyu
- Subjects
INERTIAL navigation systems ,PEDESTRIANS ,HUMAN mechanics ,MOTION ,ALGORITHMS ,RUNNING speed ,DETECTORS ,WALKING speed - Abstract
The zero-velocity update (ZUPT) algorithm is a pivotal advancement for pedestrian navigation accuracy using foot-mounted inertial sensors. Its accuracy hinges on correctly identifying periods of zero velocity during human movement. This paper introduces an adaptive sliding-window technique that leverages the Fourier Transform to isolate the pedestrian's gait frequency from spectral data. Building on this, the algorithm adaptively adjusts the zero-velocity detection threshold in accordance with the identified gait frequency, significantly refining the detection of zero-velocity intervals. Experimental evaluations show that this method outperforms traditional fixed-threshold approaches, enhancing precision and minimizing false positives. Single-step estimation experiments show the algorithm adapts to motion states such as slow walking, fast walking, and running. Pedestrian trajectory localization experiments under a variety of walking conditions further confirm that the proposed method substantially improves the performance of the ZUPT algorithm, highlighting its potential for pedestrian navigation systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
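The adaptive-window idea from the abstract above can be sketched on a synthetic one-axis signal. This is not the authors' detector or data: the sample rate, the toy swing/stance waveform, the window fraction, and the energy threshold are all assumptions made for illustration. The sketch estimates the gait frequency from the spectrum, sizes a sliding window from the gait period, and thresholds the windowed energy relative to overall motion intensity.

```python
import numpy as np

fs = 100.0                                  # assumed IMU sample rate (Hz)
t = np.arange(0, 10, 1 / fs)
gait_hz = 2.0                               # true step frequency of the toy signal
# Synthetic accelerometer magnitude: oscillation during swing, quiet in stance.
swing = (np.sin(2 * np.pi * gait_hz * t) > 0).astype(float)
rng = np.random.default_rng(1)
acc = 3.0 * swing * np.sin(2 * np.pi * 10 * t) + 0.05 * rng.standard_normal(t.size)

# 1. Estimate gait frequency from the spectrum of the signal energy.
power = acc ** 2
spec = np.abs(np.fft.rfft(power - power.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_gait = freqs[1:][np.argmax(spec[1:])]     # skip the DC bin

# 2. Size the sliding window from the gait period, and set the threshold
#    from the overall motion intensity instead of fixing it a priori.
win = max(1, int(fs / f_gait / 4))          # quarter of a gait period
energy = np.convolve(power, np.ones(win) / win, mode="same")
zero_vel = energy < 0.5 * energy.mean()     # stance-phase candidate samples
```

Because both the window length and the threshold are derived from the measured signal, the same code adapts to slow, fast, or running gaits without retuning, which is the property the paper's fixed-threshold baselines lack.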
47. Mathematically Improved XGBoost Algorithm for Truck Hoisting Detection in Container Unloading.
- Author
-
Wu, Nian, Hu, Wenshan, Liu, Guo-Ping, and Lei, Zhongcheng
- Subjects
LOADING & unloading ,TRUCKS ,ALGORITHMS ,WEATHER ,TRUCK loading & unloading ,MATHEMATICAL models ,SHIPPING containers - Abstract
Truck hoisting detection is a key concern in port security for which no optimal solution has yet been identified. To address the high cost, weather sensitivity, and low accuracy of conventional truck hoisting detection methods, a non-intrusive detection approach is proposed in this paper that combines a mathematical model with an extreme gradient boosting (XGBoost) model. Electrical signals, including voltage and current collected by Hall sensors, are processed by the mathematical model, which augments their physical information. The dataset filtered by the mathematical model is then used to train the XGBoost model, enabling it to identify abnormal hoists effectively. Experiments conducted at several stations show an overall false positive rate below 0.7% with no false negatives, demonstrating that the proposed approach can reduce the cost and improve the accuracy of container hoisting detection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. IIR Shelving Filter, Support Vector Machine and k-Nearest Neighbors Algorithm Application for Voltage Transients and Short-Duration RMS Variations Analysis.
- Author
-
Liubčuk, Vladislav, Kairaitis, Gediminas, Radziukynas, Virginijus, and Naujokaitis, Darius
- Subjects
SUPPORT vector machines ,K-nearest neighbor classification ,LITERATURE reviews ,VOLTAGE ,ALGORITHMS - Abstract
This paper focuses on both voltage transients and short-duration RMS variations, presenting a heterogeneous approach to their assessment using AI tools. The database consists of both real data (from Lithuanian PQ monitoring campaigns) and synthetic data (from simulation and a literature review). First, the paper investigates filtering of the fundamental grid component and its harmonics with an IIR shelving filter. Second, in the key part, both SVM and KNN are used to classify PQ events by their primary cause in the voltage-duration plane, and by short-circuit type in the three-dimensional voltage space. Third, because results are difficult to interpret in three-dimensional space, a new method based on the Clarke transformation is developed to project them into two-dimensional space; the method performs outstandingly while avoiding the loss of important information. In addition, a geometric analysis of fault voltages in both two- and three-dimensional spaces reveals geometric patterns that are clearly important for PQ classification. Finally, based on the results of a PQ monitoring campaign in the Lithuanian distribution grid, the paper discusses PQ assessment gaps that must be closed in anticipation of a great leap forward, and relates them to PQ legislation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
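The Clarke transformation used for the dimensionality reduction described above is standard and compact. The sketch below uses the power-invariant form on a balanced synthetic three-phase set; the paper's exact variant and scaling may differ.

```python
import numpy as np

# Power-invariant Clarke transform: maps three-phase quantities (a, b, c)
# onto the two-dimensional alpha-beta plane, discarding the zero-sequence
# component. A balanced three-phase set traces a circle in this plane,
# which is what makes the projection useful for fault classification.
CLARKE = np.sqrt(2.0 / 3.0) * np.array([
    [1.0, -0.5, -0.5],
    [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2],
])

def to_alpha_beta(v_abc):
    """Project per-sample (a, b, c) columns onto the alpha-beta plane."""
    return CLARKE @ v_abc

t = np.linspace(0, 0.04, 400, endpoint=False)  # two 50 Hz cycles
w = 2 * np.pi * 50
v = np.vstack([np.cos(w * t),
               np.cos(w * t - 2 * np.pi / 3),
               np.cos(w * t + 2 * np.pi / 3)])  # balanced three-phase set
ab = to_alpha_beta(v)
```

For this balanced unit-amplitude set, the alpha-beta trajectory is a circle of radius sqrt(3/2); asymmetrical faults distort the circle into an ellipse or line, so the 2D shape carries the classification-relevant geometry.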
49. A multi-layer composite identification scheme of cryptographic algorithm based on hybrid random forest and logistic regression model.
- Author
-
Yuan, Ke, Huang, Yabing, Du, Zhanfei, Li, Jiabao, and Jia, Chunfu
- Subjects
RANDOM forest algorithms ,LOGISTIC regression analysis ,REGRESSION analysis ,ALGORITHMS ,DECISION trees ,IDENTIFICATION ,RSA algorithm - Abstract
Cryptographic technology can effectively defend sensitive and private information against malicious attackers. Its core is the cryptographic algorithm, and identifying that algorithm is the prerequisite for in-depth cryptanalysis: when analyzing an unknown cipher, the primary task is to identify which cryptographic algorithm was used for encryption before a targeted analysis can be carried out. With the rapid growth of Internet data, the increasing complexity of communication environments, and the growing number of cryptographic algorithms, single-layer identification schemes face great challenges in identification ability and stability. To solve these problems, building on existing identification schemes, this paper proposes a new cluster division scheme, CMSSBAM-cluster, and then a multi-layer composite identification scheme for cryptographic algorithms with a composite structure. The scheme combines cluster division and single division to identify a variety of cryptographic algorithms. Based on the idea of ensembling, it trains a hybrid random forest and logistic regression (HRFLR) model on a dataset of 1700 ciphertext files encrypted by 17 cryptographic algorithms. Two further ensemble models, a hybrid gradient boosting decision tree and logistic regression (HGBDTLR) model and a hybrid k-nearest neighbors and random forest (HKNNRF) model, are used as controls. The experimental results show that the multi-layer composite identification scheme based on the HRFLR model achieves an accuracy close to 100% in the cluster division stage, and its identification results exceed those of the other two models in both the cluster division and single division stages.
In the last layer of cluster division, the identification accuracy for the ECB and CBC encryption modes of block ciphers exceeds that of the other two classification models by 35.2% and 36.1%, respectively. In single division, the identification accuracy is up to 9.8% higher than HGBDTLR and up to 7.5% higher than HKNNRF. The proposed scheme also significantly improves on direct single-division identification over all 17 cryptosystems, and far exceeds the 5.9% accuracy of random 17-way classification, indicating that the multi-layer composite identification scheme based on the HRFLR model has significant accuracy advantages in identifying multiple cryptographic algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Autonomous localized path planning algorithm for UAVs based on TD3 strategy.
- Author
-
Feiyu, Zhao, Dayan, Li, Zhengxu, Wang, Jianlin, Mao, and Niya, Wang
- Subjects
DRONE aircraft ,ALGORITHMS ,PROBLEM solving - Abstract
Unmanned Aerial Vehicles are useful tools for many applications. However, autonomous path planning for Unmanned Aerial Vehicles in unfamiliar environments is challenging, facing problems such as poor consistency and strong dependence on the vehicle's native controller. In this paper, we investigate reinforcement-learning-based autonomous local path planning methods for Unmanned Aerial Vehicles with strong autonomous decision-making capability and high local portability. We propose an autonomous local path planning algorithm based on the TD3 strategy to solve local obstacle avoidance and path planning in unfamiliar environments through the vehicle's own decision-making. Simulation results in Gazebo show that our method effectively accomplishes the autonomous local path planning task: the path planning success rate reaches 93% without obstacles and 92% in environments with obstacles. Our method can therefore be used for autonomous path planning of Unmanned Aerial Vehicles in unfamiliar environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF