72,851 results
Search Results
2. Special issue "Discrete optimization: Theory, algorithms and new applications".
- Author
-
Werner, Frank
- Subjects
MATHEMATICAL optimization, METAHEURISTIC algorithms, ONLINE algorithms, LINEAR matrix inequalities, ALGORITHMS, ROBUST stability analysis, NONLINEAR integral equations
- Abstract
This document is an editorial for a special issue of the journal AIMS Mathematics on the topic of discrete optimization. The issue includes 21 papers covering a range of subjects, including molecular trees, network systems, variational inequality problems, scheduling, image restoration, spectral clustering, integral equations, convex functions, graph products, optimization algorithms, air quality prediction, humanitarian planning, inertial methods, neural networks, transportation problems, emotion identification, fixed-point problems, structural engineering design, single machine scheduling, and ensemble learning. The papers present new theoretical results, algorithms, and applications in these areas. The guest editor expresses gratitude to the journal staff and reviewers and hopes that readers will find inspiration for their own research. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
3. Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits.
- Author
-
Limaye, Nutan, Srinivasan, Srikanth, and Tavenas, Sébastien
- Subjects
ALGEBRA, POLYNOMIALS, CIRCUIT complexity, ALGORITHMS, DIRECTED acyclic graphs, LOGIC circuits
- Abstract
An Algebraic Circuit for a multivariate polynomial P is a computational model for constructing the polynomial P using only additions and multiplications. It is a syntactic model of computation, as opposed to the Boolean Circuit model, and hence lower bounds for this model are widely expected to be easier to prove than lower bounds for Boolean circuits. Despite this, we do not have superpolynomial lower bounds against general algebraic circuits of depth 3 (except over constant-sized finite fields) and depth 4 (over any field other than F₂), while constant-depth Boolean circuit lower bounds have been known since the early 1980s. In this paper, we prove the first superpolynomial lower bounds against algebraic circuits of all constant depths over all fields of characteristic 0. We also observe that our superpolynomial lower bound for constant-depth circuits implies the first deterministic sub-exponential time algorithm for solving the Polynomial Identity Testing (PIT) problem for all small-depth circuits using the known connection between algebraic hardness and randomness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
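The hardness-to-randomness connection mentioned in the abstract above derandomizes the classical randomized PIT baseline. For readers unfamiliar with that baseline, here is a minimal illustrative sketch of Schwartz-Zippel-style randomized identity testing (not from the paper; the example circuits, field size, and trial count are invented):

```python
import random

def random_pit(circuit_a, circuit_b, num_vars, trials=20, field_size=10**9 + 7):
    """Randomized polynomial identity test (Schwartz-Zippel style).

    Declares the two circuits identical if they agree on random points;
    per trial, the error probability is at most degree/field_size.
    """
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(num_vars)]
        if circuit_a(point) % field_size != circuit_b(point) % field_size:
            return False  # a disagreeing point is a certificate of difference
    return True  # probably identical

# (x + y)^2 versus x^2 + 2xy + y^2: identical polynomials
same = random_pit(lambda p: (p[0] + p[1]) ** 2,
                  lambda p: p[0] ** 2 + 2 * p[0] * p[1] + p[1] ** 2, 2)
# (x + y)^2 versus x^2 + y^2: differ whenever 2xy is nonzero
diff = random_pit(lambda p: (p[0] + p[1]) ** 2,
                  lambda p: p[0] ** 2 + p[1] ** 2, 2)
```

The paper's contribution is a *deterministic* sub-exponential algorithm replacing the coin flips above for small-depth circuits.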
4. The Space Complexity of Consensus from Swap.
- Author
-
Ovens, Sean
- Subjects
ALGORITHMS, GENERALIZATION
- Abstract
Nearly thirty years ago, it was shown that Ω(√n) read/write registers are needed to solve randomized wait-free consensus among n processes. This lower bound was improved to n registers in 2018, which exactly matches known algorithms. The Ω(√n) space complexity lower bound actually applies to a class of objects called historyless objects, which includes registers, test-and-set objects, and readable swap objects. However, every known n-process obstruction-free consensus algorithm from historyless objects uses Ω(n) objects. In this paper, we give the first Ω(n) space complexity lower bounds on consensus algorithms for two kinds of historyless objects. First, we show that any obstruction-free consensus algorithm from swap objects uses at least n − 1 objects. More generally, we prove that any obstruction-free k-set agreement algorithm from swap objects uses at least ⌈n/k⌉ − 1 objects. The k-set agreement problem is a generalization of consensus in which processes agree on no more than k different output values. This is the first non-constant lower bound on the space complexity of solving k-set agreement with swap objects when k > 1. We also present an obstruction-free k-set agreement algorithm from n − k swap objects, which exactly matches our lower bound when k = 1. Second, we show that any obstruction-free binary consensus algorithm from readable swap objects with domain size b uses at least (n − 2)/(3b + 1) objects. When b is a constant, this asymptotically matches the best known obstruction-free consensus algorithms from readable swap objects with unbounded domains. Since any historyless object can be simulated by a readable swap object with the same domain, our results imply that any obstruction-free consensus algorithm from historyless objects with domain size b uses at least (n − 2)/(3b + 1) objects. For b = 2, we show a slightly better lower bound of n − 2. There is an obstruction-free binary consensus algorithm using 2n − 1 readable swap objects with domain size 2, asymptotically matching our lower bound. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Digitalized Control Algorithm of Bridgeless Totem-Pole PFC with a Simple Control Structure Based on the Phase Angle.
- Author
-
Lee, Gi-Young, Park, Hae-Chan, Ji, Min-Woo, and Kim, Rae-Young
- Subjects
ELECTRIC current rectifiers, ELECTRONIC paper, PHASE-locked loops, ALGORITHMS, ANGLES, VOLTAGE
- Abstract
Compared to the conventional boost power factor correction (PFC) converter, a totem-pole bridgeless PFC has high efficiency because it does not have an input diode rectifier stage, but a current spike may occur when the polarity of the grid voltage changes. This paper proposes a digital control algorithm for bridgeless totem-pole PFC with a simple control structure based on the phase angle of grid voltage. The proposed algorithm has a PI-based double-loop control structure and performs DC-link voltage and input inductor current control. Rectifying switches operate based on the proposed rectification algorithm using phase angle information calculated through a single-phase phase-locked loop (PLL) to prevent current spikes. The feed-forward duty ratio value is calculated according to the polarity of the grid voltage and added to the double-loop controller to perform appropriate power factor control. The performance and feasibility of the proposed control algorithm are verified through a 3 kW hardware prototype. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
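The PI-based double-loop structure with a polarity-dependent feed-forward duty, as described in the abstract above, can be sketched in a few lines. This is an illustrative toy, not the authors' digital controller: the gains, the feed-forward formula, and all signal names are assumptions.

```python
import math

class PI:
    """Discrete PI controller (illustrative gains, not from the paper)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0
    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

def totem_pole_duty(v_dc_ref, v_dc, i_l, theta, v_grid_peak, v_dc_nom,
                    v_loop, i_loop):
    """Double-loop control sketch: the outer DC-link voltage PI sets the
    inductor current reference; the inner current PI plus a
    polarity-dependent feed-forward term sets the switch duty ratio."""
    # current reference shaped by the (PLL-derived) phase angle theta
    i_ref = v_loop.step(v_dc_ref - v_dc) * abs(math.sin(theta))
    v_grid = v_grid_peak * math.sin(theta)
    # feed-forward: ideal boost duty for the present grid polarity
    d_ff = 1.0 - abs(v_grid) / v_dc_nom
    duty = i_loop.step(i_ref - i_l) + d_ff
    return min(max(duty, 0.0), 1.0)  # clamp to the physical range
```

The real controller would run this at the switching frequency, with theta supplied by the single-phase PLL so that the rectifying switches commutate only at polarity changes.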
6. Determining the Moho topography using an improved inversion algorithm: a case study from the South China Sea.
- Author
-
Zhang, Hui, Yu, Hangtao, Xu, Chuang, Li, Rui, Bie, Lu, He, Qingyin, Liu, Yiqi, Lu, Jinsong, Xiao, Yinan, Lyu, Yang, Eldosouky, Ahmed M., and Loureiro, Afonso
- Subjects
MOHOROVICIC discontinuity, OPTIMIZATION algorithms, TOPOGRAPHY, ALGORITHMS
- Abstract
The Parker-Oldenburg method, as a classical frequency-domain algorithm, has been widely used in Moho topographic inversion. The method has two indispensable hyperparameters: the Moho density contrast and the average Moho depth. Accurate hyperparameters are important prerequisites for the inversion of a fine Moho topography. However, limited by the nonlinear terms, the hyperparameters estimated by previous methods have obvious deviations. For this reason, this paper proposes a new method that improves the existing Parker-Oldenburg method by taking advantage of the invasive weed optimization algorithm for estimating the hyperparameters. The synthetic test results of the new method show that, compared with the trial-and-error method and the linear regression method, the new method estimates the hyperparameters more accurately and with excellent computational efficiency, which lays the foundation for the inversion of a more accurate Moho topography. In practice, the method is applied to Moho topographic inversion in the South China Sea. With the constraints of available seismic data, the crust-mantle density contrast and the average Moho depth in the South China Sea are determined to be 0.535 g/cm³ and 21.63 km, respectively, and the Moho topography of the South China Sea is inverted on this basis. The results show that the Moho depth in the study area ranges from 5.7 km to 32.3 km, with obvious undulations. The shallowest part of the Moho topography is mainly located in the southern part of the Southwestern sub-basin and the southern part of the Manila Trench, with a depth of about 6 km. Compared with the CRUST 1.0 model and the model calculated by the improved Bott's method, the RMS difference between the Moho model of this paper and the seismic points is smaller, which shows that the method has some advantages in Moho topographic inversion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
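Invasive weed optimization, which the abstract above uses to estimate the two hyperparameters, is easy to sketch: fitter weeds spawn more seeds, scattered with a standard deviation that shrinks over the iterations. The following is a generic minimal version on a toy 2-D objective; the population sizes, seed counts, and dispersion schedule are invented, and the real method would instead minimize a gravity misfit over (density contrast, average depth).

```python
import random

def iwo_minimize(f, bounds, pop=10, max_pop=25, iters=80,
                 seeds_min=1, seeds_max=5, sigma_init=1.0, sigma_final=0.01,
                 rng=random.Random(0)):
    """Minimal invasive weed optimization: reproduction proportional to
    fitness, shrinking spatial dispersal, competitive exclusion."""
    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    weeds = [clip([rng.uniform(lo, hi) for lo, hi in bounds]) for _ in range(pop)]
    for it in range(iters):
        # nonlinearly decreasing standard deviation of seed dispersal
        sigma = sigma_final + (sigma_init - sigma_final) * (1 - it / iters) ** 2
        costs = [f(w) for w in weeds]
        best, worst = min(costs), max(costs)
        offspring = []
        for w, c in zip(weeds, costs):
            # linear seed allocation: best weed gets seeds_max, worst seeds_min
            frac = 0.0 if worst == best else (worst - c) / (worst - best)
            n_seeds = seeds_min + round(frac * (seeds_max - seeds_min))
            for _ in range(n_seeds):
                offspring.append(clip([v + rng.gauss(0, sigma) for v in w]))
        # competitive exclusion: keep only the fittest individuals
        weeds = sorted(weeds + offspring, key=f)[:max_pop]
    return weeds[0]

best = iwo_minimize(lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2,
                    [(-5, 5), (-5, 5)])
```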
7. A fully-automated paper ECG digitisation algorithm using deep learning.
- Author
-
Wu, Huiyi, Patel, Kiran Haresh Kumar, Li, Xinyang, Zhang, Bowen, Galazis, Christoforos, Bajaj, Nikesh, Sau, Arunashis, Shi, Xili, Sun, Lin, Tao, Yanda, Al-Qaysi, Harith, Tarusan, Lawrence, Yasmin, Najira, Grewal, Natasha, Kapoor, Gaurika, Waks, Jonathan W., Kramer, Daniel B., Peters, Nicholas S., and Ng, Fu Siong
- Subjects
DEEP learning, ELECTROCARDIOGRAPHY, ELECTRONIC paper, ATRIAL fibrillation, ALGORITHMS, HEART failure, HEART rate monitors
- Abstract
There is increasing focus on applying deep learning methods to electrocardiograms (ECGs), with recent studies showing that neural networks (NNs) can predict future heart failure or atrial fibrillation from the ECG alone. However, large numbers of ECGs are needed to train NNs, and many ECGs currently exist only in paper format, which is not suitable for NN training. We developed a fully-automated online ECG digitisation tool to convert scanned paper ECGs into digital signals. Using automated horizontal and vertical anchor point detection, the algorithm automatically segments the ECG image into separate images for the 12 leads, and a dynamical morphological algorithm is then applied to extract the signal of interest. We then validated the performance of the algorithm on 515 digital ECGs, of which 45 were printed, scanned and redigitised. The automated digitisation tool achieved 99.0% correlation between the digitised signals and the ground truth ECG (n = 515 standard 3-by-4 ECGs) after excluding ECGs with overlap of lead signals. Without exclusion, the average correlation ranged from 90 to 97% across the leads on all 3-by-4 ECGs. There was a 97% correlation for 12-by-1 and 3-by-1 ECG formats after excluding ECGs with overlap of lead signals. Without exclusion, the average correlation of some leads in 12-by-1 ECGs was 60–70%, and the average correlation of 3-by-1 ECGs reached 80–90%. For ECGs that were printed, scanned, and redigitised, our tool achieved 96% correlation with the original signals. We have developed and validated a fully-automated, user-friendly, online ECG digitisation tool. Unlike other available tools, it does not require any manual segmentation of ECG signals. Our tool can facilitate the rapid and automated digitisation of large repositories of paper ECGs to allow them to be used for deep learning projects. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
8. Research on 3D point cloud alignment algorithm based on SHOT features.
- Author
-
Fu, Zheng, Zhang, Enzhong, Sun, Ruiyang, Zang, Jiaran, and Zhang, Wei
- Subjects
POINT cloud, ALGORITHMS, FEATURE extraction
- Abstract
To overcome the traditional Iterative Closest Point (ICP) algorithm's demanding requirement on the initial position of the point cloud, in this paper we propose a point cloud registration method based on normal vector and directional histogram (SHOT) features. Firstly, a hybrid filtering method based on the voxel idea is proposed and verified using measured point cloud data, obtaining noise removal rates of 97.5%, 97.8%, and 93.8%. Secondly, for feature point extraction, the original algorithm is optimized so that it can better extract features from incomplete parts of the point cloud. Finally, a fine alignment method based on normal vector and directional histogram (SHOT) features is proposed, and the improved algorithm is compared with existing algorithms. Taking the Stanford University point cloud data and the self-measured point cloud data as examples, it can be concluded from the plotted iteration–error curves that the improved method reduces the number of iterations by 40.23% and 37.62%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
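As background for the abstract above, the core ICP loop being improved can be illustrated with a toy translation-only 2-D version using brute-force nearest neighbours. This is an invented simplification: the real algorithm also estimates rotation and replaces raw nearest-neighbour matching with SHOT feature correspondences.

```python
def icp_translation(source, target, iters=20):
    """Toy ICP: translation-only alignment of 2D point sets via
    nearest-neighbour correspondences (brute force)."""
    tx, ty = 0.0, 0.0
    for _ in range(iters):
        moved = [(x + tx, y + ty) for x, y in source]
        # nearest target point for each moved source point
        pairs = []
        for mx, my in moved:
            nx, ny = min(target, key=lambda t: (t[0] - mx) ** 2 + (t[1] - my) ** 2)
            pairs.append(((mx, my), (nx, ny)))
        # the mean residual is the optimal translation update for these pairs
        dx = sum(n[0] - m[0] for m, n in pairs) / len(pairs)
        dy = sum(n[1] - m[1] for m, n in pairs) / len(pairs)
        tx, ty = tx + dx, ty + dy
        if abs(dx) < 1e-9 and abs(dy) < 1e-9:
            break  # converged
    return tx, ty

target = [(0, 0), (1, 0), (0, 1), (2, 2)]
source = [(x - 0.3, y + 0.2) for x, y in target]  # the target, shifted
tx, ty = icp_translation(source, target)
```

With a good initial guess the correspondences are correct and the loop converges in one step; with a poor one, nearest neighbours mismatch, which is exactly the weakness the SHOT-feature coarse alignment addresses.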
9. Study on tiered storage algorithm based on heat correlation of astronomical data.
- Author
-
Ye, Xin-Chen, Zhang, Hai-Long, Wang, Jie, Zhang, Ya-Zhou, Du, Xu, Wu, Han, and Riccio, Giuseppe
- Subjects
RADIO telescopes, GEODETIC astronomy, PULSAR detection, ELECTRONIC data processing, ALGORITHMS, CLOUD storage
- Abstract
With the surge in astronomical data volume, modern astronomical research faces significant challenges in data storage, processing, and access. The I/O bottleneck in astronomical data processing is particularly prominent, limiting the efficiency of data processing. To address this issue, this paper proposes a tiered storage algorithm based on the access characteristics of astronomical data. The C4.5 decision tree algorithm is employed as the foundation to implement an astronomical data access correlation algorithm. Additionally, a data copy migration strategy is designed based on tiered storage technology to achieve efficient data access. Preprocessing tests were conducted on 418 GB of NSRT (Nanshan Radio Telescope) formaldehyde spectral line data, showing that tiered storage can reduce data processing time by up to 38.15%. Similarly, in pulsar search data processing tests using 802.2 GB of data from FAST (Five-hundred-meter Aperture Spherical radio Telescope) observations, the tiered storage approach demonstrated a maximum reduction of 29.00% in data processing time. In concurrent testing of data processing workflows, the proposed astronomical data heat correlation algorithm achieved an average reduction of 17.78% in data processing time compared to centralized storage. Furthermore, in comparison to traditional heat algorithms, it reduced data processing time by 5.15%. The effectiveness of the proposed algorithm is positively correlated with the degree of association between the algorithm and the processed data. The tiered storage algorithm based on the characteristics of astronomical data proposed in this paper is poised to provide an algorithmic reference for large-scale data processing in the field of astronomy in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
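A heat-correlation migration decision of the kind the abstract above describes can be sketched as follows. This is an invented toy: the file names, the linear heat formula, and the correlation weights are all assumptions, and the paper's actual correlation algorithm is built on a C4.5 decision tree rather than a fixed formula.

```python
def plan_migration(access_counts, correlations, hot_capacity):
    """Heat-correlation migration sketch: a file's heat is its own access
    count plus a weighted share of the heat of correlated files; the
    hottest files (up to capacity) are promoted to the fast tier."""
    heat = {}
    for f, count in access_counts.items():
        related = correlations.get(f, {})
        heat[f] = count + sum(w * access_counts.get(g, 0) for g, w in related.items())
    ranked = sorted(heat, key=heat.get, reverse=True)
    return ranked[:hot_capacity]  # files to place on the fast tier

hot = plan_migration(
    access_counts={"obs_a.fits": 40, "obs_b.fits": 5, "cal.fits": 2},
    correlations={"cal.fits": {"obs_a.fits": 0.9}},  # cal is accessed with obs_a
    hot_capacity=2)
```

Note how the rarely accessed calibration file is promoted anyway because its accesses are correlated with a hot observation file; that is the point of heat *correlation* over raw heat.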
10. Research on fabric surface defect detection algorithm based on improved Yolo_v4.
- Author
-
Li, Yuanyuan, Song, Liyuan, Cai, Yin, Fang, Zhijun, and Tang, Ming
- Subjects
SURFACE defects, FEATURE extraction, ALGORITHMS, INDUSTRIAL sites, TEXTILES, PROBLEM solving
- Abstract
In industry, defect classification and defect localization are important parts of a defect detection system. However, existing studies focus on only one of these tasks, and it is difficult to ensure the accuracy of both. This paper proposes a defect detection system based on an improved Yolo_v4, which greatly improves the detection ability for minor defects. To address the strong subjectivity of the K-Means algorithm when clustering prior anchors, the paper proposes using the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm to determine the number of anchors. To solve the problem of a low detection rate for small targets caused by the insufficient reuse rate of low-level features in the CSPDarknet53 feature extraction network, this paper proposes an ECA-DenseNet-BC-121 feature extraction network to improve it. The Dual Channel Feature Enhancement (DCFE) module is also proposed to mitigate the local information loss and gradient propagation obstruction caused by quad chain convolution in PANet networks and thereby improve the robustness of the model. The experimental results on fabric surface defect detection datasets show that the mAP of the improved Yolo_v4 is 98.97%, which is 7.67% higher than SSD, 3.75% higher than Faster_RCNN, 10.82% higher than Yolo_v4 tiny, and 5.35% higher than Yolo_v4, and the detection speed reaches 39.4 fps. It can meet the real-time monitoring needs of industrial sites. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
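The DBSCAN-based anchor counting mentioned in the abstract above can be illustrated with a minimal, self-contained DBSCAN over ground-truth box sizes, where the number of clusters found becomes the anchor count. The eps/min_pts values and the toy boxes are invented; a real pipeline would run over the full dataset of annotation boxes.

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: clusters (w, h) box sizes so the resulting number
    of clusters can serve as the anchor count (no subjective choice of k)."""
    n = len(points)
    labels = [None] * n  # None = unvisited, -1 = noise, >= 0 = cluster id
    def neighbours(i):
        return [j for j in range(n)
                if sum((a - b) ** 2 for a, b in zip(points[i], points[j])) <= eps ** 2]
    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1  # noise (may later be claimed as a border point)
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbours(j)
            if len(more) >= min_pts:  # core point: keep expanding
                queue.extend(more)
        cluster += 1
    return labels, cluster

boxes = [(10, 12), (11, 11), (10, 10), (50, 52), (51, 50), (52, 51)]
labels, n_anchors = dbscan(boxes, eps=5, min_pts=2)  # two size groups -> 2 anchors
```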
11. Face Verification Algorithms for UAV Applications: An Empirical Comparative Analysis.
- Author
-
Diez-Tomillo, Julio, Alcaraz-Calero, Jose M., and Qi Wang
- Subjects
RESCUE work, ALGORITHMS, PUBLIC safety, COMPUTER vision, PUBLIC administration, DRONE aircraft
- Abstract
Unmanned Aerial Vehicles (UAVs) are revolutionising diverse computer vision use case domains, from public safety surveillance to Search and Rescue (SAR), and other emergency management and disaster relief operations. The growing need for accurate face verification algorithms has prompted an exploration of synergies between UAVs and face verification. This promises cost-effective, wide-area, non-intrusive person verification. Real-world human-centric use cases such as a "Drone Guard Angel" for vulnerable people can contribute to public safety management and offload significant police resources. These scenarios demand efficient face verification to correctly distinguish the end users for authentication, authorisation and customised services. This paper investigates the suitability of existing solutions, and analyses five state-of-the-art candidate face verification algorithms. Informed by the advantages and disadvantages of existing solutions, the paper proposes an extended dataset and a refined face verification pipeline. Subsequently, it conducts empirical evaluation of these algorithms using the proposed pipeline and dataset in terms of inference times and the distribution of the similarity indexes. Furthermore, this paper provides essential guidance for algorithm selection and deployment in UAV-based applications. Two candidate algorithms, ArcFace and FaceNet512, have emerged as the top performers. The choice between them will depend on the specific use case requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
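Face verification with embedding models such as ArcFace or FaceNet512 ultimately reduces to thresholding a similarity index between two embedding vectors. A minimal sketch of that final decision step; the threshold value and the toy three-dimensional vectors are invented (real embeddings have hundreds of dimensions, and thresholds are tuned per model):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(embedding_probe, embedding_enrolled, threshold=0.6):
    """Accept the identity claim when the similarity of the probe and the
    enrolled face embedding exceeds a tuned threshold."""
    return cosine_similarity(embedding_probe, embedding_enrolled) >= threshold
```

The paper's "distribution of the similarity indexes" is, in these terms, the histogram of `cosine_similarity` values for genuine versus impostor pairs, from which the threshold is chosen.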
12. Combining Improved Meanshift and Adaptive Shi-Tomasi Algorithms for a Photovoltaic Panel Segmentation Strategy.
- Author
-
Huang, Chao, Chao, Xuewei, Zhou, Weiji, and Gong, Lijiao
- Subjects
IMAGE segmentation, ALGORITHMS
- Abstract
To achieve effective and accurate segmentation of photovoltaic panels in various working contexts, this paper proposes a comprehensive image segmentation strategy that integrates an improved Meanshift algorithm and an adaptive Shi-Tomasi algorithm. This approach effectively addresses the challenge of low precision in segmenting target regions and boundary contours in routine photovoltaic panel inspection. Firstly, based on the image information of photovoltaic panels collected under different environments by cameras, an improved Meanshift algorithm based on platform histogram optimization is used for preliminary processing, and images containing target information are cut out; then, the adaptive Shi-Tomasi algorithm is used to extract and screen feature points from the target area; finally, the extracted feature points generate the segmentation contour of the target photovoltaic panel, achieving accurate segmentation of the target area and boundary contour of the photovoltaic panel. Experiments verified that in photovoltaic panel images under different background environments, the method proposed in this paper enhances the accuracy of segmenting the target area and boundary contour of photovoltaic panels. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
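The Meanshift step that the abstract above improves is mode seeking: repeatedly shift an estimate to the mean of nearby samples until it stabilizes on a density peak. A 1-D flat-kernel toy (data and bandwidth are invented; the paper's variant operates on image data and is driven by a plateau-histogram optimization):

```python
def mean_shift_mode(data, start, bandwidth, iters=50):
    """Mean shift with a flat kernel: move the estimate to the mean of the
    samples inside the bandwidth window until it stops moving."""
    x = start
    for _ in range(iters):
        window = [v for v in data if abs(v - x) <= bandwidth]
        if not window:
            break
        new_x = sum(window) / len(window)
        if abs(new_x - x) < 1e-9:
            break  # converged onto a local density mode
        x = new_x
    return x

samples = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9]
mode = mean_shift_mode(samples, start=0.5, bandwidth=1.0)  # climbs to ~1.05
```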
13. Maneuvering Decision Making Based on Cloud Modeling Algorithm for UAV Evasion–Pursuit Game.
- Author
-
Huang, Hanqiao, Weng, Weiye, Zhou, Huan, Jiang, Zijian, and Dong, Yue
- Subjects
MANEUVERING boards, DECISION making, DRONE aircraft, ALGORITHMS
- Abstract
In the aerial pursuit game, most current unmanned aerial vehicles (UAVs) have good maneuverability, but it is difficult to utilize their overload maneuverability properly; moreover, UAVs tend to be costly, and it is often difficult to effectively prevent the enemy from reaching the tailgating position behind the UAV. There is therefore a pressing need for a maneuvering algorithm that allows a UAV in a disadvantageous position to quickly protect itself, to select maneuvers stably and effectively, and to establish an advantage by moving to an advantageous position. To this end, this paper establishes a cloud model-based UAV maneuvering decision-making model for the aerial pursuit game based on pursuit-and-evasion game positions. Based on the position evaluation, when the UAV is at a disadvantage, the constructed defensive maneuver expert pool is used to abandon the disadvantageous position; when the UAV is at an advantage, cloud model-based pursuit-and-evasion game maneuvering decision making is used to establish an advantageous position. The results of the simulation examples confirm that with the maneuvering decision-making method designed in this paper the UAV can quickly abandon its position and establish an advantage in case of parity or disadvantage, and can also stably establish a tail-chasing position in case of advantage. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
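The cloud model underlying the decision-making method can be sketched by the standard one-dimensional forward normal cloud generator, which turns a qualitative concept described by the triple (Ex, En, He) — expectation, entropy, hyper-entropy — into "cloud drops" with membership degrees. The numeric values below are invented; the paper applies the model to maneuver evaluation, not to this toy concept.

```python
import math
import random

def normal_cloud_drops(ex, en, he, n, seed=42):
    """Forward normal cloud generator: each drop first perturbs the
    entropy (spread) by He, then samples a value at that spread; this
    per-drop variation in spread is what gives the model its fuzziness."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_i = abs(rng.gauss(en, he)) + 1e-12  # per-drop entropy
        x = rng.gauss(ex, en_i)
        # membership degree of the drop in the qualitative concept
        mu = math.exp(-((x - ex) ** 2) / (2 * en_i ** 2))
        drops.append((x, mu))
    return drops

drops = normal_cloud_drops(ex=0.0, en=1.0, he=0.1, n=2000)
mean_x = sum(x for x, _ in drops) / len(drops)
```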
14. Taming Algorithmic Priority Inversion in Mission-Critical Perception Pipelines.
- Author
-
Liu, Shengzhong, Yao, Shuochao, Fu, Xinzhe, Tabish, Rohan, Yu, Simon, Bansal, Ayoosh, Yun, Heechul, Sha, Lui, and Abdelzaher, Tarek
- Subjects
ALGORITHMS, SYSTEMS design, CYBER physical systems, COMPUTER scheduling, ARTIFICIAL intelligence, ARTIFICIAL neural networks, FIRST in, first out (Queuing theory)
- Abstract
The paper discusses algorithmic priority inversion in mission-critical machine inference pipelines used in modern neural-network-based perception subsystems and describes a solution to mitigate its effect. In general, priority inversion occurs in computing systems when computations that are "less important" are performed together with or ahead of those that are "more important." Significant priority inversion occurs in existing machine inference pipelines when they do not differentiate between critical and less critical data. We describe a framework to resolve this problem and demonstrate that it improves a perception system's ability to react to critical inputs, while at the same time reducing platform cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
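The core remedy for algorithmic priority inversion — processing "more important" data ahead of "less important" data rather than in FIFO arrival order — can be sketched with two queues. The frame labels and the strict two-level policy are invented simplifications of the framework the abstract describes:

```python
from collections import deque

def schedule(frames):
    """Criticality-aware scheduling sketch: critical regions are processed
    before any non-critical region, instead of FIFO order."""
    critical, background = deque(), deque()
    for frame_id, is_critical in frames:
        (critical if is_critical else background).append(frame_id)
    order = []
    while critical or background:
        queue = critical if critical else background  # strict priority
        order.append(queue.popleft())
    return order

# FIFO would yield f1, f2, f3, f4; criticality-aware scheduling promotes f2 and f4
order = schedule([("f1", False), ("f2", True), ("f3", False), ("f4", True)])
```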
15. Utilizing tables, figures, charts and graphs to enhance the readability of a research paper.
- Author
-
Divecha C. A., Tullu M. S., and Karande S.
- Subjects
GRAPHIC arts, READABILITY (Literary style), SERIAL publications, RESEARCH methodology, COPYRIGHT, MEDICAL research, ALGORITHMS
- Abstract
The authors offer observations on utilizing tables, figures, charts and graphs to present research in a simple manner while also engaging and sustaining the reader's interest. Topics discussed include the benefits provided by the use of tables/figures/charts/graphs, the general methodology of design and submission, and copyright issues when using material from government publications/the public domain.
- Published
- 2023
- Full Text
- View/download PDF
16. A novel automatic annotation method for whole slide pathological images combined clustering and edge detection technique.
- Author
-
Ding, Wei‐long, Liao, Wan‐yin, Zhu, Xiao‐jie, and Zhu, Hong‐bo
- Subjects
SUPERVISED learning, DEEP learning, ANNOTATIONS, IMAGE processing, ALGORITHMS, PIXELS
- Abstract
Pixel‐level labeling of regions of interest in an image is a key step in building a labeled training dataset for supervised deep learning networks of images. However, traditional manual labeling of cancerous regions in digital pathological images by doctors is time‐consuming and inefficient. To address this issue, this paper proposes an automatic labeling method for whole slide images, which combines clustering and edge detection techniques. The proposed method utilizes a multi‐level feature fusion model and a Long Short-Term Memory network to discriminate the cancerous nature of whole slide images, thereby improving their classification accuracy. Subsequently, automatic labeling of cancerous regions is achieved by integrating a density‐based clustering algorithm and an edge point extraction algorithm, both based on the discriminated cancerous properties of the whole slide images. The experimental results demonstrate the effectiveness of the proposed method, which offers an efficient and accurate solution to the challenging task of labeling cancerous regions in digital pathological images. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. DESIGN OF SMART HOME SYSTEM BASED ON WIRELESS SENSOR NETWORK LINK STATUS AWARENESS ALGORITHM.
- Author
-
RONG XU
- Subjects
INTELLIGENT sensors, WIRELESS sensor networks, SMART homes, DOMESTIC architecture, ROUTING algorithms, ALGORITHMS
- Abstract
When wireless sensor networks are used in smart homes, the link state becomes unstable due to signal masking attenuation, causing a low packet delivery rate, high delay, and high cost in the network. In this paper, a network routing algorithm for wireless sensing based on link conditions is designed. The expected number of transmissions is proposed as a metric to evaluate link stability, and based on it the upcoming signal delivery conditions are forecast quickly and in real time. According to the estimated expected number of transmissions, the path is dynamically corrected to effectively avoid channel attenuation and achieve optimal system performance. Experimental results show that the proposed method can improve the efficiency of message sending and reduce the routing cost under masking conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
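The "expected number of transmissions" (ETX) link metric that the abstract above proposes for stability evaluation is conventionally computed from forward and reverse delivery ratios, with a path's cost being the sum over its links. A sketch of the metric and of ETX-based path selection; the topology and delivery ratios are invented:

```python
def etx(forward_delivery, reverse_delivery):
    """Expected transmission count of one link: 1 / (df * dr), i.e. the
    expected number of sends until both the packet and its ACK get through."""
    return 1.0 / (forward_delivery * reverse_delivery)

def best_path(paths, link_quality):
    """Pick the candidate path with the lowest summed ETX over its links."""
    def path_etx(path):
        return sum(etx(*link_quality[(a, b)]) for a, b in zip(path, path[1:]))
    return min(paths, key=path_etx)

link_quality = {  # (forward, reverse) delivery ratios per link
    ("A", "B"): (0.9, 0.9), ("B", "D"): (0.9, 0.9),
    ("A", "C"): (1.0, 1.0), ("C", "D"): (0.5, 0.5),
}
# the A-C-D path has one perfect link but one badly masked link (ETX 4),
# so the steadier A-B-D path wins
route = best_path([["A", "B", "D"], ["A", "C", "D"]], link_quality)
```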
18. Efficient load balancing Adaptive BNBKnapsack Algorithm for Edge computing to improve performance of network.
- Author
-
Nagle, Malti and Kumar, Prakash
- Subjects
NETWORK performance, EDGE computing, ALGORITHMS, LOAD balancing (Computer networks), ENERGY consumption, HOSPITALS, ROUTING algorithms
- Abstract
INTRODUCTION: In the present day, automation of everything has become essential, and the Internet of Things (IoT) plays an important role among the medical advances of IT. In this paper, feasible solutions are discussed to compare and design better healthcare systems. A thorough investigation and survey of suitable approaches was conducted for selecting IoT-based systems in hospitals consisting of various high-precision sensors. OBJECTIVES: The first challenge healthcare systems face is managing real-time patient data with high accuracy. The second challenge, at the fog-device level, is managing load distribution across all sensors with limited available bandwidth. METHODS: This paper summarizes the selection criteria for suitable load balancing algorithms that reduce the energy consumption and computational cost of fog devices and increase network usage in IoT-based healthcare systems. According to the survey, the Adaptive BNBKnapsack algorithm was selected as the most suitable approach for analyzing the overall performance of fog devices, and the results verify this choice. RESULTS: A comparative analysis of the overall performance of fog devices using the SJF algorithm and the Adaptive BNBKnapsack algorithm is presented. The analysis shows that Adaptive BNBKnapsack is the best among the compared load balancing algorithms, successfully reducing energy consumption by 99.29% and computational cost by 98.34% while increasing network usage by 99.95%. CONCLUSION: Analysis of system performance shows that Adaptive BNBKnapsack load balancing successfully reduces the computational cost and energy consumption and increases the network usage of the fog network; its performance is the best among the compared load balancing algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
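A knapsack-style load-balancing decision of the kind named in the abstract above can be illustrated with plain 0/1 knapsack dynamic programming; branch-and-bound (the "BNB" in BNBKnapsack) would find the same optimum with different search mechanics. The task names, costs, and values are invented:

```python
def knapsack_assign(tasks, capacity):
    """0/1 knapsack DP: choose the task subset that maximizes total value
    (e.g. network usage gained) without exceeding a fog node's capacity.
    tasks: list of (name, cost, value)."""
    # best[c] = (best value, chosen task names) within cost budget c
    best = [(0, ())] * (capacity + 1)
    for name, cost, value in tasks:
        for c in range(capacity, cost - 1, -1):  # descending: each task used once
            cand_value = best[c - cost][0] + value
            if cand_value > best[c][0]:
                best[c] = (cand_value, best[c - cost][1] + (name,))
    return best[capacity]

value, chosen = knapsack_assign(
    [("ecg", 3, 60), ("temp", 2, 30), ("video", 4, 50)], capacity=5)
```

Here the node skips the bulky video stream: the ECG and temperature tasks together fit the budget and yield more value.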
19. A novel differential evolution algorithm with multi-population and elites regeneration.
- Author
-
Cao, Yang and Luan, Jingzheng
- Subjects
DIFFERENTIAL evolution, EVOLUTIONARY algorithms, DISTRIBUTION (Probability theory), ALGORITHMS, GLOBAL optimization
- Abstract
Differential Evolution (DE) is widely recognized as a highly effective evolutionary algorithm for global optimization. It has proven its efficacy in tackling diverse problems across various fields and real-world applications. DE boasts several advantages, such as ease of implementation, reliability, speed, and adaptability. However, DE does have certain limitations, such as suboptimal solution exploitation and challenging parameter tuning. To address these challenges, this research paper introduces a novel algorithm called Enhanced Binary JADE (EBJADE), which combines differential evolution with multi-population and elites regeneration. The primary innovation of this paper lies in the introduction of a mutation strategy with enhanced exploitation capabilities. This strategy sorts three vectors from the current generation and uses them to perturb the target vector; the directional differences thus introduced guide the search towards improved solutions. Additionally, this study adopts a multi-population method with a rewarding subpopulation to dynamically adjust the allocation of two different mutation strategies. Finally, the paper incorporates the elite-sampling concept from the Estimation of Distribution Algorithm (EDA) to regenerate new solutions through the selection process in DE. Experimental results on the CEC2014 benchmark tests demonstrate the strong competitiveness and superior performance of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
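For readers new to DE, the baseline DE/rand/1/bin scheme that variants like EBJADE extend looks like this: mutate with the scaled difference of two random vectors, apply binomial crossover, and keep the trial only if it is no worse. This is a generic sketch on a toy sphere function; the population size, F, and CR are conventional defaults, not the paper's settings.

```python
import random

def de_minimize(f, bounds, pop_size=20, gens=100, F=0.6, CR=0.9,
                rng=random.Random(1)):
    """Classic DE/rand/1/bin: difference-vector mutation, binomial
    crossover, greedy one-to-one selection."""
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = f(trial)
            if tc <= cost[i]:  # greedy selection
                pop[i], cost[i] = trial, tc
        # (multi-population scheduling and elite regeneration would act here)
    return min(zip(cost, pop))

best_cost, best_x = de_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```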
20. Time–Frequency Signal Integrity Monitoring Algorithm Based on Temperature Compensation Frequency Bias Combination Model.
- Author
-
Guo, Yu, Li, Zongnan, Gong, Hang, Peng, Jing, and Ou, Gang
- Subjects
SIGNAL integrity (Electronics), TIME-frequency analysis, ATOMIC clocks, ARTIFICIAL satellites in navigation, ALGORITHMS, TIME measurements, X chromosome
- Abstract
To ensure the long-term stable and uninterrupted service of satellite navigation systems, the robustness and reliability of time–frequency systems are crucial, and integrity monitoring is an effective method to enhance them. Time–frequency signals are fundamental for integrity monitoring, with their time differences and frequency biases serving as essential indicators. These indicators are influenced by the inherent characteristics of the time–frequency signals as well as the links and equipment they traverse. Meanwhile, existing research focuses primarily on monitoring the integrity of the time–frequency signals output by the atomic clock group, neglecting the integrity monitoring of the time–frequency signals generated and distributed by the time–frequency signal generation and distribution subsystem. This paper introduces a time–frequency signal integrity monitoring algorithm based on a temperature compensation frequency bias combination model. By analyzing the characteristics of time difference measurements, constructing the temperature compensation frequency bias combination model, and extracting and monitoring noise and frequency bias features from the time difference measurements, the algorithm achieves comprehensive time–frequency signal integrity monitoring. Experimental results demonstrate that the algorithm can effectively detect, identify, and alert users to time–frequency signal faults. Additionally, the model and the integrity monitoring parameters developed in this paper exhibit high adaptability, making them directly applicable to the integrity monitoring of time–frequency signals across various links. Compared with traditional monitoring algorithms, the proposed algorithm greatly improves the effectiveness, adaptability, and real-time performance of time–frequency signal integrity monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. A novel improved total variation algorithm for the elimination of scratch-type defects in high-voltage cable cross-sections.
- Author
-
Yu, Aihua, Shan, Lina, Zhu, Wen, Jie, Jing, and Hou, Beiping
- Subjects
CABLES ,COMPUTER vision ,CROSS-sectional imaging ,IMAGE intensifiers ,ALGORITHMS ,PARTIAL discharges - Abstract
In the quality inspection process of high-voltage cables, several commonly used indicators include cable length, insulation thickness, and the number of conductors within the core. Among these factors, the count of conductors holds particular significance as a key determinant of cable quality. Machine vision technology has found extensive application in automatically detecting the number of conductors in cross-sectional images of high-voltage cables. However, the presence of scratch-type defects in cut high-voltage cable cross-sections can significantly compromise the precision of conductor count detection. To address this problem, this paper introduces a novel improved total variation (TV) algorithm, marking the first-ever application of the TV algorithm in this domain. Considering the staircase effect, the direct use of the TV algorithm is prone to cause serious loss of image edge information. The proposed algorithm firstly introduces multimodal features to effectively mitigate the staircase effect. While eliminating scratch-type defects, the algorithm endeavors to preserve the original image's edge information, consequently yielding a noteworthy enhancement in detection accuracy. Furthermore, a dataset was curated, comprising images of cross-sections of high-voltage cables of varying sizes, each displaying an assortment of scratch-type defects. Experimental findings conclusively demonstrate the algorithm's exceptional efficiency in eradicating diverse scratch-type defects within high-voltage cable cross-sections. The average scratch elimination rate surpasses 90%, with an impressive 96.15% achieved on cable sample 4. A series of conducted ablation experiments in this paper substantiate a significant enhancement in cable image quality. 
Notably, the Edge Preservation Index (EPI) exhibits an improvement of approximately 20%, resulting in a substantial boost to conductor count detection accuracy, thus effectively enhancing the quality of high-voltage cable production. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
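The total-variation smoothing that the improved algorithm builds on can be sketched in one dimension. This is only the classic gradient-descent TV formulation, not the authors' multimodal-feature variant, and every parameter value is illustrative:

```python
import math

def tv_denoise_1d(signal, lam=0.8, step=0.05, iters=500):
    """Minimize 0.5*||u - f||^2 + lam * sum_i |u[i+1] - u[i]| by plain
    gradient descent; |.| is smoothed with a tiny epsilon so the
    gradient exists everywhere."""
    eps = 1e-8
    u = list(signal)
    n = len(u)
    for _ in range(iters):
        grad = [u[i] - signal[i] for i in range(n)]   # data-fidelity term
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            g = d / math.sqrt(d * d + eps)            # smoothed sign(d)
            grad[i] -= lam * g                        # TV pulls neighbours together
            grad[i + 1] += lam * g
        for i in range(n):
            u[i] -= step * grad[i]
    return u

# A flat profile with one scratch-like spike: TV shrinks the spike
# while leaving the flat background almost untouched.
noisy = [1.0] * 10 + [5.0] + [1.0] * 10
clean = tv_denoise_1d(noisy)
```

In two dimensions the same energy is minimized over image gradients; the staircase effect mentioned in the abstract is exactly the tendency of the absolute-value penalty to produce piecewise-constant plateaus, which is what motivates the authors' edge-preserving modifications.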
22. An Improved Sorting Algorithm for Periodic PRI Signals Based on Congruence Transform.
- Author
-
Dong, Huixu, Ge, Yuanzheng, Zhou, Rui, and Wang, Hongyan
- Subjects
WAVELET transforms ,MATHEMATICAL decoupling ,ALGORITHMS ,SIGNALS & signaling - Abstract
Recently, a signal sorting algorithm based on the congruence transform has been proposed, which is effective in dealing with staggered Pulse Repetition Interval (PRI) signals. It can effectively sort staggered PRI signals and obtain the sub-PRI sequence directly without sub-PRI ranking, and it is less affected by interfering pulses and pulse loss. Nevertheless, we find that the algorithm causes pseudo-peaks in the remainder histogram when sorting signals such as sliding PRI and sinusoidal PRI (collectively referred to as periodic PRI signals in this paper), and these pseudo-peaks cause errors in signal sorting. To solve the issue of pseudo-peaks when sorting periodic PRI signals, an improved sorting algorithm based on the congruence transform is proposed. Based on an analysis of the congruence characteristics of periodic PRI signals, a novel method is proposed to identify pseudo-peaks from the histogram peak amplitude and the symmetric difference set. The signal sorting algorithm based on the congruence transform is thereby improved to achieve a good sorting effect on periodic PRI signals. Simulation experiments demonstrate that the novel algorithm can effectively sort periodic PRI signals and improve Precall, Pd, and Pf by 6.9%, 5.1%, and 3.2%, respectively, compared to typical similar algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
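The congruence (remainder) transform underlying this family of sorters can be sketched as a remainder histogram. This shows only the basic transform for a two-level staggered train, not the paper's pseudo-peak identification via peak amplitudes and symmetric difference sets; all timing values are made up:

```python
from collections import Counter

def remainder_histogram(toas, modulus, bin_width=1.0):
    """Histogram of pulse times-of-arrival modulo a candidate frame
    period: for a staggered train whose sub-PRIs sum to `modulus`, the
    remainders pile up in one sharp peak per sub-PRI position."""
    bins = Counter()
    for t in toas:
        bins[int((t % modulus) // bin_width)] += 1
    return bins

# Two-level staggered train with sub-PRIs 30 and 70 (frame period 100):
# pulses arrive at 0, 30, 100, 130, 200, 230, ...
toas = []
t = 0.0
for k in range(50):
    toas.append(t)
    t += 30.0 if k % 2 == 0 else 70.0

hist = remainder_histogram(toas, modulus=100.0)
# All remainders fall into exactly two bins, 0 and 30, exposing the
# sub-PRI sequence directly without any ranking step.
```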
23. A Fast Detection Algorithm for Change Detection in National Forestland "One Map" Based on NLNE Quad-Tree.
- Author
-
Gao, Fei, Su, Xiaohui, Chen, Yuling, Wu, Baoguo, Tian, Yingze, Zhang, Wenjie, and Li, Tao
- Subjects
FORESTS & forestry ,FOREST management ,GEOGRAPHIC information systems ,VECTOR data ,MOUNTAIN forests ,ALGORITHMS - Abstract
The National Forestland "One Map" applies the boundaries and attributes of sub-elements to mountain plots by means of spatial data to achieve digital management of forest resources. The change detection and analysis of forest space and property is the key to determining the change characteristics, evolution trend and management effectiveness of forest land. The existing spatial overlay method, rasterization method, object matching method, etc., cannot meet the requirements of high efficiency and high precision at the same time. In this paper, we investigate a fast algorithm for the detection of changes in "One Map", taking Sichuan Province as an example. The key spatial characteristic extraction method is used to uniquely determine the sub-compartments. We construct an unbalanced quadtree based on the number of maximum leaf node elements (NLNE Quad-Tree) to narrow down the query range of the target sub-compartments and quickly locate the sub-compartments. Based on NLNE Quad-Tree, we establish a change detection model for "One Map" (NQT-FCDM). The results show that the spatial feature combination of barycentric coordinates and area can ensure the spatial uniqueness of 44.45 million sub-compartments in Sichuan Province with 1 m~0.000001 m precision. The NQT-FCDM constructed with 1000–6000 as the maximum number of leaf nodes has the best retrieval efficiency in the range of 100,000–500,000 sub-compartments. The NQT-FCDM shortens the time by about 75% compared with the traditional spatial union analysis method, shortens the time by about 50% compared with the normal quadtree and effectively solves the problem of generating a large amount of intermediate data in the spatial union analysis method. The NQT-FCDM proposed in this paper improves the efficiency of change detection in "One Map" and can be generalized to other industries applying geographic information systems to carry out change detection, providing a basis for the detection of changes in vector spatial data. 
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
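The leaf-capacity idea behind an NLNE-style quadtree can be sketched with a plain point quadtree whose leaves split only once they exceed a maximum element count; the sub-compartment spatial keys (barycentric coordinates plus area) and the change-detection model from the paper are not modeled here, and the capacity value is illustrative:

```python
class QuadTree:
    """Unbalanced point quadtree that splits a leaf only when it holds
    more than `max_leaf` points, keeping the tree shallow where data
    are sparse and fine where they are dense."""

    def __init__(self, x0, y0, x1, y1, max_leaf=4):
        self.bounds = (x0, y0, x1, y1)
        self.max_leaf = max_leaf
        self.points = []
        self.children = None            # four sub-quadrants once split

    def insert(self, p):
        if self.children is not None:
            self._child_for(p).insert(p)
            return
        self.points.append(p)
        if len(self.points) > self.max_leaf:
            self._split()

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [
            QuadTree(x0, y0, mx, my, self.max_leaf),
            QuadTree(mx, y0, x1, my, self.max_leaf),
            QuadTree(x0, my, mx, y1, self.max_leaf),
            QuadTree(mx, my, x1, y1, self.max_leaf),
        ]
        pts, self.points = self.points, []
        for q in pts:                   # redistribute into the quadrants
            self._child_for(q).insert(q)

    def _child_for(self, p):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        return self.children[(1 if p[0] >= mx else 0) + (2 if p[1] >= my else 0)]

    def leaf_of(self, p):
        """Descend to the leaf whose quadrant contains point p."""
        node = self
        while node.children is not None:
            node = node._child_for(p)
        return node

# Insert 30 spread-out points, then locate the leaf holding one of them.
tree = QuadTree(0, 0, 100, 100, max_leaf=4)
for i in range(30):
    tree.insert((i * 7 % 100, i * 13 % 100))
leaf = tree.leaf_of((7, 13))
```

Bounding the leaf population is what keeps point location (and hence sub-compartment lookup) fast: a query touches one short root-to-leaf path plus at most `max_leaf` candidate elements.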
24. Fast Decision-Tree-Based Series Partitioning and Mode Prediction Termination Algorithm for H.266/VVC.
- Author
-
Li, Ye, He, Zhihao, and Zhang, Qiuwen
- Subjects
VIDEO compression ,VIDEO coding ,TECHNOLOGICAL innovations ,ALGORITHMS ,MULTIMEDIA systems ,PARALLEL algorithms ,COMPUTATIONAL complexity ,DECISION trees ,RANDOM forest algorithms - Abstract
With the advancement of network technology, multimedia videos have emerged as a crucial channel for individuals to access external information, owing to their realistic and intuitive effects. In the presence of high frame rate and high dynamic range videos, the coding efficiency of high-efficiency video coding (HEVC) falls short of meeting the storage and transmission demands of the video content. Therefore, versatile video coding (VVC) introduces a nested quadtree plus multi-type tree (QTMT) segmentation structure based on the HEVC standard, while also expanding the intra-prediction modes from 35 to 67. While the new technology introduced by VVC has enhanced compression performance, it concurrently introduces a higher level of computational complexity. To enhance coding efficiency and diminish computational complexity, this paper explores two key aspects: coding unit (CU) partition decision-making and intra-frame mode selection. Firstly, to address the flexible partitioning structure of QTMT, we propose a decision-tree-based series partitioning decision algorithm for partitioning decisions. Through concatenating the quadtree (QT) partition division decision with the multi-type tree (MT) division decision, a strategy is implemented to determine whether to skip the MT division decision based on texture characteristics. If the MT partition decision is used, four decision tree classifiers are used to judge different partition types. Secondly, for intra-frame mode selection, this paper proposes an ensemble-learning-based algorithm for mode prediction termination. Through the reordering of complete candidate modes and the assessment of prediction accuracy, the termination of redundant candidate modes is accomplished. Experimental results show that compared with the VVC test model (VTM), the algorithm proposed in this paper achieves an average time saving of 54.74%, while the BDBR only increases by 1.61%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
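The "series" flavour of the partition decision, deciding cheaply whether the expensive MT search is even worth running, can be sketched with a single texture feature. The paper trains decision-tree classifiers on real coding features; the variance feature and thresholds below are illustrative stand-ins:

```python
def split_decision(block, qt_thresh=50.0, mt_thresh=10.0):
    """Toy cascaded partition decision in the spirit of a series
    QT-then-MT scheme: smooth blocks skip all splitting, mildly
    textured blocks evaluate only the QT split, and only richly
    textured blocks pay for the full MT search."""
    flat = [v for row in block for v in row]
    mean = sum(flat) / len(flat)
    var = sum((v - mean) ** 2 for v in flat) / len(flat)
    if var < mt_thresh:
        return "no_split"       # homogeneous: skip QT and MT decisions
    if var < qt_thresh:
        return "qt_only"        # moderate texture: try QT, skip MT
    return "qt_and_mt"          # rich texture: evaluate both

# Flat, mildly textured, and strongly textured 2x2 sample blocks.
decisions = [split_decision(b) for b in ([[8, 8], [8, 8]],
                                         [[0, 10], [0, 10]],
                                         [[0, 16], [0, 16]])]
```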
25. Global Maximum Power Point Tracking of Photovoltaic Module Arrays Based on an Improved Intelligent Bat Algorithm.
- Author
-
Chao, Kuei-Hsiang and Bau, Thi Thanh Truc
- Subjects
MAXIMUM power point trackers ,ALGORITHMS ,CLIMATE change ,VOLTAGE - Abstract
In this paper, a method based on an improved intelligent bat algorithm (IIBA) in cooperation with a voltage and current sensor was applied to maximum power point tracking (MPPT) for a photovoltaic module array (PVMA), enhancing the power generation performance of the PVMA. Due to partial shading of the PVMA caused by climate changes or the surrounding environment, multiple peak values are generated on the power–voltage (P-V) curve, and conventional MPPT technology can only track a local maximum power point (LMPP), hence reducing the output power of PVMAs. Therefore, the IIBA-based MPPT was proposed in this paper to solve such issues and to ensure that a PVMA can track the global maximum power point (GMPP), thereby enhancing its output power. Firstly, the Matlab/Simulink software was used to establish a boost converter model that simulated an actual 4-series–3-parallel PVMA under different shaded conditions, generating P-V curves with 1, 2, 3 and 4 peaks. Subsequently, the tracking paces of the conventional bat algorithm (BA) were adjusted according to the gradient of the P-V curve for a PVMA. At the same time, 0.8 times the maximum power point (MPP) voltage Vmp under standard test conditions (STCs) for a PVMA was set as the initial tracking voltage. Lastly, the simulation results proved that under different environmental impacts, the proposed IIBA achieved better tracking performance in both dynamic and steady-state responses. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
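The plain bat-algorithm loop that IIBA builds on can be sketched for a one-dimensional multimodal curve. The gradient-adapted tracking paces and the 0.8·Vmp initialization from the paper are not modeled, and all coefficients and the toy curve are illustrative:

```python
import math, random

def bat_search(f, lo, hi, n_bats=20, iters=200, seed=1):
    """Minimal 1-D bat-style global maximization: each bat moves toward
    the best-known position with a random frequency and sometimes takes
    a small random walk around it; moves are kept only if they improve."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_bats)]
    vel = [0.0] * n_bats
    best = max(pos, key=f)
    for _ in range(iters):
        for i in range(n_bats):
            freq = rng.uniform(0.0, 1.0)
            vel[i] += (best - pos[i]) * freq
            cand = min(hi, max(lo, pos[i] + vel[i]))
            if rng.random() < 0.5:                    # local walk near best
                cand = min(hi, max(lo, best + 0.1 * rng.uniform(-1.0, 1.0)))
            if f(cand) > f(pos[i]):                   # keep improvements only
                pos[i] = cand
                if f(cand) > f(best):
                    best = cand
    return best

# Two-peak stand-in for a partially shaded P-V curve: a local maximum
# near v = 2 and the global maximum near v = 7.
def power(v):
    return 3.0 * math.exp(-(v - 2.0) ** 2) + 5.0 * math.exp(-(v - 7.0) ** 2)

v_best = bat_search(power, 0.0, 10.0)
```

Because bats passing through the search range are captured whenever they land on a better operating point, the swarm can escape a local power peak that a hill-climbing MPPT loop would be stuck on.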
26. Partial Discharge Signal Denoising Algorithm Based on Aquila Optimizer–Variational Mode Decomposition and K-Singular Value Decomposition.
- Author
-
Zhong, Jun, Liu, Zhenyu, and Bi, Xiaowen
- Subjects
SIGNAL denoising ,PARTIAL discharges ,HILBERT-Huang transform ,ELECTRIC insulators & insulation ,ALGORITHMS - Abstract
Partial discharge (PD) is a primary factor leading to the deterioration of insulation in electrical equipment. However, it is hard for traditional methods to precisely extract PD signals in increasingly complex engineering environments. This paper proposes a new PD signal denoising method combining the Aquila Optimizer–Variational Mode Decomposition (AO-VMD) and K-Singular Value Decomposition (K-SVD) algorithms. Firstly, the AO algorithm optimizes critical parameters of the VMD algorithm. For a PD signal overwhelmed by noise, the AO-VMD algorithm can decompose it and reconstruct it using kurtosis. In this process, the majority of the noise is removed, and the characteristics of the original signal emerge. Subsequently, the K-SVD algorithm performs sparse decomposition on the signal after AO-VMD, constructs a learned dictionary, and captures the characteristics of the signal for continuous learning and updating. After the dictionary learning is completed, the best matching atoms from the dictionary are selected to precisely reconstruct the original noiseless signal. Finally, the proposed method is compared with three traditional algorithms, Adaptive Ensemble Empirical Mode Decomposition (AEEMD), SVD-VMD, and the Adaptive Wavelet Multilevel Soft Threshold algorithm, on both a simulated signal and an actual engineering signal. The results on both signals demonstrate that the algorithm proposed in this paper has superior noise reduction and signal extraction performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
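The kurtosis-guided reconstruction step can be sketched independently of the AO/VMD machinery: given a set of modes (the hand-made lists below stand in for VMD output), keep the impulsive, heavy-tailed ones and sum them back into the signal. The threshold is illustrative:

```python
import math

def kurtosis(x):
    """Plain (non-excess) kurtosis: about 3 for Gaussian noise, large
    for impulsive, PD-like content."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    if var == 0:
        return 0.0
    return sum((v - m) ** 4 for v in x) / (n * var * var)

def reconstruct_by_kurtosis(modes, threshold=3.0):
    """Keep only modes whose kurtosis exceeds the threshold and sum
    them sample-wise into the cleaned signal."""
    kept = [m for m in modes if kurtosis(m) > threshold]
    n = len(modes[0])
    return [sum(m[i] for m in kept) for i in range(n)]

spiky = [0.0] * 19 + [10.0]                                 # impulsive, PD-like mode
hum = [math.sin(2 * math.pi * i / 20) for i in range(20)]   # smooth interference mode
clean = reconstruct_by_kurtosis([spiky, hum])
```

A pure sinusoid has kurtosis 1.5 and Gaussian noise about 3, while an isolated impulse scores far higher, which is why kurtosis works as a cheap "does this mode carry discharge energy?" test.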
27. A Novel IDS with a Dynamic Access Control Algorithm to Detect and Defend Intrusion at IoT Nodes.
- Author
-
Alazab, Moutaz, Awajan, Albara, Alazzam, Hadeel, Wedyan, Mohammad, Alshawi, Bandar, and Alturki, Ryan
- Subjects
INTRUSION detection systems (Computer security) ,ACCESS control ,INTERNET of things ,ALGORITHMS ,FALSE alarms ,MATHEMATICAL analysis - Abstract
The Internet of Things (IoT) is the underlying technology that has enabled connecting daily apparatus to the Internet and enjoying the facilities of smart services. The IoT market is experiencing an impressive 16.7% growth rate and is worth nearly USD 300.3 billion. These eye-catching figures have made it an attractive playground for cybercriminals. IoT devices are built using resource-constrained architecture to offer compact sizes and competitive prices. As a result, integrating sophisticated cybersecurity features is beyond the scope of the computational capabilities of IoT. All of this has contributed to a surge in IoT intrusion. This paper presents an LSTM-based Intrusion Detection System (IDS) with a Dynamic Access Control (DAC) algorithm that not only detects but also defends against intrusion. This novel approach has achieved an impressive 97.16% validation accuracy. Unlike most IDSs, the model of the proposed IDS has been selected and optimized through mathematical analysis. Additionally, it boasts the ability to identify a wider range of threats (14 to be exact) compared to other IDS solutions, translating to enhanced security. Furthermore, it has been fine-tuned to strike a balance between accurately flagging threats and minimizing false alarms. Its impressive performance metrics (precision, recall, and F1 score all hovering around 97%) showcase the potential of this innovative IDS to elevate IoT security. The proposed IDS boasts an impressive detection rate, exceeding 98%. This high accuracy instills confidence in its reliability. Furthermore, its fast response time, averaging under 1.2 s, positions it among the fastest intrusion detection systems available. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. A scalable blockchain based framework for efficient IoT data management using lightweight consensus.
- Author
-
Haque, Ehtisham Ul, Shah, Adil, Iqbal, Jawaid, Ullah, Syed Sajid, Alroobaea, Roobaea, and Hussain, Saddam
- Subjects
DATA management ,INTERNET of things ,NETWORK performance ,BLOCKCHAINS ,SCALABILITY ,ALGORITHMS - Abstract
Recent research has focused on applying blockchain technology to solve security-related problems in Internet of Things (IoT) networks. However, the inherent scalability issues of blockchain technology become apparent in the presence of a vast number of IoT devices and the substantial data generated by these networks. Therefore, in this paper, we use a lightweight consensus algorithm to cater to these problems. We propose a scalable blockchain-based framework for managing IoT data, catering to a large number of devices. This framework utilizes the Delegated Proof of Stake (DPoS) consensus algorithm to ensure enhanced performance and efficiency in resource-constrained IoT networks. DPoS being a lightweight consensus algorithm leverages a selected number of elected delegates to validate and confirm transactions, thus mitigating the performance and efficiency degradation in the blockchain-based IoT networks. In this paper, we implemented an Interplanetary File System (IPFS) for distributed storage, and Docker to evaluate the network performance in terms of throughput, latency, and resource utilization. We divided our analysis into four parts: Latency, throughput, resource utilization, and file upload time and speed in distributed storage evaluation. Our empirical findings demonstrate that our framework exhibits low latency, measuring less than 0.976 ms. The proposed technique outperforms Proof of Stake (PoS), representing a state-of-the-art consensus technique. We also demonstrate that the proposed approach is useful in IoT applications where low latency or resource efficiency is required. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
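The delegate-election plus round-robin validation core of DPoS can be sketched in a few lines. Real deployments add vote weighting, re-election epochs, and slashing (and the paper's IPFS storage layer is out of scope here); the account names and vote counts below are made up:

```python
def elect_delegates(votes, n_delegates):
    """Rank accounts by staked votes (name breaks ties) and seat the
    top n as the validating delegates for the next round."""
    ranked = sorted(votes.items(), key=lambda kv: (-kv[1], kv[0]))
    return [name for name, _ in ranked[:n_delegates]]

def producer_for_slot(delegates, slot):
    """Block slots are validated round-robin: slot k belongs to
    delegate k mod n, so no proof-of-work race is needed."""
    return delegates[slot % len(delegates)]

votes = {"alice": 120, "bob": 300, "carol": 50, "dave": 300, "erin": 10}
delegates = elect_delegates(votes, 3)
```

Confining validation to a small elected set is what makes DPoS lightweight: confirmation cost grows with the delegate count, not with the total number of IoT devices on the network.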
29. Image convolution techniques integrated with YOLOv3 algorithm in motion object data filtering and detection.
- Author
-
Cheng, Mai and Liu, Mengyuan
- Subjects
TRACKING algorithms ,FILTERS & filtration ,VIDEO surveillance ,ALGORITHMS ,IMAGE segmentation ,RESEARCH personnel ,JOGGING - Abstract
In order to address the challenges of identifying, detecting, and tracking moving objects in video surveillance, this paper emphasizes image-based dynamic entity detection. It delves into the complexities of numerous moving objects, dense targets, and intricate backgrounds. Leveraging the You Only Look Once (YOLOv3) algorithm framework, this paper proposes improvements in image segmentation and data filtering to address these challenges. These enhancements form a novel multi-object detection algorithm based on an improved YOLOv3 framework, specifically designed for video applications. Experimental validation demonstrates the feasibility of this algorithm, with success rates exceeding 60% for videos such as "jogging", "subway", "video 1", and "video 2". Notably, the detection success rates for "jogging" and "video 1" consistently surpass 80%, indicating outstanding detection performance. Although the accuracy slightly decreases for "Bolt" and "Walking2", success rates still hover around 70%. Comparative analysis with other algorithms reveals that this method's tracking accuracy surpasses that of particle filters, Discriminative Scale Space Tracker (DSST), and Scale Adaptive Multiple Features (SAMF) algorithms, with an accuracy of 0.822. This indicates superior overall performance in target tracking. Therefore, the improved YOLOv3-based multi-object detection and tracking algorithm demonstrates robust filtering and detection capabilities in noise-resistant experiments, making it highly suitable for various detection tasks in practical applications. It can address inherent limitations such as missed detections, false positives, and imprecise localization. These improvements significantly enhance the efficiency and accuracy of target detection, providing valuable insights for researchers in the field of object detection, tracking, and recognition in video surveillance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. A flocking control algorithm of multi-agent systems based on cohesion of the potential function.
- Author
-
Li, Chenyang, Yang, Yonghui, Jiang, Guanjie, and Chen, Xue-Bo
- Subjects
COHESION ,POTENTIAL functions ,MULTIAGENT systems ,SOCIAL distance ,SOCIAL cohesion ,ALGORITHMS ,CHANGE agents - Abstract
Flocking cohesion is critical for maintaining a group's aggregation and integrity. Designing a potential function that keeps flocking cohesion unaffected by social distance is challenging, because the uncertainty of real-world conditions and environments causes changes in the agents' social distance. Previous flocking research based on potential functions has primarily focused on a uniform social distance among agents and on the attraction–repulsion of the potential function, ignoring another property affecting flocking cohesion, the well depth, as well as the effect of changes in the agents' social distance on the well depth. This paper investigates the effect of potential function well depths and agents' social distances on multi-agent flocking cohesion. Through the analysis, proofs, and classification of these potential functions, we find that the potential function well depth is proportional to the flocking cohesion. Moreover, we observe that the potential function well depth varies as the agents' social distance changes. Therefore, we design a segmented potential function and combine it with the flocking control algorithm in this paper. It enhances flocking cohesion significantly and has good robustness, ensuring that the flocking cohesion is unaffected by variations in the agents' social distance. Meanwhile, it reduces the time required for flocking formation. Subsequently, the Lyapunov theorem and the LaSalle invariance principle prove the stability and convergence of the proposed control algorithm. Finally, this paper simulates the encounter of two subgroups with different potential function well depths and social distances for verification. The corresponding simulation results demonstrate the effectiveness of the flocking control algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
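The notion of well depth can be made concrete with a standard Lennard-Jones-style pair potential, which exposes the depth as an explicit parameter. This is a generic textbook form standing in for the paper's segmented potential, which additionally adapts the depth to the agents' social distance:

```python
def pair_potential(r, depth, r_eq):
    """Lennard-Jones-style pairwise potential: repulsive for r < r_eq,
    attractive for r > r_eq, with its minimum value exactly -depth at
    the social distance r_eq. A deeper well binds neighbouring agents
    more strongly, i.e., yields stronger flocking cohesion."""
    s = r_eq / r
    return depth * (s ** 12 - 2.0 * s ** 6)

# The well bottom sits at (r_eq, -depth): doubling `depth` doubles the
# energy needed to pull two agents apart without moving r_eq.
v_min = pair_potential(1.0, 4.0, 1.0)
```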
31. Performance analysis of deep learning-based object detection algorithms on COCO benchmark: a comparative study.
- Author
-
Tian, Jiya, Jin, Qiangshan, Wang, Yizong, Yang, Jie, Zhang, Shuping, and Sun, Dengxun
- Subjects
OBJECT recognition (Computer vision) ,DEEP learning ,MACHINE learning ,ALGORITHMS ,SMART cities ,URBAN renewal - Abstract
This paper thoroughly explores the role of object detection in smart cities, specifically focusing on advancements in deep learning-based methods. Deep learning models gain popularity for their autonomous feature learning, surpassing traditional approaches. Despite progress, challenges remain, such as achieving high accuracy in urban scenes and meeting real-time requirements. The study aims to contribute by analyzing state-of-the-art deep learning algorithms, identifying accurate models for smart cities, and evaluating real-time performance using the Average Precision at Medium Intersection over Union (IoU) metric. The reported results showcase various algorithms' performance, with Dynamic Head (DyHead) emerging as the top scorer, excelling in accurately localizing and classifying objects. Its high precision and recall at medium IoU thresholds signify robustness. The paper suggests considering the mean Average Precision (mAP) metric for a comprehensive evaluation across IoU thresholds, if available. Despite this, DyHead stands out as the superior algorithm, particularly at medium IoU thresholds, making it suitable for precise object detection in smart city applications. The performance analysis using Average Precision at Medium IoU is reinforced by the Average Precision at Low IoU (APL), consistently depicting DyHead's superiority. These findings provide valuable insights for researchers and practitioners, guiding them toward employing DyHead for tasks prioritizing accurate object localization and classification in smart cities. Overall, the paper navigates through the complexities of object detection in urban environments, presenting DyHead as a leading solution with robust performance metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. An Improved Evolutionary Multi-Objective Clustering Algorithm Based on Autoencoder.
- Author
-
Qiu, Mingxin, Zhang, Yingyao, Lei, Shuai, and Gu, Miaosong
- Subjects
ALGORITHMS ,EVOLUTIONARY algorithms ,DEEP learning - Abstract
Evolutionary multi-objective clustering (EMOC) algorithms have gained popularity recently, as they can obtain a set of clustering solutions in a single run by optimizing multiple objectives. In particular, in one type of EMOC algorithm, the number of clusters k is taken as one of the multiple objectives to obtain a set of clustering solutions with different k. However, the number of clusters k and the other objectives are not always in conflict, so it is impossible to obtain clustering solutions with all different k in a single run. Therefore, evolutionary multi-objective k-clustering (EMO-KC) has recently been proposed to ensure this conflict. However, EMO-KC could not obtain good clustering accuracy on high-dimensional datasets. Moreover, EMO-KC's validity is not ensured, as one of its objectives (SSDexp, which is transformed from the sum of squared distances (SSD)) could not be effectively optimized, and it could not avoid invalid solutions in its initialization. In this paper, an improved evolutionary multi-objective clustering algorithm based on an autoencoder (AE-IEMOKC) is proposed to improve the accuracy and ensure the validity of EMO-KC. The proposed AE-IEMOKC is established by combining an autoencoder with an improved version of EMO-KC (IEMO-KC) for better accuracy, where IEMO-KC improves on EMO-KC by proposing a scaling factor to help effectively optimize the objective of SSDexp and by introducing a valid initialization to avoid invalid solutions. Experimental results on several datasets demonstrate the accuracy and validity of AE-IEMOKC. The results of this paper may provide some useful information for other EMOC algorithms to improve accuracy and convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Study on Relay Contact Bounce Based on the Adaptive Weight Rotation Template Matching Algorithm.
- Author
-
Zhao, Wenze, Yan, Jiaxing, Wang, Xin, Li, Wenhua, Yang, Xinglin, and Wang, Weiming
- Subjects
KINETIC energy ,ROTATIONAL motion ,CONTACT angle ,ALGORITHMS ,IMAGE processing ,ANGLES - Abstract
In order to analyze the relay action process from an imaging perspective and further investigate the bounce phenomenon of relay contacts during the contact process, this paper utilizes a high-speed shooting platform to capture images of relay action. In light of the situation where the stationary contact in the image is inclined and continuously changing, a rotation template matching algorithm based on adaptive weight is proposed. The algorithm identifies and obtains the inclination angle of the stationary contact, enabling the study of the relay contact bounce process. By extracting contact bounce distance data from the images, a bounce process curve is plotted. Combined with the analysis of the contact bounce process, the reasons for the bounce are explored. The results indicate that the proposed rotation template matching algorithm can accurately identify stationary contacts and their angles at different angles. By analyzing the contact status and bounce process of the relay contacts in conjunction with the relay structure, parameters such as the bounce time, bounce height, and time required to reach the maximum distance can be calculated. Additionally, the main reason for contact bounce in the relay studied in this paper is the limitation imposed on the continued movement of the stationary contact by the presence of the relay brackets when the kinetic energy of the contact is too high. This phenomenon occurs during the first vibration peak in the vibration process after the moving contact contacts the stationary contact. The research results provide a reference for further studying the relay contact bounce process, optimizing relay structure, and suppressing contact bounce. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Algorithms for Liver Segmentation in Computed Tomography Scans: A Historical Perspective.
- Author
-
Niño, Stephanie Batista, Bernardino, Jorge, and Domingues, Inês
- Subjects
COMPUTED tomography ,IMAGE processing ,COMPUTER-assisted image analysis (Medicine) ,ARTIFICIAL intelligence ,ALGORITHMS ,IMAGE reconstruction algorithms - Abstract
Oncology has emerged as a crucial field of study in the domain of medicine. Computed tomography has gained widespread adoption as a radiological modality for the identification and characterisation of pathologies, particularly in oncology, enabling precise identification of affected organs and tissues. However, achieving accurate liver segmentation in computed tomography scans remains a challenge due to the presence of artefacts and the varying densities of soft tissues and adjacent organs. This paper compares artificial intelligence algorithms and traditional medical image processing techniques to assist radiologists in liver segmentation in computed tomography scans and evaluates their accuracy and efficiency. Despite notable progress in the field, the limited availability of public datasets remains a significant barrier to broad participation in research studies and replication of methodologies. Future directions should focus on increasing the accessibility of public datasets, establishing standardised evaluation metrics, and advancing the development of three-dimensional segmentation techniques. In addition, maintaining a collaborative relationship between technological advances and medical expertise is essential to ensure that these innovations not only achieve technical accuracy, but also remain aligned with clinical needs and realities. This synergy ensures their applicability and effectiveness in real-world healthcare environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Dynamic phasor measurement algorithm based on high-precision time synchronization.
- Author
-
Jie Zhang, Fuxin Li, Zhengwei Chang, Chunhua Hu, Chun Liu, and Sihao Tang
- Subjects
PHASOR measurement ,COVARIANCE matrices ,ELECTRIC power ,ELECTRIC power distribution grids ,SYNCHRONIZATION ,ALGORITHMS ,KALMAN filtering - Abstract
Ensuring the swift and precise tracking of power system signal parameters, especially the frequency, is imperative for the secure and stable operation of power grids. In instances of faults within the distribution network, abrupt changes in frequency may occur, presenting a challenge for existing algorithms that struggle to effectively track such signal variations. Addressing the need for enhanced performance in the face of frequency mutations, this paper introduces an innovative approach: the Covariance Reconstruction Extended Kalman Filter (CREKF) algorithm. Initially, the dynamic signal model of electric power is meticulously analyzed, establishing a dynamic signal relationship based on high-precision time source sampling tailored to the signal model's characteristics. Subsequently, the filter gain, covariance matrix, and variance iteration equation are determined based on the signal relationship among three sampling points. In a final step, recognizing the impact of the covariance matrix on algorithmic tracking ability, the paper proposes a covariance matrix reset mechanism utilizing hysteresis induced by output errors. Through extensive verification with simulated signals, the results conclusively demonstrate that the CREKF algorithm exhibits superior measurement accuracy and accelerated tracking speed when confronted with mutating signals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
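The covariance-reset mechanism can be sketched with a scalar Kalman filter on a random-walk signal model: when the innovation falls far outside its predicted spread (a frequency mutation), the covariance is inflated so the filter snaps to the new level instead of lagging. This is a heavily simplified stand-in for CREKF; the model, gains, and thresholds are all illustrative:

```python
def kalman_track(measurements, q=0.01, r=0.5, reset_gate=3.0, p_reset=100.0):
    """Scalar Kalman filter with a covariance-reset heuristic: if the
    squared innovation exceeds reset_gate^2 times its predicted
    variance, the state covariance is reset to a large value so the
    gain jumps toward 1 and the filter re-converges in one step."""
    x, p = measurements[0], 1.0
    out = []
    for z in measurements:
        p = p + q                      # predict under a random-walk model
        innov = z - x
        s = p + r                      # predicted innovation variance
        if innov * innov > reset_gate ** 2 * s:
            p = p_reset                # mutation detected: reset covariance
            s = p + r
        k = p / s                      # Kalman gain
        x = x + k * innov
        p = (1.0 - k) * p
        out.append(x)
    return out

# A frequency-like signal with an abrupt jump at sample 50: without the
# reset, a converged (small-covariance) filter would track the step slowly.
zs = [50.0] * 50 + [60.0] * 50
est = kalman_track(zs)
```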
36. Research on WSN reliable ranging and positioning algorithm for forest environment.
- Author
-
Wu, Peng, Yu, Le, Yi, Xiaomei, Xu, Liang, Liu, LiJuan, Yi, YuTong, Jiang, Tengteng, and Tao, Chunling
- Subjects
WIRELESS sensor networks ,ALGORITHMS - Abstract
Wireless sensor network (WSN) localization is a significant research area. In complex environments like forests, inaccurate signal-strength ranging is a major challenge. To address this issue, this paper presents a reliable WSN ranging and positioning algorithm for forest environments. The algorithm divides the positioning area into several sub-regions based on the discrete coefficient of the collected signal strength. Then, by fitting the signal intensity values of each sub-region, the algorithm derives the reference-point parameters and path loss exponent of the logarithmic distance path loss model. Finally, the algorithm locates target nodes using anchor nodes in different regions. Additionally, to enhance positioning accuracy, weight values are assigned to the positioning results based on the discrete coefficient of the signal intensity in each sub-region. Experimental results demonstrate that the proposed WSN algorithm achieves high precision in forest environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. A Segmented Hybrid Algorithm for Beam Shaping Combining Iterative and Simulated Annealing Approaches.
- Author
-
Zhang, Xiaoyu, Zhang, Qi, and Chen, Genxiang
- Subjects
SIMULATED annealing ,STANDARD deviations ,ALGORITHMS ,OPTICAL communications ,LASER beams - Abstract
In recent years, laser technology has made significant advancements, yet there are specific requirements for the energy concentration and uniformity of lasers in various fields, such as optical communication, laser processing, 3D printing, etc. Beam shaping technology enables the transformation of ordinary Gaussian-distributed laser beams into square or circular flat-top uniform beams. Currently, LCOS-based beam shaping algorithms do not adequately meet these requirements, and most of these algorithms do not simultaneously consider the impact of phase quantization and zero-padding, leading to a decrease in the practicality of phase holograms. To address these issues, this paper proposes a novel segmented beam shaping algorithm that combines iterative and simulated annealing approaches. This paper validated the reliability of the proposed algorithm through numerical simulations. Compared to other algorithms, the proposed algorithm can effectively reduce the root mean square error by an average of nearly 37% and decrease the uniformity error by almost 39% without a significant decrease in diffraction efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Artificial Intelligence Algorithms for Healthcare.
- Author
-
Chumachenko, Dmytro and Yakovlev, Sergiy
- Subjects
ARTIFICIAL intelligence ,DEEP learning ,ALGORITHMS ,MACHINE learning ,INFORMATION technology ,MEDICAL care ,MOTION capture (Human mechanics) ,MEDICAL technology - Abstract
Artificial intelligence (AI) algorithms are playing a crucial role in transforming healthcare by enhancing the quality, accessibility, and efficiency of medical care, research, and operations. These algorithms enable healthcare providers to offer more accurate diagnoses, predict outcomes, and customize treatments to individual patient needs. AI also improves operational efficiency by automating routine tasks and optimizing resource management. However, there are challenges to adopting AI in healthcare, such as data privacy concerns and potential biases in algorithms. Collaboration among stakeholders is necessary to ensure ethical use of AI and its positive impact on the field. AI also has applications in medical research, preventive medicine, and public health. It is important to recognize that AI should augment, not replace, the expertise and compassionate care provided by healthcare professionals. The ethical implications and societal impact of AI in healthcare must be carefully considered, guided by fairness, transparency, and accountability principles. Several research papers in this special issue explore the application of AI algorithms in various aspects of healthcare, such as gait analysis for Parkinson's disease diagnosis, human activity recognition, heart disease prediction, compliance assessment with clinical protocols, epidemic management, neurological complications identification, fall prevention, leukemia diagnosis, and genetic clinical pathways. These studies demonstrate the potential of AI in improving medical diagnostics, patient monitoring, and personalized care. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
39. Multicore Parallelized Spatial Overlay Analysis Algorithm Using Vector Polygon Shape Complexity Index Optimization.
- Author
-
Fan, Junfu, Zuo, Jiwei, Sun, Guangwei, Shi, Zongwen, Gao, Yu, and Zhang, Yi
- Subjects
PARALLEL algorithms ,DIFFERENCE operators ,POLYGONS ,ALGORITHMS ,PARALLEL programming ,VECTOR data - Abstract
As core algorithms of geographic computing, overlay analysis algorithms typically have computation-intensive and data-intensive characteristics. It is highly important to optimize overlay analysis algorithms by parallelizing the vector polygons after reasonable data division. To address the problem of unbalanced data partitioning in the task decomposition process for parallel polygon overlay analysis and calculation, this paper presents a data partitioning method based on shape complexity index optimization, which achieves data equalization among multicore parallel computing tasks. Taking the intersection operator and difference operator of the Vatti algorithm as examples, six polygon shape indexes are selected to construct the shape complexity model, and the vector data are divided in accordance with the calculated shape complexity results. Finally, multicore parallelism is achieved based on OpenMP. The experimental results show that when a data set with a large amount of data is used, the effect of the multicore parallel execution of the Vatti algorithm's intersection operator and difference operator based on shape complexity division is clearly improved. With 16 threads, compared with the serial algorithm, speedups of 29 times and 32 times can be obtained. Compared with the traditional multicore parallel algorithm based on polygon number division, the speed can be improved by 33% and 29%, and the load balancing index is reduced. For a data set with a small amount of data, the acceleration effect of this method is similar to that of traditional methods involving multicore parallelism. [ABSTRACT FROM AUTHOR]
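The load-balancing idea in this record, dividing polygons among threads by a complexity score rather than by count, can be sketched with a standard greedy longest-processing-time heuristic. This is a generic illustration in Python rather than the paper's OpenMP/C++ implementation, and the scalar `complexity` stands in for the paper's six-index shape complexity model:

```python
import heapq

def partition_by_complexity(polygons, n_workers):
    """Greedily assign (polygon_id, complexity) pairs to the currently
    lightest worker, largest first (LPT heuristic), so that per-thread
    workloads are balanced by complexity rather than by polygon count."""
    heap = [(0.0, w, []) for w in range(n_workers)]  # (load, worker, items)
    heapq.heapify(heap)
    for pid, c in sorted(polygons, key=lambda p: -p[1]):
        load, w, items = heapq.heappop(heap)
        items.append(pid)
        heapq.heappush(heap, (load + c, w, items))
    return [items for _, _, items in sorted(heap, key=lambda t: t[1])]
```

Partitioning by count would give one thread many simple polygons and another a few pathological ones; weighting by complexity is what removes that imbalance.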
- Published
- 2024
- Full Text
- View/download PDF
40. Weather Radar High-Resolution Spectral Moment Estimation Using Bidirectional Extreme Learning Machine.
- Author
-
Zhongyuan Wang, Ling Qiao, Yu Jiang, Mingwei Shen, and Guodong Han
- Subjects
MACHINE learning ,POWER spectra ,RADAR meteorology ,PROBLEM solving ,ALGORITHMS - Abstract
Since the performance of the spectral moment estimation algorithms commonly used in engineering degrades under low-SNR conditions, this paper introduces the Extreme Learning Machine (ELM) to the spectral moment estimation of weather signals, based on the correlation of the signals of adjacent range cells. To solve the problem that the number of hidden layer nodes in the ELM algorithm is difficult to determine, the Bidirectional Extreme Learning Machine (B-ELM) algorithm is applied to achieve high-resolution spectral moment estimation. First, to improve the SNR of the training samples, time-domain pulse signals are converted into weather power spectra by the Welch method. Then, the parameters of the B-ELM hidden layer nodes are calculated directly by backpropagating the network residuals. The model parameters are optimized according to the least-squares solution, and the optimal number of hidden layer nodes is determined adaptively. Finally, the optimized B-ELM model is employed for the spectral moment estimation of weather signals. The algorithm is validated to be fast and accurate for spectral moment estimation using measured IDRA weather radar data and is easy to implement in engineering. [ABSTRACT FROM AUTHOR]
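For context, the three spectral moments being estimated here are the standard ones for weather radar: total power, mean Doppler velocity, and spectrum width. A minimal sketch of computing them directly from a discrete power spectrum (the conventional baseline the B-ELM approach improves on at low SNR; the variable names are illustrative):

```python
import math

def spectral_moments(psd, velocities):
    """Zeroth, first, and second spectral moments of a discrete
    Doppler power spectrum: total power, mean Doppler velocity,
    and spectrum width (standard deviation)."""
    p0 = sum(psd)                                              # total power
    v_mean = sum(v * s for v, s in zip(velocities, psd)) / p0  # mean velocity
    var = sum((v - v_mean) ** 2 * s
              for v, s in zip(velocities, psd)) / p0
    return p0, v_mean, math.sqrt(var)                          # width
```

At low SNR the noise floor biases these direct sums, which motivates learning-based estimators such as the B-ELM used in the paper.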
- Published
- 2024
- Full Text
- View/download PDF
41. Research on Microgrid Optimal Dispatching Based on a Multi-Strategy Optimization of Slime Mould Algorithm.
- Author
-
Zhang, Yi and Zhou, Yangkun
- Subjects
MICROGRIDS ,ELECTRIC power distribution grids ,SWARM intelligence ,ENERGY consumption ,WIND power ,ALGORITHMS - Abstract
In order to cope with the problems of energy shortage and environmental pollution, carbon emissions need to be reduced, and so the structure of the power grid is constantly being optimized. Traditional centralized power networks are not as capable of controlling and distributing non-renewable energy as distributed power grids. Therefore, the optimal dispatch of microgrids faces increasing challenges. This paper proposes a multi-strategy fusion slime mould algorithm (MFSMA) to tackle the microgrid optimal dispatching problem. Traditional swarm intelligence algorithms suffer from slow convergence, low efficiency, and the risk of falling into local optima. To overcome these challenges, the MFSMA employs reverse learning to enlarge the search space and avoid local optima. Furthermore, adaptive parameters ensure a thorough search during the algorithm's iterations: the focus is on exploring the solution space in the early stages, while convergence is accelerated in the later stages to ensure efficiency and accuracy. The salp swarm algorithm's search mode is also incorporated to expedite convergence. MFSMA was compared with other algorithms on benchmark functions, and the tests showed that it performs better. Simulation results demonstrate the superior performance of the MFSMA for function optimization, particularly in solving the 24 h microgrid optimal scheduling problem, which considers multiple energy sources such as wind turbines, photovoltaics, and energy storage. A microgrid model based on the MFSMA is established in this paper. Simulation of the proposed algorithm reveals its ability to enhance energy utilization efficiency, reduce total network costs, and minimize environmental pollution. The contributions of this paper are as follows: (1) A comprehensive microgrid dispatch model is proposed. (2) Environmental costs and operation and maintenance costs are taken into consideration. (3) Two modes, grid-tied operation and island operation, are considered. (4) A multi-strategy optimized slime mould algorithm is used to optimize scheduling, with excellent results. [ABSTRACT FROM AUTHOR]
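The "reverse learning" strategy mentioned in this record is commonly known as opposition-based learning: each candidate's mirror image within the bounds is evaluated, and the fitter of the pair is kept, widening the effective search. A minimal sketch for a minimization problem (the function name and interface are illustrative, not the paper's):

```python
def opposition_population(pop, lb, ub, fitness):
    """Opposition-based (reverse) learning: for each candidate x,
    form its opposite x' = lb + ub - x within the bounds and keep
    whichever of the pair has the lower fitness (minimization)."""
    out = []
    for x in pop:
        opp = [l + u - xi for xi, l, u in zip(x, lb, ub)]
        out.append(min(x, opp, key=fitness))
    return out
```

If the initial population happens to cluster far from the optimum, the opposite points land on the other side of the search space for free, which is why this cheap trick helps swarm algorithms escape poor starts.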
- Published
- 2024
- Full Text
- View/download PDF
42. Survey on Machine Learning Biases and Mitigation Techniques.
- Author
-
Siddique, Sunzida, Haque, Mohd Ariful, George, Roy, Gupta, Kishor Datta, Gupta, Debashis, and Faruk, Md Jobair Hossain
- Subjects
MACHINE learning ,ALGORITHMS ,POLICY sciences ,BIAS (Law) ,MACHINE theory - Abstract
Machine learning (ML) has become increasingly prevalent in various domains. However, ML algorithms sometimes produce unfair outcomes and discriminate against certain groups: bias occurs when a model's results are systematically incorrect for some of its subjects. These biases appear at various phases of the ML pipeline, such as data collection, pre-processing, model selection, and evaluation. A variety of bias reduction methods for ML have been suggested. By changing the data or the model itself, adding fairness constraints, or both, these methods try to lessen bias. The best technique depends on the particular context and application, because each technique has advantages and disadvantages. Therefore, in this paper we present a comprehensive survey of bias mitigation techniques in machine learning (ML) with an in-depth exploration of methods, including adversarial training. We examine the diverse types of bias that can afflict ML systems, elucidate current research trends, and address future challenges. Our discussion encompasses a detailed analysis of pre-processing, in-processing, and post-processing methods, including their respective pros and cons. Moreover, we go beyond qualitative assessments by quantifying the strategies for bias reduction and providing empirical evidence and performance metrics. This paper serves as an invaluable resource for researchers, practitioners, and policymakers seeking to navigate the intricate landscape of bias in ML, offering both a profound understanding of the issue and actionable insights for responsible and effective bias mitigation. [ABSTRACT FROM AUTHOR]
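One concrete example of the pre-processing ("changing the data") family surveyed here is reweighing, which assigns each training sample a weight so that the protected group and the label become statistically independent in the weighted data. A minimal sketch of the standard Kamiran-Calders weights (this specific method is my choice of illustration, not necessarily one the survey singles out):

```python
from collections import Counter

def reweigh(groups, labels):
    """Pre-processing reweighing: weight each sample by
    P(group) * P(label) / P(group, label), so that in the weighted
    dataset the group attribute is independent of the label."""
    n = len(labels)
    pg = Counter(groups)                 # marginal counts per group
    pl = Counter(labels)                 # marginal counts per label
    pgl = Counter(zip(groups, labels))   # joint counts
    return [pg[g] * pl[l] / (n * pgl[(g, l)])
            for g, l in zip(groups, labels)]
```

Over-represented (group, label) combinations get weights below 1 and under-represented ones above 1, which a weight-aware learner then uses during training.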
- Published
- 2024
- Full Text
- View/download PDF
43. Research on 3D Face Reconstruction Algorithm Based on ResNet and Transformer.
- Author
-
Yaermaimaiti, Yilihamu, Yan, Tianxing, Zhao, Yuhang, and Kari, Tusongjiang
- Subjects
TRANSFORMER models ,ALGORITHMS ,FEATURE extraction - Abstract
In view of the high production cost, scarcity, and lack of diversity of 3D face datasets, this paper designs an end-to-end self-supervised 3D face reconstruction algorithm that takes a single 2D face image as input and uses only 2D face datasets for model training. First, an improved ResNet module is introduced to preprocess the input face image. The deep residual neural network has strong feature extraction and representation ability and provides rich high-level semantic feature maps for the subsequent subnetwork. Then, a Transformer module based entirely on the self-attention mechanism is added to the parameter prediction subnetwork, so that different parameters of the subnetwork focus on the feature-map information relevant to them and avoid interference from irrelevant feature maps, further improving the subnetwork's parameter prediction accuracy. Next, training, ablation, and comparison experiments were conducted on the CelebA, BFM, and Photoface datasets, with a combination of a pixel loss function and a perceptual loss function selected as the loss function. The experimental results show that, compared with the best previous results for the same network structure, the scale-invariant depth error (SIDE) and mean angle deviation (MAD) are improved by 5.9% and 10.8%, respectively, which strongly demonstrates the effectiveness of the algorithm. Finally, to verify the practical effect of the 3D face reconstruction algorithm, example reconstructions are presented. The 3D faces generated by the algorithm are all realistic, intuitively and effectively demonstrating the algorithm's advances. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Autonomous localized path planning algorithm for UAVs based on TD3 strategy.
- Author
-
Feiyu, Zhao, Dayan, Li, Zhengxu, Wang, Jianlin, Mao, and Niya, Wang
- Subjects
DRONE aircraft ,ALGORITHMS ,PROBLEM solving - Abstract
Unmanned Aerial Vehicles (UAVs) are useful tools for many applications. However, autonomous path planning for UAVs in unfamiliar environments is challenging due to problems such as poor consistency and strong influence from the UAV's native controller. In this paper, we investigate reinforcement learning-based autonomous local path planning methods for UAVs with high autonomous decision-making capability and high local portability. We propose an autonomous local path planning algorithm based on the TD3 strategy to solve local obstacle avoidance and path planning in unfamiliar environments using the UAV's autonomous decision-making. Simulation results on Gazebo show that our method can effectively realize the autonomous local path planning task: the success rate of path planning reaches 93% in obstacle-free environments and 92% in environments with obstacles. Our method can therefore be used for autonomous path planning of UAVs in unfamiliar environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Mobile Learning Tools to Support in Teaching Programming Logic and Design: A Systematic Literature Review.
- Author
-
COELHO, Regina Célia, MARQUES, Matheus F. P., and de OLIVEIRA, Tiago
- Subjects
MOBILE learning ,LOGIC programming ,LOGIC design ,LEARNING ,PROGRAMMING languages ,MOBILE apps - Abstract
Learning programming logic remains an obstacle for students from different academic fields. Considered one of the essential disciplines in the field of Science and Technology, it is vital to investigate the new tools or techniques used in the teaching and learning of Programming Language. This work presents a systematic literature review (SLR) on approaches using Mobile Learning methodology and the process of learning programming in introductory courses, including mobile applications and their evaluation and validation. We consulted three digital libraries, considering articles published from 2011 to 2022 related to Mobile Learning and Programming Learning. As a result, we found twelve mobile tools for learning or teaching programming logic. Most are free and used in universities. In addition, these tools positively affect the learning process, engagement, motivation, and retention, providing a better understanding, and improving content transmission. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
46. Situational continuity-based air combat autonomous maneuvering decision-making.
- Author
-
Jian-dong Zhang, Yi-fei Yu, Li-hui Zheng, Qi-ming Yang, Guo-qing Shi, and Yong Wu
- Subjects
DRONE warfare ,DECISION making ,ALGORITHMS ,METHODOLOGY ,DUCTILITY - Abstract
In order to improve the performance of UAVs' autonomous maneuvering decision-making, this paper proposes a decision-making method based on situational continuity. The algorithm designs a strongly guiding situation evaluation function and then trains a Long Short-Term Memory (LSTM) network within the Deep Q Network (DQN) framework for air combat maneuvering decision-making. Considering the continuity between adjacent situations, the method takes multiple consecutive situations as a single input to the neural network. To reflect the difference between adjacent situations, the method takes the difference of situation evaluation values as the reinforcement learning reward. In different scenarios, the proposed algorithm is compared with an algorithm based on a Fully connected Neural Network (FNN) and an algorithm based on statistical principles. The results show that, compared with the FNN algorithm, the proposed algorithm is more accurate and forward-looking; compared with the algorithm based on statistical principles, its decision-making is more efficient and its real-time performance is better. [ABSTRACT FROM AUTHOR]
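The two mechanisms this record describes, a reward equal to the change in situation evaluation between steps and a network input built from several consecutive situations, are simple to sketch in isolation. The class and function names below are illustrative, not from the paper:

```python
from collections import deque

def make_reward(evaluate):
    """Reward = difference of situation-evaluation values between
    adjacent steps, so the agent is rewarded for improving (not
    merely occupying) a good tactical situation."""
    def reward(prev_state, state):
        return evaluate(state) - evaluate(prev_state)
    return reward

class SituationStack:
    """Keep the last k situation vectors and concatenate them into a
    single network input, capturing continuity between adjacent
    situations (zero-padded until k situations have been seen)."""
    def __init__(self, k, dim):
        self.buf = deque([[0.0] * dim] * k, maxlen=k)

    def push(self, situation):
        self.buf.append(list(situation))
        return [x for s in self.buf for x in s]
```

Using the evaluation *difference* as the reward is a form of reward shaping: it densifies the learning signal without changing which policies are optimal under the underlying evaluation.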
- Published
- 2023
- Full Text
- View/download PDF
47. A Machine Learning Model to Predict Citation Counts of Scientific Papers in Otology Field.
- Author
-
Alohali, Yousef A., Fayed, Mahmoud S., Mesallam, Tamer, Abdelsamad, Yassin, Almuhawas, Fida, and Hagr, Abdulrahman
- Subjects
DECISION trees ,SERIAL publications ,NATURAL language processing ,BIBLIOMETRICS ,MACHINE learning ,REGRESSION analysis ,RANDOM forest algorithms ,CITATION analysis ,DESCRIPTIVE statistics ,PREDICTION models ,ARTIFICIAL neural networks ,MEDICAL research ,MEDICAL specialties & specialists ,ALGORITHMS - Abstract
One of the most widely used measures of scientific impact is the number of citations. However, due to their heavy-tailed distribution, citation counts are fundamentally difficult to predict, although predictions can be improved. This study investigated the factors influencing the citation count of a scientific paper in the otology field. It proposes a new solution that utilizes machine learning and natural language processing to analyze English text and output a predicted citation count. Different algorithms are implemented in this solution, such as linear regression, boosted decision trees, decision forests, and neural networks. The application of neural network regression revealed that papers' abstracts have the greatest influence on the citation counts of otological articles. This solution was developed in visual programming using Microsoft Azure Machine Learning at the back end and Programming Without Coding Technology at the front end. We recommend using machine learning models to improve the abstracts of research articles in order to attract more citations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
48. Cost Optimal Production-Scheduling Model Based on VNS-NSGA-II Hybrid Algorithm—Study on Tissue Paper Mill.
- Author
-
Zhang, Huanhuan, Li, Jigeng, Hong, Mengna, Man, Yi, and He, Zhenglei
- Subjects
PAPER mills ,FLOW shop scheduling ,PRODUCTION scheduling ,INDUSTRIAL costs ,ALGORITHMS - Abstract
With the development of the customization concept, small-batch and multi-variety production will become one of the major production modes, especially for fast-moving consumer goods. However, this production mode has two issues: high production cost and the long manufacturing period. To address these issues, this study proposes a multi-objective optimization model for the flexible flow-shop to optimize the production scheduling, which would maximize the production efficiency by minimizing the production cost and makespan. The model is designed based on hybrid algorithms, which combine a fast non-dominated genetic algorithm (NSGA-II) and a variable neighborhood search algorithm (VNS). In this model, NSGA-II is the major algorithm to calculate the optimal solutions. VNS is to improve the quality of the solution obtained by NSGA-II. The model is verified by an example of a real-world typical FFS, a tissue papermaking mill. The results show that the scheduling model can reduce production costs by 4.2% and makespan by 6.8% compared with manual scheduling. The hybrid VNS-NSGA-II model also shows better performance than NSGA-II, both in production cost and makespan. Hybrid algorithms are a good solution for multi-objective optimization issues in flexible flow-shop production scheduling. [ABSTRACT FROM AUTHOR]
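The NSGA-II half of the hybrid described in this record rests on Pareto dominance: a schedule is kept only if no other schedule is at least as good on both objectives (cost and makespan) and strictly better on one. A minimal sketch of extracting the first non-dominated front, the core primitive of NSGA-II's fast non-dominated sorting (the full algorithm also ranks later fronts and computes crowding distances):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def first_front(points):
    """Return the non-dominated set: the first front of NSGA-II's
    non-dominated sorting, e.g. (cost, makespan) pairs."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

The VNS stage then perturbs solutions on this front with changing neighborhood structures, trying to push the whole front closer to the true cost/makespan trade-off curve.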
- Published
- 2022
- Full Text
- View/download PDF
49. Community Discovery Algorithm Based on Multi-Relationship Embedding.
- Author
-
Dongming Chen, Mingshuo Nie, Jie Wang, and Dongqi Wang
- Subjects
EMBEDDED computer systems ,ALGORITHMS ,MATRICES (Mathematics) ,CONVOLUTIONAL neural networks ,MACHINE learning - Abstract
Complex systems in the real world often can be modeled as network structures, and community discovery algorithms for complex networks enable researchers to understand the internal structure and implicit information of networks. Existing community discovery algorithms are usually designed for single-layer networks or single-interaction relationships and do not consider the attribute information of nodes. However, many real-world networks consist of multiple types of nodes and edges, and there may be rich semantic information on nodes and edges. The methods for single-layer networks cannot effectively tackle multi-layer information, multi-relationship information, and attribute information. This paper proposes a community discovery algorithm based on multi-relationship embedding. The proposed algorithm first models the nodes in the network to obtain the embedding matrix for each node relationship type and generates the node embedding matrix for each specific relationship type in the network by node encoder. The node embedding matrix is provided as input for aggregating the node embedding matrix of each specific relationship type using a Graph Convolutional Network (GCN) to obtain the final node embedding matrix. This strategy allows capturing of rich structural and attributes information in multi-relational networks. Experiments were conducted on different datasets with baselines, and the results show that the proposed algorithm obtains significant performance improvement in community discovery, node clustering, and similarity search tasks, and compared to the baseline with the best performance, the proposed algorithm achieves an average improvement of 3.1% on Macro-F1 and 4.7% on Micro-F1, which proves the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
50. Automated analysis of pen-on-paper spirals for tremor detection, quantification, and differentiation.
- Author
-
Rajan, Roopa, Anandapadmanabhan, Reghu, Nageswaran, Sharmila, Radhakrishnan, Vineeth, Saini, Arti, Krishnan, Syam, Gupta, Anu, Vishnu, Venugopalan Y., Pandit, Awadh K., Singh, Rajesh Kumar, Radhakrishnan, Divya M, Singh, Mamta Bhushan, Bhatia, Rohit, Srivastava, Achal, Kishore, Asha, and Padma Srivastava, M. V.
- Subjects
STATISTICS ,RESEARCH ,CONFIDENCE intervals ,ANALYSIS of variance ,TASK performance ,HANDWRITING ,ACCELEROMETERS ,DYSTONIA ,MOVEMENT disorders ,TREMOR ,DRAWING ,DESCRIPTIVE statistics ,PARKINSON'S disease ,SENSITIVITY & specificity (Statistics) ,DATA analysis ,RECEIVER operating characteristic curves ,DATA analysis software ,ALGORITHMS - Abstract
OBJECTIVE: To develop an automated algorithm to detect, quantify, and differentiate between tremors using pen-on-paper spirals. METHODS: Patients with essential tremor (n = 25), dystonic tremor (n = 25), Parkinson’s disease (n = 25), and healthy volunteers (HV, n = 25) drew free-hand spirals. The algorithm derived the mean deviation (MD) and tremor variability from scanned images. MD and tremor variability were compared with 1) the Bain and Findley scale, 2) the Fahn–Tolosa–Marin tremor rating scale (FTM–TRS), and 3) the peak power and total power of the accelerometer spectra. Inter- and intra-loop widths were computed to differentiate between the tremors. RESULTS: MD was higher in the tremor group (48.9±26.3) than in HV (26.4±5.3; p < 0.001). The cut-off value of 30.3 had 80.9% sensitivity and 76.0% specificity for the detection of the tremor [area under the curve: 0.83; 95% confidence interval (CI): 0.75, 0.91, p < 0.001]. MD correlated with the Bain and Findley ratings (rho = 0.491, p < 0.001), FTM–TRS part B (rho = 0.260, p = 0.032), and accelerometric measures of postural tremor (total power, rho = 0.366, p < 0.001; peak power, rho = 0.402, p < 0.001). The minimum detectable change was 19.9%. Inter-loop width distinguished Parkinson’s disease spirals from dystonic tremor (p < 0.001, 95% CI: 54.6, 211.1), essential tremor (p = 0.003, 95% CI: 28.5, 184.9), and HV (p = 0.036, 95% CI: -160.4, -3.9). CONCLUSION: The automated analysis of pen-on-paper spirals generated robust variables to quantify the tremors and putative variables to distinguish them from each other. SIGNIFICANCE: This technique may be useful for epidemiological surveys and follow-up studies on tremor. [ABSTRACT FROM AUTHOR]
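One plausible reading of a spiral "mean deviation" metric is the mean absolute radial departure from an ideal Archimedean spiral fitted to the drawing. The sketch below illustrates that reading only; the paper's exact MD definition and preprocessing (scanning, centering, unwrapping) are not reproduced here, and the function name is hypothetical.

```python
def spiral_mean_deviation(thetas, radii):
    """Fit an ideal Archimedean spiral r = a + b*theta to the drawn
    points by ordinary least squares, then return the mean absolute
    radial deviation of the drawing from that fit."""
    n = len(thetas)
    st, sr = sum(thetas), sum(radii)
    stt = sum(t * t for t in thetas)
    str_ = sum(t * r for t, r in zip(thetas, radii))
    b = (n * str_ - st * sr) / (n * stt - st * st)  # spiral pitch
    a = (sr - b * st) / n                           # spiral offset
    return sum(abs(r - (a + b * t))
               for t, r in zip(thetas, radii)) / n  # mean |residual|
```

A steady hand traces the fitted spiral closely (deviation near zero), while tremor superimposes oscillations on the radius, inflating the mean deviation.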
- Published
- 2023
- Full Text
- View/download PDF