Search Results
151,192 results
2. Socio‐technical issues in the platform‐mediated gig economy: A systematic literature review: An Annual Review of Information Science and Technology (ARIST) paper.
- Author
-
Dedema, Meredith and Rosenbaum, Howard
- Subjects
INFORMATION science, TECHNOLOGY, CORPORATE culture, ALGORITHMS, ECONOMICS
- Abstract
The gig economy and gig work have grown quickly in recent years and have drawn much attention from researchers in different fields. Because the platform mediated gig economy is a relatively new phenomenon, studies have produced a range of interesting findings; of interest here are the socio‐technical issues that this work has surfaced. This systematic literature review (SLR) provides a snapshot of a range of socio‐technical issues raised in the last 12 years of literature focused on the platform mediated gig economy. Based on a sample of 515 papers gathered from nine databases in multiple disciplines, 132 were coded that specifically studied the gig economy, gig work, and gig workers. Three main socio‐technical themes were identified: (1) the digital workplace, which includes information infrastructure and digital labor that are related to the nature of gig work and the user agency; (2) algorithmic management, which includes platform governance, performance management, information asymmetry, power asymmetry, and system manipulation, relying on a diverse set of technological tools including algorithms and big data analytics; (3) ethical design, as a relevant value set that gig workers expect from the platform, which includes trust, fairness, equality, privacy, and transparency. A social informatics perspective is used to rethink the relationship between gig workers and platforms, extract the socio‐technical issues noted in prior research, and discuss the underexplored aspects of the platform mediated gig economy. The results draw attention to understudied yet critically important socio‐technical issues in the gig economy that suggest short‐ and long‐term opportunities for future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Discussion paper: implications for the further development of the AUD2IT algorithm successfully implemented in emergency medicine.
- Author
-
Przestrzelski, Christopher, Jakob, Antonina, Jakob, Clemens, and Hoffmann, Felix R.
- Subjects
DOCUMENTATION, CURRICULUM, HUMAN services programs, EMERGENCY medicine, EXPERIENCE, MEDICAL records, ELECTRONIC publications, ALGORITHMS, PATIENTS' attitudes
- Abstract
The AUD2IT algorithm is a tool for structuring the data collected during emergency treatment. Its goal is twofold: to structure the documentation of that data and to provide a standardised data structure for reporting during the handover of an emergency patient. The AUD2IT algorithm was developed as a documentation aid for residents, helping them structure medical reports without getting lost in unimportant details or forgetting important information. The sequence of anamnesis, clinical examination, differential diagnosis, technical diagnostics, interpretation, and therapy is an academic classification rather than a description of the real workflow; in a real setting, most of these steps take place simultaneously. The application of the AUD2IT algorithm should therefore follow the real processes. A major advantage of the AUD2IT algorithm is that it can serve as a structure for the entire treatment process and also as a handover protocol within that process, ensuring that the current state of knowledge is available at each team time-out. The PR-E-(AUD2IT) algorithm makes it possible to document a treatment process that, in principle, need not be limited to emergency medicine; it could also be used and further developed in outpatient treatment, for example in preparing and allocating the resources needed at a general practitioner's practice. The algorithm is a standardised tool usable by healthcare professionals at any level of training, and it gives the user a sense of security in daily work. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
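The six documentation stages the abstract lists (anamnesis, examination, differential diagnosis, technical diagnostics, interpretation, therapy) map naturally onto a structured record. A minimal sketch; the field names and rendering are my own illustration, not the authors' schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AUD2ITRecord:
    """Structured emergency-treatment documentation following the AUD2IT
    sequence described in the abstract; illustrative field names only."""
    anamnesis: str = ""                                          # A: patient history
    examination: str = ""                                        # U: clinical examination
    differential_diagnosis: list = field(default_factory=list)   # D: differential diagnosis
    diagnostics: list = field(default_factory=list)              # D: technical diagnostics
    interpretation: str = ""                                     # I: interpretation
    therapy: str = ""                                            # T: therapy

    def handover_summary(self):
        """Render the non-empty fields in AUD2IT order for a team time-out."""
        return " | ".join(f"{k}: {v}" for k, v in asdict(self).items() if v)

rec = AUD2ITRecord(anamnesis="chest pain, 2 h", therapy="ASA 250 mg i.v.")
print(rec.handover_summary())
```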
4. Performance Evaluation of the Extractive Methods in Automatic Text Summarization Using Medical Papers.
- Author
-
Kus, Anil and Aci, Cigdem Inan
- Subjects
PERFORMANCE evaluation, TEXT summarization, MEDICAL sciences, ALGORITHMS, SEMANTICS
- Abstract
Copyright of Gazi Journal of Engineering Sciences (GJES) / Gazi Mühendislik Bilimleri Dergisi is the property of Gazi Journal of Engineering Sciences and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
5. Special issue "Discrete optimization: Theory, algorithms and new applications".
- Author
-
Werner, Frank
- Subjects
MATHEMATICAL optimization, METAHEURISTIC algorithms, ONLINE algorithms, LINEAR matrix inequalities, ALGORITHMS, ROBUST stability analysis, NONLINEAR integral equations
- Abstract
This document is an editorial for a special issue of the journal AIMS Mathematics on the topic of discrete optimization. The issue includes 21 papers covering a range of subjects, including molecular trees, network systems, variational inequality problems, scheduling, image restoration, spectral clustering, integral equations, convex functions, graph products, optimization algorithms, air quality prediction, humanitarian planning, inertial methods, neural networks, transportation problems, emotion identification, fixed-point problems, structural engineering design, single machine scheduling, and ensemble learning. The papers present new theoretical results, algorithms, and applications in these areas. The guest editor expresses gratitude to the journal staff and reviewers and hopes that readers will find inspiration for their own research. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
6. Federated learning evolution based on multi-objective optimization (基于多目标优化的联邦学习进化).
- Author
-
胡智勇, 于千城, 王之赐, and 张丽丝
- Subjects
FEDERATED learning, ALGORITHMS, PRIVACY
- Abstract
Copyright of Application Research of Computers / Jisuanji Yingyong Yanjiu is the property of Application Research of Computers Edition and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
7. Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits.
- Author
-
Limaye, Nutan, Srinivasan, Srikanth, and Tavenas, Sébastien
- Subjects
ALGEBRA, POLYNOMIALS, CIRCUIT complexity, ALGORITHMS, DIRECTED acyclic graphs, LOGIC circuits
- Abstract
An Algebraic Circuit for a multivariate polynomial P is a computational model for constructing the polynomial P using only additions and multiplications. It is a syntactic model of computation, as opposed to the Boolean Circuit model, and hence lower bounds for this model are widely expected to be easier to prove than lower bounds for Boolean circuits. Despite this, we do not have superpolynomial lower bounds against general algebraic circuits of depth 3 (except over constant-sized finite fields) and depth 4 (over any field other than F_2), while constant-depth Boolean circuit lower bounds have been known since the early 1980s. In this paper, we prove the first superpolynomial lower bounds against algebraic circuits of all constant depths over all fields of characteristic 0. We also observe that our superpolynomial lower bound for constant-depth circuits implies the first deterministic sub-exponential time algorithm for solving the Polynomial Identity Testing (PIT) problem for all small-depth circuits, using the known connection between algebraic hardness and randomness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. A review paper of optimal resource allocation algorithm in cloud environment.
- Author
-
Patadiya, Namrata and Bhatt, Nirav
- Subjects
RESOURCE allocation, LITERATURE reviews, SERVICE level agreements, ALGORITHMS, ELECTRONIC data processing, CLOUD computing
- Abstract
Cloud computing has become a popular approach for processing data and running computationally expensive services on a pay-as-you-go basis. Due to the ever-increasing demand for cloud-based applications, appropriately allocating resources according to user requests while meeting the service-level agreements between customers and service providers has become increasingly complex. An efficient and versatile resource allocation method is required to deploy these assets properly and meet user needs. Distributing resources has become more arduous as user demand has grown, and how to design optimal solutions for this task is a key question for researchers. This paper presents a literature review of proposed dynamic resource allocation approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
9. The Space Complexity of Consensus from Swap.
- Author
-
Ovens, Sean
- Subjects
ALGORITHMS, GENERALIZATION
- Abstract
Nearly thirty years ago, it was shown that Ω(√n) read/write registers are needed to solve randomized wait-free consensus among n processes. This lower bound was improved to n registers in 2018, which exactly matches known algorithms. The Ω(√n) space complexity lower bound actually applies to a class of objects called historyless objects, which includes registers, test-and-set objects, and readable swap objects. However, every known n-process obstruction-free consensus algorithm from historyless objects uses Ω(n) objects. In this paper, we give the first Ω(n) space complexity lower bounds on consensus algorithms for two kinds of historyless objects. First, we show that any obstruction-free consensus algorithm from swap objects uses at least n − 1 objects. More generally, we prove that any obstruction-free k-set agreement algorithm from swap objects uses at least ⌈n/k⌉ − 1 objects. The k-set agreement problem is a generalization of consensus in which processes agree on no more than k different output values. This is the first non-constant lower bound on the space complexity of solving k-set agreement with swap objects when k > 1. We also present an obstruction-free k-set agreement algorithm from n − k swap objects, which exactly matches our lower bound when k = 1. Second, we show that any obstruction-free binary consensus algorithm from readable swap objects with domain size b uses at least (n − 2)/(3b + 1) objects. When b is a constant, this asymptotically matches the best known obstruction-free consensus algorithms from readable swap objects with unbounded domains. Since any historyless object can be simulated by a readable swap object with the same domain, our results imply that any obstruction-free consensus algorithm from historyless objects with domain size b uses at least (n − 2)/(3b + 1) objects. For b = 2, we show a slightly better lower bound of n − 2. There is an obstruction-free binary consensus algorithm using 2n − 1 readable swap objects with domain size 2, asymptotically matching our lower bound. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
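The space bounds quoted in this abstract are simple closed forms; a small helper to evaluate them (convenience code for the reader, not from the paper):

```python
from math import ceil

def kset_swap_lower_bound(n, k):
    """Abstract's bound: obstruction-free k-set agreement from swap
    objects needs at least ceil(n / k) - 1 objects."""
    return ceil(n / k) - 1

def readable_swap_lower_bound(n, b):
    """Abstract's bound: obstruction-free binary consensus from readable
    swap objects with domain size b needs at least (n - 2) / (3b + 1)."""
    return (n - 2) / (3 * b + 1)

# Consensus is the k = 1 case; the bound becomes n - 1, matching the
# paper's n - k swap-object algorithm at k = 1.
print(kset_swap_lower_bound(16, 1), kset_swap_lower_bound(16, 4))
```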
10. Efficient and Effective Academic Expert Finding on Heterogeneous Graphs through (k, P)-Core based Embedding.
- Author
-
Wang, Yuxiang, Liu, Jun, Xu, Xiaoliang, Ke, Xiangyu, Wu, Tianxing, and Gou, Xiaoxuan
- Subjects
COMMUNITIES, SEMANTICS, ALGORITHMS
- Abstract
Expert finding is crucial for a wealth of applications in both academia and industry. Given a user query and a trove of academic papers, expert finding aims at retrieving the most relevant experts for the query from the academic papers. Existing studies focus on embedding-based solutions that consider academic papers' textual semantic similarities to a query via document representation and extract the top-n experts from the most similar papers. Beyond implicit textual semantics, however, papers' explicit relationships (e.g., co-authorship) in a heterogeneous graph (e.g., DBLP) are critical for expert finding, because they help improve the representation quality. Despite their importance, the explicit relationships of papers have generally been ignored in the literature. In this article, we study expert finding on heterogeneous graphs by considering both the explicit relationships and implicit textual semantics of papers in one model. Specifically, we define the cohesive (k, P)-core community of papers w.r.t. a meta-path P (i.e., relationship) and propose a (k, P)-core based document embedding model to enhance the representation quality. Based on this, we design a proximity graph-based index (PG-Index) of papers and present a threshold algorithm (TA)-based method to efficiently extract top-n experts from papers returned by PG-Index. We further optimize our approach in two ways: (1) we boost effectiveness by considering the (k, P)-core community of experts and the diversity of experts' research interests, to achieve high-quality expert representation from paper representation; and (2) we streamline expert finding, going from "extract top-n experts from top-m (m > n) semantically similar papers" to "directly return top-n experts". This avoids returning a large number of top-m papers as intermediate data, thereby improving efficiency. Extensive experiments using real-world datasets demonstrate our approach's superiority. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
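The "threshold algorithm (TA)" the abstract builds on is Fagin's classic top-n aggregation over score-sorted lists: scan the lists in parallel, look up each newly seen object's full score, and stop once the n-th best aggregate reaches the threshold formed by the last scores seen. A minimal sketch on toy data, without the paper's PG-Index or (k, P)-core machinery; object and attribute names are invented:

```python
import heapq

def threshold_algorithm(scores, n):
    """Fagin's TA: scores is {attr: {obj: score}}; returns the top-n
    objects by summed score without scanning every list to the bottom."""
    lists = {a: sorted(s.items(), key=lambda kv: -kv[1]) for a, s in scores.items()}
    seen, top = {}, []
    depth, max_depth = 0, max(len(l) for l in lists.values())
    while depth < max_depth:
        last = []
        for a, l in lists.items():
            if depth < len(l):
                obj, sc = l[depth]
                last.append(sc)
                if obj not in seen:
                    # "random access": fetch the object's score in every list
                    seen[obj] = sum(scores[b].get(obj, 0.0) for b in scores)
        depth += 1
        top = heapq.nlargest(n, seen.items(), key=lambda kv: kv[1])
        if len(top) == n and top[-1][1] >= sum(last):  # threshold test
            break
    return [obj for obj, _ in top]

scores = {
    "text":  {"e1": 0.9, "e2": 0.6, "e3": 0.2},
    "graph": {"e1": 0.8, "e2": 0.4, "e3": 0.7},
}
print(threshold_algorithm(scores, 2))
```

Here the scan stops at depth 2 of 3: the threshold drops below the second-best aggregate before the lists are exhausted.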
11. Digitalized Control Algorithm of Bridgeless Totem-Pole PFC with a Simple Control Structure Based on the Phase Angle.
- Author
-
Lee, Gi-Young, Park, Hae-Chan, Ji, Min-Woo, and Kim, Rae-Young
- Subjects
ELECTRIC current rectifiers, ELECTRONIC paper, PHASE-locked loops, ALGORITHMS, ANGLES, VOLTAGE
- Abstract
Compared to the conventional boost power factor correction (PFC) converter, a totem-pole bridgeless PFC has high efficiency because it does not have an input diode rectifier stage, but a current spike may occur when the polarity of the grid voltage changes. This paper proposes a digital control algorithm for bridgeless totem-pole PFC with a simple control structure based on the phase angle of grid voltage. The proposed algorithm has a PI-based double-loop control structure and performs DC-link voltage and input inductor current control. Rectifying switches operate based on the proposed rectification algorithm using phase angle information calculated through a single-phase phase-locked loop (PLL) to prevent current spikes. The feed-forward duty ratio value is calculated according to the polarity of the grid voltage and added to the double-loop controller to perform appropriate power factor control. The performance and feasibility of the proposed control algorithm are verified through a 3 kW hardware prototype. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
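The described control structure is a PI-based double loop: an outer DC-link voltage loop produces a current reference for an inner inductor-current loop, and a feed-forward duty term is added on top. A schematic discrete-time sketch with made-up gains and a feed-forward form assumed from standard boost-PFC practice, not the authors' tuned controller:

```python
class PI:
    """Discrete-time PI controller (forward-Euler integral, output clamp)."""
    def __init__(self, kp, ki, dt, limit):
        self.kp, self.ki, self.dt, self.limit = kp, ki, dt, limit
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        out = self.kp * error + self.ki * self.integral
        return max(-self.limit, min(self.limit, out))

# Outer loop: DC-link voltage error -> inductor current reference.
# Inner loop: current error -> duty ratio, plus a feed-forward term.
voltage_loop = PI(kp=0.5, ki=20.0, dt=1e-4, limit=30.0)
current_loop = PI(kp=0.05, ki=5.0, dt=1e-4, limit=1.0)

def duty_cycle(v_ref, v_dc, i_l, v_grid):
    i_ref = voltage_loop.step(v_ref - v_dc)
    # Feed-forward duty for a boost-type PFC (assumed form, not from
    # the paper): d_ff = 1 - |v_grid| / v_dc.
    d_ff = 1.0 - abs(v_grid) / v_dc
    return max(0.0, min(1.0, current_loop.step(i_ref - i_l) + d_ff))

d = duty_cycle(v_ref=400.0, v_dc=395.0, i_l=2.0, v_grid=100.0)
print(d)
```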
12. Blinded by "algo economicus": Reflecting on the assumptions of algorithmic management research to move forward.
- Author
-
Lamers, Laura, Meijerink, Jeroen, and Rettagliata, Giorgio
- Subjects
PERSONNEL management, REFLECTION (Philosophy), MEDICAL research, MATHEMATICAL models, ECONOMIC impact, CONCEPTUAL structures, ONTOLOGIES (Information retrieval), THEORY, ALGORITHMS, MANAGEMENT, ECONOMICS
- Abstract
This paper reflects on the paradigmatic assumptions and ideologies that have shaped algorithmic management research. We identify two sets of assumptions: one about the "ontology of algorithms" (which holds that human resource management [HRM] algorithms are non‐human entities with material agency) and one about the "ontology of management" that HRM algorithms afford (which understands algorithmic management as a form of control for maximizing economic/shareholder value). We explain how these core assumptions underpin existing research of HRM algorithms, causing blind spots that hinder new ways of understanding and studying algorithmic management. After identifying and unpacking the assumptions and blind spots, we offer avenues to overcome these blind spots, allowing for future research based on new ideological assumption grounds that will help move algorithmic management scholarship further in significant ways. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. A simple one-electron expression for electron rotational factors.
- Author
-
Qiu, Tian, Bhati, Mansi, Tao, Zhen, Bian, Xuezhi, Rawlinson, Jonathan, Littlejohn, Robert G., and Subotnik, Joseph E.
- Subjects
ELECTRONS, ALGORITHMS, WISHES, MATRICES (Mathematics)
- Abstract
Within the context of fewest-switch surface hopping (FSSH) dynamics, one often wishes to remove the angular component of the derivative coupling between states J and K. In a previous set of papers, Shu et al. [J. Phys. Chem. Lett. 11, 1135–1140 (2020)] posited one approach for such a removal based on direct projection, while we isolated a second approach by constructing and differentiating a rotationally invariant basis. Unfortunately, neither approach was able to demonstrate a one-electron operator Ô whose matrix element ⟨J|Ô|K⟩ was the angular component of the derivative coupling. Here, we show that such a one-electron operator can, in fact, be constructed efficiently in a semi-local fashion. The present results yield physical insight into designing new surface hopping algorithms and are of immediate use for FSSH calculations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Minimising total weighted completion time for semi-online single machine scheduling with known arrivals and bounded processing times.
- Author
-
Nouinou, Hajar, Arbaoui, Taha, and Yalaoui, Alice
- Subjects
SCHEDULING, MACHINERY, ALGORITHMS
- Abstract
This paper addresses the semi-online scheduling problem of minimising the total weighted completion time on a single machine, where a combination of information on jobs' release dates and processing times is considered. In this study, jobs can only arrive at known future times, and a lower bound on jobs' processing times is known in advance. A new semi-online algorithm is presented and is shown to be the best possible for the considered problem. To establish this, a new lower bound on the competitive ratio of any semi-online algorithm for the problem is developed and, using competitive analysis, the proposed semi-online algorithm is shown to have a competitive ratio that matches this lower bound. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Impact of learning effect modelling in flowshop scheduling with makespan minimisation based on the Nawaz-Enscore-Ham algorithm.
- Author
-
Paredes-Astudillo, Yenny Alexandra, Botta-Genoulaz, Valérie, and Montoya-Torres, Jairo R.
- Subjects
SIMULATED annealing, PRODUCTION scheduling, SCHEDULING, ALGORITHMS, SCHOOL schedules
- Abstract
Inspired by real-life applications, mainly in hand-intensive manufacturing, the incorporation of learning effects into scheduling problems has garnered attention in recent years. This paper deals with the flowshop scheduling problem with a learning effect when minimising the makespan. Four approaches to modelling the learning effect, well known in the literature, are considered, and mathematical models are provided for each case. A solver allows us to find the optimal solution in small problem instances, while a Simulated Annealing algorithm is proposed to deal with large problem instances. In the latter, the initial solution is obtained using the well-known Nawaz-Enscore-Ham algorithm, and two local search operators are evaluated. Computational experiments are carried out using benchmark datasets from the literature. The Simulated Annealing algorithm shows better results for approaches with fast learning effects than for those with slow learning effects. Finally, for industrial decision makers, some insights are presented into how the learning effect model might affect the makespan-minimisation flowshop scheduling problem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
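The Nawaz-Enscore-Ham (NEH) heuristic used here for the initial solution is standard: order jobs by decreasing total processing time, then insert each job at the position that minimises the partial makespan. A minimal sketch without the learning effect the paper layers on top:

```python
def makespan(seq, p):
    """Permutation flowshop makespan; p[j][k] = time of job j on machine k."""
    c = [0.0] * len(p[0])
    for j in seq:
        prev = 0.0
        for k, t in enumerate(p[j]):
            # C[j][k] = max(C[j-1][k], C[j][k-1]) + p[j][k]
            c[k] = max(c[k], prev) + t
            prev = c[k]
    return c[-1]

def neh(p):
    """NEH: sort jobs by decreasing total processing time, then greedily
    insert each at the position giving the smallest partial makespan."""
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in order:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq

p = [[3, 4], [2, 2], [5, 1]]  # 3 jobs, 2 machines
s = neh(p)
print(s, makespan(s, p))
```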
16. Determining the Moho topography using an improved inversion algorithm: a case study from the South China Sea.
- Author
-
Zhang, Hui, Yu, Hangtao, Xu, Chuang, Li, Rui, Bie, Lu, He, Qingyin, Liu, Yiqi, Lu, Jinsong, Xiao, Yinan, Lyu, Yang, Eldosouky, Ahmed M., and Loureiro, Afonso
- Subjects
MOHOROVICIC discontinuity, OPTIMIZATION algorithms, TOPOGRAPHY, ALGORITHMS
- Abstract
The Parker-Oldenburg method, as a classical frequency-domain algorithm, has been widely used in Moho topographic inversion. The method has two indispensable hyperparameters: the Moho density contrast and the average Moho depth. Accurate hyperparameters are an important prerequisite for inverting fine Moho topography. However, limited by the nonlinear terms, the hyperparameters estimated by previous methods show obvious deviations. For this reason, this paper proposes a new method that improves the existing Parker-Oldenburg method by taking advantage of the invasive weed optimization algorithm for estimating hyperparameters. Synthetic tests show that, compared with the trial-and-error method and the linear regression method, the new method estimates the hyperparameters more accurately and with excellent computational efficiency, which lays the foundation for inverting a more accurate Moho topography. In practice, the method is applied to Moho topographic inversion in the South China Sea. With the constraints of available seismic data, the crust-mantle density contrast and the average Moho depth in the South China Sea are determined to be 0.535 g/cm³ and 21.63 km, respectively, and the Moho topography of the South China Sea is inverted on this basis. The results show that the Moho depth in the study area ranges from 5.7 km to 32.3 km, with obvious undulations. The shallowest parts of the Moho topography are mainly located in the southern part of the Southwestern sub-basin and the southern part of the Manila Trench, at a depth of about 6 km. Compared with the CRUST 1.0 model and the model calculated by the improved Bott's method, the RMS difference between this paper's Moho model and the seismic points is smaller, which demonstrates that the method has advantages in Moho topographic inversion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
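Invasive weed optimization, which the paper uses to estimate the two hyperparameters, lets each candidate scatter "seeds" around itself with a variance that shrinks over iterations, with fitter candidates producing more seeds. A toy sketch on a quadratic stand-in objective whose minimum sits at the paper's reported values (0.535, 21.63); the objective and all parameters here are illustrative, not the gravity-inversion functional:

```python
import random
random.seed(42)

def iwo(objective, bounds, pop=8, pop_max=20, iters=60,
        smin=1, smax=5, sigma0=1.0, sigma_end=0.01):
    """Minimal invasive weed optimization (minimization)."""
    weeds = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for it in range(iters):
        fits = [objective(w) for w in weeds]
        fmin, fmax = min(fits), max(fits)
        # dispersion shrinks quadratically over the run
        sigma = sigma_end + (sigma0 - sigma_end) * ((iters - it) / iters) ** 2
        seeds = []
        for w, f in zip(weeds, fits):
            # fitter weeds (lower objective) scatter more seeds
            ratio = (fmax - f) / (fmax - fmin) if fmax > fmin else 1.0
            for _ in range(smin + round(ratio * (smax - smin))):
                seeds.append([min(max(x + random.gauss(0, sigma), lo), hi)
                              for x, (lo, hi) in zip(w, bounds)])
        # competitive exclusion: keep only the best pop_max plants
        weeds = sorted(weeds + seeds, key=objective)[:pop_max]
    return weeds[0]

# Stand-in objective: minimum at a density-contrast / depth pair.
best = iwo(lambda x: (x[0] - 0.535) ** 2 + (x[1] - 21.63) ** 2,
           bounds=[(0.0, 1.0), (5.0, 40.0)])
print(best)
```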
17. A fully-automated paper ECG digitisation algorithm using deep learning.
- Author
-
Wu, Huiyi, Patel, Kiran Haresh Kumar, Li, Xinyang, Zhang, Bowen, Galazis, Christoforos, Bajaj, Nikesh, Sau, Arunashis, Shi, Xili, Sun, Lin, Tao, Yanda, Al-Qaysi, Harith, Tarusan, Lawrence, Yasmin, Najira, Grewal, Natasha, Kapoor, Gaurika, Waks, Jonathan W., Kramer, Daniel B., Peters, Nicholas S., and Ng, Fu Siong
- Subjects
DEEP learning, ELECTROCARDIOGRAPHY, ELECTRONIC paper, ATRIAL fibrillation, ALGORITHMS, HEART failure, HEART rate monitors
- Abstract
There is increasing focus on applying deep learning methods to electrocardiograms (ECGs), with recent studies showing that neural networks (NNs) can predict future heart failure or atrial fibrillation from the ECG alone. However, large numbers of ECGs are needed to train NNs, and many ECGs currently exist only in paper format, which is not suitable for NN training. We developed a fully-automated online ECG digitisation tool to convert scanned paper ECGs into digital signals. Using automated horizontal and vertical anchor point detection, the algorithm segments the ECG image into separate images for the 12 leads, and a dynamical morphological algorithm is then applied to extract the signal of interest. We then validated the performance of the algorithm on 515 digital ECGs, of which 45 were printed, scanned and redigitised. The automated digitisation tool achieved 99.0% correlation between the digitised signals and the ground truth ECG (n = 515 standard 3-by-4 ECGs) after excluding ECGs with overlap of lead signals. Without exclusion, average correlation ranged from 90% to 97% across the leads on all 3-by-4 ECGs. There was a 97% correlation for 12-by-1 and 3-by-1 ECG formats after excluding ECGs with overlap of lead signals. Without exclusion, the average correlation of some leads in 12-by-1 ECGs was 60–70%, while the average correlation of 3-by-1 ECGs reached 80–90%. For ECGs that were printed, scanned, and redigitised, our tool achieved 96% correlation with the original signals. We have developed and validated a fully-automated, user-friendly, online ECG digitisation tool. Unlike other available tools, it does not require any manual segmentation of ECG signals. Our tool can facilitate the rapid and automated digitisation of large repositories of paper ECGs so that they can be used for deep learning projects. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
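The headline validation metric (correlation between digitised and ground-truth signals) is plain Pearson correlation; a small helper, illustrative rather than the authors' evaluation code:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

reference = [0.0, 0.5, 1.2, 0.3, -0.2, 0.0]   # ground-truth samples (toy)
digitised = [0.01, 0.48, 1.15, 0.33, -0.18, 0.02]  # small digitisation error
print(round(pearson(reference, digitised), 3))
```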
18. Path planning algorithm for percutaneous puncture lung mass biopsy procedure based on the multi-objective constraints and fuzzy optimization.
- Author
-
Zhang, Jiayu, Zhang, Jing, Han, Ping, Chen, Xin-Zu, Zhang, Yu, Li, Wen, Qin, Jing, and He, Ling
- Subjects
OPTIMIZATION algorithms, LUNGS, ALGORITHMS, COMPUTED tomography, BIOPSY, HUMAN fingerprints
- Abstract
Objective. The percutaneous puncture lung mass biopsy procedure, which relies on preoperative CT (Computed Tomography) images, is considered the gold standard for determining the benign or malignant nature of lung masses. However, the traditional lung puncture procedure has several issues, including long operation times, a high probability of complications, and high patient exposure to CT radiation, as it relies heavily on the surgeon's clinical experience. Approach. To address these problems, a multi-constrained objective optimization model based on clinical criteria for the percutaneous puncture lung mass biopsy procedure is proposed. Additionally, based on fuzzy optimization, a multidimensional spatial Pareto front algorithm is developed for optimal path selection. The algorithm finds optimal paths, which are displayed on 3D images, and provides reference points for clinicians' surgical path planning. Main results. To evaluate the algorithm's performance, 25 data sets collected from the Second People's Hospital of Zigong were used in prospective and retrospective experiments. The results demonstrate that 92% of the optimal paths generated by the algorithm meet clinicians' surgical needs. Significance. The algorithm proposed in this paper is innovative in its selection of the mass target point, its integration of constraints based on clinical standards, and its use of a multi-objective optimization algorithm. Comparison experiments have validated its better performance. From a clinical standpoint, the paths proposed by the algorithm have higher clinical feasibility than those of related studies, reducing the dependence of path planning on the physician's expertise and clinical experience during the percutaneous puncture lung mass biopsy procedure. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
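A "multidimensional spatial Pareto front" of candidate paths is the set of non-dominated candidates under the clinical objectives; a minimal dominance filter (toy objective values, not the paper's criteria):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of candidate objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy path candidates: (insertion depth mm, needle angle deg, risk penalty)
paths = [(60, 10, 0.2), (55, 15, 0.1), (70, 5, 0.4), (55, 15, 0.3)]
front = pareto_front(paths)
print(front)
```

The last candidate is dominated (same depth and angle as the second, worse penalty) and drops out; the other three trade off against each other and survive.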
19. RF-KELM indoor positioning algorithm based on WiFi RSS fingerprint.
- Author
-
Hou, Bingnan and Wang, Yanchun
- Subjects
HUMAN fingerprints, MACHINE learning, ALGORITHMS, FINGERPRINT databases, SIGNAL processing, ELECTRONIC data processing
- Abstract
WiFi-based fingerprint indoor positioning technology has attracted wide attention, but it faces the challenge of a lack of robustness to signal changes, while positioning services require fast and accurate position estimation. Therefore, a random forest-kernel extreme learning machine (RF-KELM) positioning algorithm with good overall performance is proposed in this paper. The algorithm includes both offline and online phases. In the offline phase, the original WiFi fingerprint data is first transformed into a form more suitable for positioning. Then, access point (AP) selection is performed on the fingerprint database, which contains many useless APs, using an RF that can evaluate the importance of features. Finally, the KELM is trained with the sub-database that has undergone data transformation and AP selection. In the online phase, the obtained signal is processed first, and the trained KELM is then used to predict the position from the processed signal. In this paper, the performance of the proposed RF-KELM positioning algorithm is thoroughly tested on a publicly available dataset, and the experimental results demonstrate that the proposed algorithm not only has high positioning accuracy and robustness but also takes only 0.08 s to perform online positioning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
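A kernel extreme learning machine reduces training to a single regularized linear solve, beta = (K + I/C)^-1 t. A tiny pure-Python sketch with an RBF kernel on invented RSS-like fingerprints; the paper's RF-based AP selection and its actual dataset are omitted:

```python
from math import exp

def rbf(a, b, gamma=0.01):
    """RBF kernel; gamma chosen for toy RSS magnitudes (illustrative)."""
    return exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kelm_train(X, t, C=100.0):
    """beta = (K + I/C)^-1 t with binary targets t in {-1, +1}."""
    n = len(X)
    K = [[rbf(X[i], X[j]) + (1.0 / C if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    return solve(K, t)

def kelm_predict(X, beta, x):
    return sum(b * rbf(xi, x) for xi, b in zip(X, beta))

# Toy RSS fingerprints (dBm) from two "APs" at two positions
X = [[-40, -70], [-42, -68], [-70, -40], [-68, -43]]
t = [1.0, 1.0, -1.0, -1.0]
beta = kelm_train(X, t)
preds = [1 if kelm_predict(X, beta, x) > 0 else -1 for x in X]
print(preds)
```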
20. Research on 3D point cloud alignment algorithm based on SHOT features.
- Author
-
Fu, Zheng, Zhang, Enzhong, Sun, Ruiyang, Zang, Jiaran, and Zhang, Wei
- Subjects
POINT cloud, ALGORITHMS, FEATURE extraction
- Abstract
To overcome the traditional Iterative Closest Point (ICP) algorithm's requirement for a good initial position of the point cloud, we propose in this paper a point cloud registration method based on normal vectors and Signature of Histograms of Orientations (SHOT) features. Firstly, a hybrid filtering method based on the voxel idea is proposed and verified using measured point cloud data, obtaining noise removal rates of 97.5%, 97.8%, and 93.8%. Secondly, in terms of feature point extraction, the original algorithm is optimized so that the missing parts of the point cloud are better extracted. Finally, a fine alignment method based on normal vectors and SHOT features is proposed, and the improved algorithm is compared with the existing algorithm. Taking the Stanford University point cloud data and self-measured point cloud data as examples, the iteration-error plots show that the improved method reduces the number of iterations by 40.23% and 37.62%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
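Classic ICP, the baseline this paper improves on, alternates nearest-neighbour matching with a closed-form rigid fit. A 2-D pure-Python sketch of that baseline; the paper's SHOT-feature contribution is not reproduced:

```python
from math import sin, cos, atan2

def icp_2d(src, dst, iters=20):
    """Align src to dst: brute-force nearest-neighbour correspondence,
    then closed-form 2-D rigid fit (rotation angle via atan2)."""
    pts = [list(p) for p in src]
    for _ in range(iters):
        pairs = [(p, min(dst, key=lambda q: (p[0]-q[0])**2 + (p[1]-q[1])**2))
                 for p in pts]
        cx = sum(p[0] for p, _ in pairs) / len(pairs)
        cy = sum(p[1] for p, _ in pairs) / len(pairs)
        qx = sum(q[0] for _, q in pairs) / len(pairs)
        qy = sum(q[1] for _, q in pairs) / len(pairs)
        # 2-D Procrustes: optimal rotation from centered cross terms
        sxx = sum((p[0]-cx)*(q[0]-qx) + (p[1]-cy)*(q[1]-qy) for p, q in pairs)
        sxy = sum((p[0]-cx)*(q[1]-qy) - (p[1]-cy)*(q[0]-qx) for p, q in pairs)
        th = atan2(sxy, sxx)
        pts = [[cos(th)*(x-cx) - sin(th)*(y-cy) + qx,
                sin(th)*(x-cx) + cos(th)*(y-cy) + qy] for x, y in pts]
    return pts

# Target = source rotated by 0.3 rad and shifted; ICP should recover it
# (the initial offset is small enough for correct NN correspondences).
src = [(0, 0), (1, 0), (2, 0.5), (1, 1), (0, 2)]
th, tx, ty = 0.3, 0.4, -0.2
dst = [(cos(th)*x - sin(th)*y + tx, sin(th)*x + cos(th)*y + ty) for x, y in src]
aligned = icp_2d(src, dst)
err = max(abs(a[0]-d[0]) + abs(a[1]-d[1]) for a, d in zip(aligned, dst))
print(round(err, 6))
```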
21. Study on tiered storage algorithm based on heat correlation of astronomical data.
- Author
-
Ye, Xin-Chen, Zhang, Hai-Long, Wang, Jie, Zhang, Ya-Zhou, Du, Xu, Wu, Han, and Riccio, Giuseppe
- Subjects
RADIO telescopes, GEODETIC astronomy, PULSAR detection, ELECTRONIC data processing, ALGORITHMS, CLOUD storage
- Abstract
With the surge in astronomical data volume, modern astronomical research faces significant challenges in data storage, processing, and access. The I/O bottleneck in astronomical data processing is particularly prominent, limiting processing efficiency. To address this issue, this paper proposes a tiered storage algorithm based on the access characteristics of astronomical data. The C4.5 decision tree algorithm is employed as the foundation of an astronomical data access correlation algorithm. Additionally, a data copy migration strategy is designed based on tiered storage technology to achieve efficient data access. Preprocessing tests on 418 GB of NSRT (Nanshan Radio Telescope) formaldehyde spectral line data showed that tiered storage can reduce data processing time by up to 38.15%. Similarly, in pulsar search data processing tests using 802.2 GB of data from FAST (Five-hundred-meter Aperture Spherical radio Telescope) observations, the tiered storage approach demonstrated a maximum reduction of 29.00% in data processing time. In concurrent testing of data processing workflows, the proposed astronomical data heat correlation algorithm achieved an average reduction of 17.78% in data processing time compared to centralized storage, and reduced data processing time by 5.15% compared to traditional heat algorithms. The effectiveness of the proposed algorithm is positively correlated with the associativity between the algorithm and the processed data. The tiered storage algorithm proposed in this paper, based on the characteristics of astronomical data, is poised to provide an algorithmic reference for large-scale data processing in astronomy in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
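The tiered-storage idea in entry 21 can be sketched with a toy heat score; this is a minimal illustration assuming a two-tier store and an exponentially decayed access count (the paper's actual C4.5-based access-correlation model is not reproduced):

```python
import time

class TieredStore:
    """Toy two-tier (fast/slow) store: files whose decayed access-heat
    score exceeds a threshold are promoted to the fast tier."""
    def __init__(self, hot_threshold=5.0, half_life=3600.0):
        self.hot_threshold = hot_threshold
        self.decay = 0.5 ** (1.0 / half_life)   # per-second decay factor
        self.heat = {}                           # file -> (score, last access)
        self.fast_tier = set()

    def access(self, name, now=None):
        now = time.time() if now is None else now
        score, last = self.heat.get(name, (0.0, now))
        score = score * (self.decay ** (now - last)) + 1.0
        self.heat[name] = (score, now)
        if score >= self.hot_threshold:
            self.fast_tier.add(name)             # promote (copy migration)
        elif name in self.fast_tier:
            self.fast_tier.discard(name)         # demote cooled-down data
        return score
```

Repeated accesses within the half-life drive a file's score over the threshold and trigger promotion; rarely touched data stays on the slow tier.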
22. Research on fabric surface defect detection algorithm based on improved Yolo_v4.
- Author
-
Li, Yuanyuan, Song, Liyuan, Cai, Yin, Fang, Zhijun, and Tang, Ming
- Subjects
SURFACE defects ,FEATURE extraction ,ALGORITHMS ,INDUSTRIAL sites ,TEXTILES ,PROBLEM solving - Abstract
In industry, defect classification and defect localization are important parts of a defect detection system. However, existing studies focus on only one task, making it difficult to ensure the accuracy of both. This paper proposes a defect detection system based on improved Yolo_v4, which greatly improves the detection of minor defects. To address the strong subjectivity of the K_Means algorithm when clustering prior anchors, the paper applies the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm to determine the number of anchors. To solve the problem of a low detection rate for small targets caused by an insufficient reuse rate of low-level features in the CSPDarknet53 feature extraction network, this paper proposes an ECA-DenseNet-BC-121 feature extraction network to improve it. The Dual Channel Feature Enhancement (DCFE) module is also proposed to mitigate the local information loss and gradient propagation obstruction caused by quad chain convolution in PANet networks, improving the robustness of the model. The experimental results on the fabric surface defect detection datasets show that the mAP of the improved Yolo_v4 is 98.97%, which is 7.67% higher than SSD, 3.75% higher than Faster_RCNN, 10.82% higher than Yolo_v4 tiny, and 5.35% higher than Yolo_v4, and the detection speed reaches 39.4 fps. It can meet the real-time monitoring needs of industrial sites. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
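Entry 22's use of DBSCAN to pick the number of anchors can be sketched as follows; the O(n²) DBSCAN below and the clustering of (width, height) pairs are a minimal stand-in for the paper's pipeline, with `eps` and `min_pts` as assumed tuning knobs:

```python
def dbscan(points, eps, min_pts):
    """Minimal O(n^2) DBSCAN; returns a cluster label per point (-1 = noise)."""
    n = len(points)
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    neighbors = [[j for j in range(n) if dist(points[i], points[j]) <= eps]
                 for i in range(n)]
    labels = [None] * n
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        if len(neighbors[i]) < min_pts:
            labels[i] = -1                 # noise (may later become a border point)
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(neighbors[i])
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster        # claim border point, do not expand
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbors[j]) >= min_pts:
                queue.extend(neighbors[j]) # expand through core points only
    return labels

def anchor_count(box_whs, eps, min_pts):
    """Number of DBSCAN clusters over (width, height) pairs = number of anchors."""
    labels = dbscan(box_whs, eps, min_pts)
    return len({l for l in labels if l >= 0})
```

Unlike K_Means, the cluster count falls out of the density structure of the box dimensions rather than being chosen up front.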
23. Unmanned Aerial Vehicles General Aerial Person-Vehicle Recognition Based on Improved YOLOv8s Algorithm.
- Author
-
Zhijian Liu
- Subjects
DRONE aircraft ,AERIAL photography ,FEATURE extraction ,ALGORITHMS ,REMOTELY piloted vehicles - Abstract
Considering the variations in imaging sizes of the unmanned aerial vehicles (UAV) at different aerial photography heights, as well as the influence of factors such as light and weather, which can result in missed detection and false detection of the model, this paper presents a comprehensive detection model based on the improved lightweight You Only Look Once version 8s (YOLOv8s) algorithm used in natural light and infrared scenes (L_YOLO). The algorithm proposes a special feature pyramid network (SFPN) structure and substitutes most of the neck feature extraction module with the Special deformable convolution feature extraction module (SDCN). Moreover, the model undergoes pruning to eliminate redundant channels. Finally, the non-maximum suppression algorithm of intersection-union ratio based on minimum point distance (MPDIOU_NMS) algorithm has been integrated to eliminate redundant detection boxes, and a comprehensive validation has been conducted using the infrared aerial dataset and the Visdrone2019 dataset. The comprehensive experimental results demonstrate that when the number of parameters and floating-point operations is reduced by 30% and 20%, respectively, there is a 1.2% increase in mean average precision at a threshold of 0.5 (mAP(0.5)) and a 4.8% increase in mAP (0.5:0.95) on the infrared dataset. Finally, the mAP on the Visdrone2019 dataset has experienced an average increase of 12.4%. The accuracy and recall rates have seen respective increases of 9.2% and 3.6%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
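Entry 23's MPDIoU-based NMS can be sketched roughly as below. The MPDIoU form used here (IoU minus squared top-left and bottom-right corner distances, each normalized by the squared image diagonal) is an assumption about the published definition, and the greedy suppression loop is the standard one:

```python
def mpd_iou(a, b, img_w, img_h):
    """a, b = (x1, y1, x2, y2). Assumed MPDIoU: IoU minus normalized squared
    distances between the two boxes' corner pairs."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area_a + area_b - inter)
    norm = img_w ** 2 + img_h ** 2
    d1 = (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2   # top-left corners
    d2 = (a[2] - b[2]) ** 2 + (a[3] - b[3]) ** 2   # bottom-right corners
    return iou - d1 / norm - d2 / norm

def nms(boxes, scores, thr, img_w, img_h):
    """Greedy NMS: keep the highest-score box, drop boxes whose MPDIoU
    with it exceeds the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order
                 if mpd_iou(boxes[i], boxes[j], img_w, img_h) <= thr]
    return keep
```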
24. Face Verification Algorithms for UAV Applications: An Empirical Comparative Analysis.
- Author
-
Diez-Tomillo, Julio, Alcaraz-Calero, Jose M., and Qi Wang
- Subjects
RESCUE work ,ALGORITHMS ,PUBLIC safety ,COMPUTER vision ,PUBLIC administration ,DRONE aircraft - Abstract
Unmanned Aerial Vehicles (UAVs) are revolutionising diverse computer vision use case domains, from public safety surveillance to Search and Rescue (SAR) and other emergency management and disaster relief operations. The growing need for accurate face verification algorithms has prompted an exploration of synergies between UAVs and face verification, promising cost-effective, wide-area, non-intrusive person verification. Real-world human-centric use cases such as a "Drone Guard Angel" for vulnerable people can contribute to public safety management and offload significant police resources. These scenarios demand efficient face verification to correctly distinguish end users for authentication, authorisation and customised services. This paper investigates the suitability of existing solutions and analyses five state-of-the-art candidate face verification algorithms. Informed by the advantages and disadvantages of existing solutions, the paper proposes an extended dataset and a refined face verification pipeline. Subsequently, it conducts an empirical evaluation of these algorithms using the proposed pipeline and dataset in terms of inference times and the distribution of similarity indexes. Furthermore, this paper provides essential guidance for algorithm selection and deployment in UAV-based applications. Two candidate algorithms, ArcFace and FaceNet512, have emerged as the top performers. The choice between them will depend on the specific use case requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
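The verification decision at the core of entry 24 reduces to thresholding a similarity index between two face embeddings; a minimal sketch, with cosine similarity and an assumed (use-case dependent) threshold of 0.6:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def verify(embedding_a, embedding_b, threshold=0.6):
    """Accept the identity claim if the similarity clears the threshold.
    The threshold trades false accepts against false rejects."""
    return cosine_similarity(embedding_a, embedding_b) >= threshold
```

In practice the embeddings come from a model such as ArcFace or FaceNet512, and the threshold is tuned on the similarity-index distributions the paper measures.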
25. Visual recognition and location algorithm based on optimized YOLOv3 detector and RGB depth camera.
- Author
-
He, Bin, Qian, Shusheng, and Niu, Yongchao
- Subjects
DETECTORS ,DIAMETER ,TOMATOES ,TRACKING algorithms ,CAMERAS ,ALGORITHMS - Abstract
Fruit recognition and location are the premises of robot automatic picking. YOLOv3 has been used to detect different fruits in complex environments. However, for objects with definite features, the complex network structure increases computing time and may cause overfitting. Therefore, this paper carries out a lightweight design of YOLOv3 and proposes an improved T-Net to detect tomato images. First, the T-Net reduces the residual network layers: the number of cycles in each group of the residual unit is changed to 1, 2, 2, 1, and 1. Second, two feature layers with different scales are selected according to the features of tomatoes, and the convolutional layers at the neck are reduced by two. Finally, the location and approximate diameter of the ripe tomato are obtained by combining the node information of the Intel D435i camera and T-Net in the Robot Operating System. T-Net obtains a mean average precision (mAP) of 99.2%, F1-score of 98.9%, precision of 99.0%, and recall of 98.8% at a detection rate of 104.2 FPS. The proposed T-Net outperforms YOLOv3 with increases of 0.4%, 0.1%, and 0.2% in precision, mAP, and F1-score, respectively. The detection speed of T-Net is 1.8 times faster than YOLOv3. The mean errors of the center coordinates and diameter of the tomato are 8.5 mm and 2.5 mm, respectively. This model provides a method for efficient real-time detection and location of tomatoes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
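Entry 25's location step (combining a detection box with Intel D435i depth) can be sketched as pinhole back-projection; the intrinsics `fx, fy, cx, cy` here are assumed camera parameters, not values from the paper:

```python
def locate(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with depth Z (metres) through a pinhole
    camera model to camera-frame coordinates (X, Y, Z)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

def diameter(px_width, depth_m, fx):
    """Approximate physical diameter from the bounding-box pixel width,
    assuming the fruit is roughly fronto-parallel at the measured depth."""
    return px_width * depth_m / fx
```

A pixel at the principal point maps to (0, 0, Z), and a 60 px wide box at 1 m with fx = 600 corresponds to roughly a 10 cm fruit.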
26. Combining Improved Meanshift and Adaptive Shi-Tomasi Algorithms for a Photovoltaic Panel Segmentation Strategy.
- Author
-
Huang, Chao, Chao, Xuewei, Zhou, Weiji, and Gong, Lijiao
- Subjects
IMAGE segmentation ,ALGORITHMS - Abstract
To achieve effective and accurate segmentation of photovoltaic panels in various working contexts, this paper proposes a comprehensive image segmentation strategy that integrates an improved Meanshift algorithm and an adaptive Shi-Tomasi algorithm. This approach effectively addresses the challenge of low precision in segmenting target regions and boundary contours in routine photovoltaic panel inspection. Firstly, based on the image information of photovoltaic panels collected under different environments by cameras, an improved Meanshift algorithm based on platform histogram optimization is used for preliminary processing, and images containing target information are cut out; then, the adaptive Shi-Tomasi algorithm is used to extract and screen feature points from the target area; finally, the extracted feature points generate the segmentation contour of the target photovoltaic panel, achieving accurate segmentation of the target area and boundary contour of the photovoltaic panel. Experiments verified that in photovoltaic panel images under different background environments, the method proposed in this paper enhances the accuracy of segmenting the target area and boundary contour of photovoltaic panels. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Maneuvering Decision Making Based on Cloud Modeling Algorithm for UAV Evasion–Pursuit Game.
- Author
-
Huang, Hanqiao, Weng, Weiye, Zhou, Huan, Jiang, Zijian, and Dong, Yue
- Subjects
MANEUVERING boards ,DECISION making ,DRONE aircraft ,ALGORITHMS - Abstract
When facing aerial pursuit game problems, most current unmanned aerial vehicles (UAVs) have good maneuverability, but it is difficult to exploit their overload maneuverability properly; UAVs also tend to be costly, and in the aerial pursuit game it is often difficult to effectively prevent the enemy from reaching the tailgating position behind the UAV. There is therefore a pressing need for a maneuvering algorithm that allows a UAV to quickly protect itself from a disadvantageous position and to stably and effectively select maneuvers that establish an advantageous position. This paper establishes a cloud model-based UAV maneuvering decision-making model for the aerial pursuit game, built on pursuit-and-evasion game positions. Based on the evaluation of these positions, when the UAV is at a disadvantage, a constructed pool of expert defensive maneuvers is used to abandon the disadvantageous position; when the UAV is at an advantage, cloud model-based pursuit-and-evasion maneuvering decision making is used to establish an advantageous position. Simulation examples confirm that the designed maneuvering decision-making method enables the UAV to quickly abandon its position and establish an advantage in cases of parity or disadvantage, and to stably establish a tail-chasing position when at an advantage. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
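The forward normal cloud generator underlying entry 27's cloud model (expectation Ex, entropy En, hyper-entropy He) can be sketched as follows; the maneuver-selection logic built on top of it is not reproduced here:

```python
import math
import random

def cloud_drops(ex, en, he, n, seed=0):
    """Forward normal cloud generator. For each drop: sample a per-drop
    entropy En' ~ N(En, He^2), then a drop x ~ N(Ex, En'^2), and compute
    its membership degree mu = exp(-(x - Ex)^2 / (2 En'^2))."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        en_i = abs(rng.gauss(en, he))    # per-drop entropy
        x = rng.gauss(ex, en_i)          # the cloud drop
        mu = math.exp(-(x - ex) ** 2 / (2 * en_i ** 2)) if en_i > 0 else 1.0
        out.append((x, mu))
    return out
```

The hyper-entropy He controls how "fuzzy" the cloud's edge is; with He = 0 this degenerates to a plain Gaussian membership function.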
28. Taming Algorithmic Priority Inversion in Mission-Critical Perception Pipelines.
- Author
-
Liu, Shengzhong, Yao, Shuochao, Fu, Xinzhe, Tabish, Rohan, Yu, Simon, Bansal, Ayoosh, Yun, Heechul, Sha, Lui, and Abdelzaher, Tarek
- Subjects
ALGORITHMS ,SYSTEMS design ,CYBER physical systems ,COMPUTER scheduling ,ARTIFICIAL intelligence ,ARTIFICIAL neural networks ,FIRST in, first out (Queuing theory) - Abstract
The paper discusses algorithmic priority inversion in mission-critical machine inference pipelines used in modern neural-network-based perception subsystems and describes a solution to mitigate its effect. In general, priority inversion occurs in computing systems when computations that are "less important" are performed together with or ahead of those that are "more important." Significant priority inversion occurs in existing machine inference pipelines when they do not differentiate between critical and less critical data. We describe a framework to resolve this problem and demonstrate that it improves a perception system's ability to react to critical inputs, while at the same time reducing platform cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
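The fix described in entry 28 amounts to not running critical and non-critical inputs through one FIFO; a minimal sketch of a two-level queue in which critical data always preempts:

```python
from collections import deque

class CriticalFirstQueue:
    """Two-level queue avoiding algorithmic priority inversion:
    critical frames are always served before the FIFO of non-critical ones."""
    def __init__(self):
        self.critical = deque()
        self.normal = deque()

    def put(self, item, critical=False):
        (self.critical if critical else self.normal).append(item)

    def get(self):
        if self.critical:
            return self.critical.popleft()
        if self.normal:
            return self.normal.popleft()
        raise IndexError("empty")
```

A single FIFO would serve `frame_a` and `frame_b` before the critical detection; the two-level queue serves the critical item first, which is the essence of the framework's de-prioritization of "less important" computations.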
29. Infrared image enhancement algorithm based on detail enhancement guided image filtering.
- Author
-
Tan, Ailing, Liao, Hongping, Zhang, Bozhi, Gao, Meijing, Li, Shiyu, Bai, Yang, and Liu, Zehao
- Subjects
IMAGE intensifiers ,INFRARED imaging ,COST functions ,ENTROPY (Information theory) ,ALGORITHMS ,ENTROPY ,SIGNAL-to-noise ratio ,QUANTUM noise ,QUANTUM entropy - Abstract
Because of the unique imaging mechanism of infrared (IR) sensors, IR images commonly suffer from blurred edge details, low contrast, and poor signal-to-noise ratio. A new method is proposed in this paper to enhance IR image details so that the enhanced images effectively suppress image noise and improve image contrast while enhancing detail. First, because the traditional guided image filter (GIF) is prone to halo artifacts when applied to IR image enhancement, this paper proposes a detail enhancement guided filter (DGIF), which adds constructed edge perception and detail regulation factors to the cost function of the GIF. Then, according to the visual characteristics of human eyes, the detail regulation factor is applied to detail layer enhancement, which avoids the noise amplification caused by enhancement with a fixed gain coefficient. Finally, the enhanced detail layer is directly fused with the base layer so that the enhanced image has rich detail information. We first compare the DGIF with four guided image filters, and then compare our algorithm with three traditional IR image enhancement algorithms and two GIF-based IR image enhancement algorithms on 20 IR images. The experimental results show that the DGIF has better edge-preserving and smoothing characteristics than the four guided image filters. The mean values of the quantitative evaluations of information entropy, average gradient, edge intensity, figure definition, and root-mean-square contrast of the enhanced images achieved improvements of about 0.23%, 3.4%, 4.3%, 2.1%, and 0.17%, respectively, over the optimal parameter. This shows that the algorithm can effectively suppress image noise in the detail layer while enhancing detail information and improving image contrast, yielding a better visual effect. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
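Entry 29's base/detail scheme can be sketched in 1D: smooth to get a base layer, amplify the residual detail, and fuse. The moving-average smoother below is only a stand-in for the guided filter, and the fixed `gain` is exactly what the paper's adaptive detail-regulation factor replaces:

```python
def box_filter(signal, radius):
    """Simple moving-average smoother standing in for the guided filter."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def enhance(signal, radius=2, gain=2.0):
    """Base/detail decomposition: smooth to get the base layer, amplify
    the residual detail layer, then fuse them back together."""
    base = box_filter(signal, radius)
    return [b + gain * (s - b) for s, b in zip(signal, base)]
```

A flat region passes through unchanged, while edges get steeper; a fixed gain amplifies noise along with detail, which motivates the adaptive factor.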
30. SOFTWARE DEFECT PREDICTION APPROACHES REVISITED.
- Author
-
Shebl, Khaled S., Afify, Yasmine M., and Badr, Nagwa
- Subjects
SEMANTICS ,DATABASES ,ALGORITHMS ,COMPUTER software testing ,MACHINE learning - Abstract
A crucial field in software development and testing is Software Defect Prediction (SDP) because the quality, dependability, efficiency, and cost of the software are all improved by forecasting software defects at an earlier stage. Many existing models predict defects to facilitate software testing process for testers. A comprehensive review of these models from different perspectives is crucial to help new researchers enter this field and learn about its latest developments. Algorithms, method types, datasets, and tools were the only perspectives discussed in the current literature. A comprehensive study that takes into account a wide spectrum of viewpoints hasn't yet been published. Examining the development and advancement of SDP-related studies is the goal of this literature review. It provides a comprehensive and updated state-of-the-art that satisfies all stated criteria. Out of 591 papers retrieved from 6 reputable databases, 73 papers were eligible for analysis. This review addresses relevant research questions regarding techniques & method types, data details, tools, code syntax, semantics, structural and domain information. Motivation to conduct this comprehensive review is to equip the readers with the necessary information and keep them informed about the software defect prediction domain. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
31. Utilizing tables, figures, charts and graphs to enhance the readability of a research paper.
- Author
-
Divecha C. A., Tullu M. S., and Karande S.
- Subjects
GRAPHIC arts ,READABILITY (Literary style) ,SERIAL publications ,RESEARCH methodology ,COPYRIGHT ,MEDICAL research ,ALGORITHMS - Abstract
The authors offer observations on utilizing tables, figures, charts and graphs to present research in a simple manner while engaging and sustaining the reader's interest. Topics discussed include the benefits of using tables/figures/charts/graphs, the general methodology of design and submission, and copyright issues when using material from government publications or the public domain.
- Published
- 2023
- Full Text
- View/download PDF
32. Committee-Based Blockchains as Games between Opportunistic Players and Adversaries.
- Author
-
Amoussou-Guenou, Yackolley, Biais, Bruno, Potop-Butucaru, Maria, and Tucci-Piergiovanni, Sara
- Subjects
BLOCKCHAINS ,COMMITTEES ,GAMES ,COMPUTER network protocols ,ALGORITHMS - Abstract
We study consensus in a protocol capturing in a simplified manner the major features of the majority of Proof of Stake blockchains. A committee is formed; one member proposes a block; and the others can check its validity and vote for it. Blocks with a majority of votes are produced. When an invalid block is produced, the stakes of the members who voted for it are "slashed." Profit-maximizing members interact with adversaries seeking to disrupt consensus. When slashing is limited, free-riding and moral hazard lead to invalid blocks in equilibrium. We propose a protocol modification producing only valid blocks in equilibrium. The authors have furnished an Internet Appendix, which is available on the Oxford University Press Web site next to the link to the final published paper online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Recursive decomposition/aggregation algorithms for performance metrics calculation in multi-level assembly/disassembly production systems with exponential reliability machines.
- Author
-
Bai, Yishu and Zhang, Liang
- Subjects
RELIABILITY in engineering ,MANUFACTURING processes ,VIRTUAL machine systems ,ALGORITHMS ,MACHINERY - Abstract
Developing accurate and computationally efficient algorithms for system performance metrics calculation is critical to implementing effective control and optimization in manufacturing system operations. In this paper, we propose a recursive decomposition/aggregation-based method for calculating the performance metrics of assembly/disassembly systems with multiple merge/split operations and sub-assemblies. It is assumed that the machines follow the exponential reliability model and the buffers are of finite capacity. To achieve this, we first consider assembly systems with multiple component lines merging at a single assembly operation. By decomposing the system into a set of virtual serial lines, we derive an analytical procedure to approximate the starvation and blockage probabilities of the merge operation, which are used to recursively update the parameters of the virtual serial lines. Then, the performance metrics of the original assembly system are approximated based on the corresponding machines and buffers in these virtual serial lines. Next, we extend the algorithm to assembly/disassembly systems with multiple merge/split operations and sub-assemblies. This is accomplished by identifying the so-called assembly/disassembly units formed based on the virtual serial lines and applying the calculations derived earlier recursively. Simulation experiments are carried out to justify the convergence, computational efficiency, and approximation accuracy of the proposed algorithms. An industrial case study is presented to demonstrate the theoretical methods in practical applications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. A novel automatic annotation method for whole slide pathological images combined clustering and edge detection technique.
- Author
-
Ding, Wei‐long, Liao, Wan‐yin, Zhu, Xiao‐jie, and Zhu, Hong‐bo
- Subjects
SUPERVISED learning ,DEEP learning ,ANNOTATIONS ,IMAGE processing ,ALGORITHMS ,PIXELS - Abstract
Pixel‐level labeling of regions of interest in an image is a key step in building a labeled training dataset for supervised deep learning networks of images. However, traditional manual labeling of cancerous regions in digital pathological images by doctors is time‐consuming and inefficient. To address this issue, this paper proposes an automatic labeling method for whole slide images, which combines clustering and edge detection techniques. The proposed method utilizes the multi‐level feature fusion model and the Long‐Short Term Memory network to discriminate the cancerous nature of the whole slide images, thereby improving the classification accuracy of the whole slide images. Subsequently, the automatic labeling of cancerous regions is achieved by integrating a density‐based clustering algorithm and an edge point extraction algorithm, both based on the discriminated results of the cancerous properties of whole slide images. The experimental results demonstrate the effectiveness of the proposed method, which offers an efficient and accurate solution to the challenging task of cancerous region labeling in digital pathological images. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. DESIGN OF SMART HOME SYSTEM BASED ON WIRELESS SENSOR NETWORK LINK STATUS AWARENESS ALGORITHM.
- Author
-
RONG XU
- Subjects
INTELLIGENT sensors ,WIRELESS sensor networks ,SMART homes ,DOMESTIC architecture ,ROUTING algorithms ,ALGORITHMS - Abstract
When wireless sensor networks are used in smart homes, the connection state becomes unstable due to signal masking attenuation, causing a low packet delivery rate, high delay, and high cost in the network. In this paper, a wireless sensor network routing algorithm based on link conditions is designed. The expected number of transmissions is proposed to evaluate link stability, and on this basis the upcoming signal delivery conditions are forecast quickly and in real time. According to the estimated expected number of transmissions, the path is dynamically corrected to effectively avoid channel attenuation and achieve optimal system performance. Experimental results show that the proposed method can improve the efficiency of message sending and reduce the routing cost under masking effects. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
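Entry 35's idea of routing on a link-stability metric can be sketched with an expected-transmission-count (ETX) weight and Dijkstra; the ETX formula from forward/reverse delivery ratios is the standard form and is assumed here, not necessarily the paper's estimator:

```python
import heapq

def etx(delivery_ratio_fwd, delivery_ratio_rev):
    """Expected transmission count of a link from its two delivery ratios."""
    return 1.0 / (delivery_ratio_fwd * delivery_ratio_rev)

def best_path(graph, src, dst):
    """Dijkstra over ETX link weights; graph: node -> {neighbor: etx}."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]
```

Re-estimating ETX as links fade and rerunning the search is the dynamic path correction the abstract describes.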
36. Mining research on correlation factors of residential electricity stability based on improved FP-growth algorithm.
- Author
-
Pan, Hua and Liu, Rong
- Subjects
ELECTRIC power consumption ,CONSUMPTION (Economics) ,ENERGY consumption ,ALGORITHMS ,ELECTRICITY ,CONSUMERS - Abstract
Purpose: This paper aims, on the one hand, to further understand residents' differentiated power consumption behaviors and to extract residential household characteristic labels from the perspective of electricity stability; on the other hand, it addresses the lack of causal relationships in existing association analyses of residential electricity consumption behavior and basic information data. Design/methodology/approach: First, the density-based spatial clustering of applications with noise method is used to extract the typical daily load curve of residents. Second, the degree of electricity consumption stability is described from three perspectives: daily minimum load rate, daily load rate and daily load fluctuation rate, and is evaluated comprehensively using the entropy weight method. Finally, residential customer labels are constructed from sociological characteristics, residential characteristics and energy use attitudes, and the enhanced FP-growth algorithm is employed to investigate potential links between each factor and the stability of electricity consumption. Findings: Compared with the original FP-growth algorithm, the improved algorithm can mine rules containing specific attribute labels, which improves mining efficiency. In terms of factors influencing electricity stability, characteristics such as a large number of family members, being well employed, having children in the household and newer dwellings may all lead to poorer electricity stability, but residents' attitudes toward energy use and dwelling type are not significantly associated with electricity stability. Originality/value: This paper aims to uncover household socioeconomic traits that influence the stability of home electricity use and to shed light on the intricate connections between them.
First, from the perspective of electricity stability, this article refines the power consumption characteristics of residential users, and the authors use the entropy weight method to comprehensively evaluate the stability of electricity usage. Second, the labels of residential users' household characteristics are screened and organized. Finally, the improved FP-growth algorithm is used to mine the residential household characteristic labels that are strongly associated with electricity consumption stability. Highlights: The stability of electricity consumption is important to the stable operation of the grid. An improved FP-growth algorithm is employed to explore the influencing factors. The improved algorithm enables the mining of rules containing specific attribute labels. Residents' attitudes toward energy use are largely unrelated to the stability of electricity use. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
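The entropy weight method used in entry 36 to combine the three load-rate indicators can be sketched as below (indicators are assumed already oriented so that larger is better; the values are illustrative, not from the paper):

```python
import math

def entropy_weights(matrix):
    """Entropy weight method. Rows = samples, columns = indicators.
    A column whose values vary more carries more information (lower
    entropy) and therefore receives a larger weight."""
    m, n = len(matrix), len(matrix[0])
    entropies = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [c / total for c in col]                      # column proportions
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        entropies.append(e)
    d = [1 - e for e in entropies]                        # divergence degrees
    s = sum(d)
    return [di / s for di in d]
```

A perfectly uniform indicator gets (near-)zero weight; the final stability score is the weighted sum of the normalized indicators.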
37. Efficient load balancing Adaptive BNBKnapsack Algorithm for Edge computing to improve performance of network.
- Author
-
Nagle, Malti and Kumar, Prakash
- Subjects
NETWORK performance ,EDGE computing ,ALGORITHMS ,LOAD balancing (Computer networks) ,ENERGY consumption ,HOSPITALS ,ROUTING algorithms - Abstract
INTRODUCTION: Nowadays, automation of everything has become essential, and the Internet of Things (IoT) plays an important role among the medical advances of IT. In this paper, feasible solutions are discussed to compare and design better healthcare systems. A thorough investigation and survey of suitable approaches was conducted for selecting IoT-based systems in hospitals consisting of various high-precision sensors. OBJECTIVES: The first challenge healthcare systems face is managing real-time patients' data with high accuracy. The second challenge, at the fog-device level, is managing load distribution to all sensors with limited available bandwidth. METHODS: This paper summarizes the selection criteria for suitable load balancing algorithms to reduce the energy consumption and computational cost of fog devices and increase network usage in IoT-based healthcare systems. According to the survey, the BNBKnapsack algorithm was selected as the most suitable approach to analyze the overall performance of fog devices, and the results verify this. RESULTS: A comparative analysis of the overall performance of fog devices is presented using the SJF algorithm and the Adaptive BNBKnapsack algorithm. Analysis of system performance shows that, among the other load balancing algorithms, Adaptive BNBKnapsack successfully reduces energy consumption by 99.29% and computational cost by 98.34%, and increases network usage by 99.95%. CONCLUSION: Adaptive BNBKnapsack load balancing successfully reduces the computational cost and energy consumption and increases the network usage of the fog network; its performance is found to be the best among the other load balancing algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
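Entry 37 names an Adaptive BNBKnapsack algorithm; a plain branch-and-bound 0/1 knapsack, which such a scheme would build on, can be sketched as follows (the adaptive, fog-specific parts are not reproduced):

```python
def bnb_knapsack(values, weights, capacity):
    """0/1 knapsack via depth-first branch and bound with a fractional
    (LP-relaxation) upper bound; returns (best_value, chosen_indices)."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    best = [0, []]

    def bound(k, cap):
        # Optimistic bound: greedily fill remaining capacity, allowing
        # one fractional item.
        ub = 0.0
        for i in order[k:]:
            if weights[i] <= cap:
                cap -= weights[i]
                ub += values[i]
            else:
                return ub + values[i] * cap / weights[i]
        return ub

    def dfs(k, cap, val, picked):
        if val > best[0]:
            best[0], best[1] = val, picked[:]
        if k == len(order) or val + bound(k, cap) <= best[0]:
            return                                    # prune this subtree
        i = order[k]
        if weights[i] <= cap:
            picked.append(i)
            dfs(k + 1, cap - weights[i], val + values[i], picked)
            picked.pop()
        dfs(k + 1, cap, val, picked)

    dfs(0, capacity, 0, [])
    return best[0], sorted(best[1])
```

In the load-balancing reading, "items" are tasks with value and resource cost, and "capacity" is a fog device's budget.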
38. A novel differential evolution algorithm with multi-population and elites regeneration.
- Author
-
Cao, Yang and Luan, Jingzheng
- Subjects
DIFFERENTIAL evolution ,EVOLUTIONARY algorithms ,DISTRIBUTION (Probability theory) ,ALGORITHMS ,GLOBAL optimization - Abstract
Differential Evolution (DE) is widely recognized as a highly effective evolutionary algorithm for global optimization. It has proven its efficacy in tackling diverse problems across various fields and real-world applications, and boasts several advantages, such as ease of implementation, reliability, speed, and adaptability. However, DE has certain limitations, such as suboptimal solution exploitation and challenging parameter tuning. To address these challenges, this paper introduces a novel algorithm called Enhanced Binary JADE (EBJADE), which combines differential evolution with multi-population and elites regeneration. The primary innovation lies in a mutation strategy with enhanced exploitation capability, based on sorting three vectors from the current generation to perturb the target vector; by introducing directional differences, it guides the search towards improved solutions. Additionally, this study adopts a multi-population method with a rewarding subpopulation to dynamically adjust the allocation of two different mutation strategies. Finally, the paper incorporates the elite-sampling concept from the Estimation of Distribution Algorithm (EDA) to regenerate new solutions through the selection process in DE. Experimental results on the CEC2014 benchmark tests demonstrate the strong competitiveness and superior performance of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
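Entry 38 builds on JADE; the DE/current-to-pbest/1 mutation that JADE popularized can be sketched as below, with `f`, `p`, and the fitness convention (lower is better) taken as assumptions rather than the paper's exact settings:

```python
import random

def mutate_current_to_pbest(pop, fitness, i, f=0.5, p=0.2, rng=None):
    """DE/current-to-pbest/1 mutation (the scheme JADE builds on):
    v = x_i + F*(x_pbest - x_i) + F*(x_r1 - x_r2), where x_pbest is drawn
    from the best p*100% of the population (lower fitness = better)."""
    rng = rng or random.Random(0)
    n = len(pop)
    top = sorted(range(n), key=lambda k: fitness[k])[:max(1, int(p * n))]
    pbest = pop[rng.choice(top)]
    r1, r2 = rng.sample([k for k in range(n) if k != i], 2)
    return [xi + f * (pb - xi) + f * (pop[r1][d] - pop[r2][d])
            for d, (xi, pb) in enumerate(zip(pop[i], pbest))]
```

The pbest term pulls the target toward elite solutions (exploitation) while the random difference term preserves exploration; EBJADE's sorted-vector strategy modifies how these difference directions are chosen.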
39. A Hardware Implementation of the PID Algorithm Using Floating-Point Arithmetic.
- Author
-
Kulisz, Józef and Jokiel, Filip
- Subjects
FLOATING-point arithmetic ,DIGITAL signal processing ,GATE array circuits ,ALGORITHMS ,HARDWARE - Abstract
The purpose of the paper is to propose a new implementation of the PID (proportional–integral–derivative) algorithm in digital hardware. The proposed structure is optimized for cost. It follows a serialized, rather than parallel, scheme. It uses only one arithmetic block, performing the multiply-and-add operation. The calculations are carried out in a sequentially cyclic manner. The proposed circuit operates on standard single-precision (32-bit) floating-point numbers. It implements an extended PID formula, containing a non-ideal derivative component, and weighting coefficients, which enable reducing the influence of setpoint changes in the proportional and derivative components. The circuit was implemented in a Cyclone V FPGA (Field-Programmable Gate Array) device from Intel, Santa Clara, CA, USA. The proper operation of the circuit was verified in a simulation. For the specific implementation, which is reported in the paper, the sampling period of 516 ns was obtained, which means that the proposed solution is comparable in terms of speed with other hardware implementations of the PID algorithm operating on single-precision floating-point numbers. However, the presented solution is much more efficient in terms of cost. It uses 1173 LUT (Look-up Table) blocks, 1026 registers, and 1 DSP (Digital Signal Processing) block, i.e., about 30% of logic resources required by comparable solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
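The extended PID formula described in entry 39 (setpoint weighting plus a non-ideal, filtered derivative) can be sketched in software; the discretization below is one common choice, not the paper's exact hardware datapath, and the coefficients are illustrative:

```python
class PID:
    """Discrete PID with setpoint weighting (b on P, c on D) and a
    first-order filtered (non-ideal) derivative with filter constant N."""
    def __init__(self, kp, ki, kd, ts, b=1.0, c=0.0, n=10.0):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.b, self.c, self.n = b, c, n
        self.integral = 0.0
        self.d_state = 0.0
        self.prev_ed = 0.0

    def step(self, setpoint, measured):
        p = self.kp * (self.b * setpoint - measured)          # weighted P
        self.integral += self.ki * self.ts * (setpoint - measured)
        ed = self.c * setpoint - measured                      # weighted D input
        alpha = self.n * self.ts / (1.0 + self.n * self.ts)    # derivative filter
        raw_d = self.kd * (ed - self.prev_ed) / self.ts
        self.d_state += alpha * (raw_d - self.d_state)
        self.prev_ed = ed
        return p + self.integral + self.d_state
```

Setting b < 1 and c = 0 reduces the proportional and derivative kicks on setpoint changes, which is the purpose of the weighting coefficients the abstract mentions. In the paper's serialized hardware, each multiply-accumulate in `step` would share one floating-point multiply-and-add block across clock cycles.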
40. Differentiated Security Requirements: An Exploration of Microservice Placement Algorithms in Internet of Vehicles.
- Author
-
Zhang, Xing, Liang, Jun, Lu, Yuxi, Zhang, Peiying, and Bi, Yanxian
- Subjects
REINFORCEMENT learning ,TECHNOLOGICAL innovations ,ALGORITHMS ,INTERNET ,COMPUTER software development ,INTERNET of things - Abstract
In recent years, microservices, as an emerging technology in software development, have been favored by developers for their lightweight and low-coupling features and rapidly applied to the Internet of Things (IoT) and Internet of Vehicles (IoV). Microservices deployed in each unit of the IoV use wireless links to transmit data, which exposes a larger attack surface; precisely because of these features, the secure and efficient placement of microservices in this environment poses a serious challenge. Improving the security of all nodes in an IoV can significantly increase the service provider's operational costs and create security resource redundancy. As reinforcement learning matures, carefully designed agents enable faster algorithm convergence and perform well in large-scale data environments. Inspired by this, this paper first models the placement network and placement behavior abstractly and sets security constraints. The environment information is fully extracted, and an asynchronous reinforcement-learning-based algorithm is designed to improve microservice placement and reduce security redundancy while ensuring the security requirements of microservices. The experimental results show that the proposed algorithm performs well in terms of the fit between the security index and user requirements, and in request acceptance rate. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
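As a point of contrast with the reinforcement-learning placement the abstract describes, a naive security-constrained baseline can be sketched in a few lines (node and microservice attributes here are invented for illustration, not from the paper):

```python
def place(microservices, nodes):
    """Greedy baseline: put each microservice on the cheapest node that meets
    its security requirement and still has capacity. The paper replaces this
    kind of heuristic with an asynchronous RL agent.

    microservices: {name: (security_requirement, resource_demand)}
    nodes: {name: (security_level, capacity, cost)} -- mutated as capacity is used."""
    placement = {}
    for ms, (sec_req, demand) in microservices.items():
        candidates = [n for n, (sec, cap, cost) in nodes.items()
                      if sec >= sec_req and cap >= demand]
        if not candidates:
            placement[ms] = None        # request rejected
            continue
        best = min(candidates, key=lambda n: nodes[n][2])
        sec, cap, cost = nodes[best]
        nodes[best] = (sec, cap - demand, cost)
        placement[ms] = best
    return placement
```

Matching security level to requirement rather than maximizing it everywhere is what avoids the security-resource redundancy the abstract mentions.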
41. Time–Frequency Signal Integrity Monitoring Algorithm Based on Temperature Compensation Frequency Bias Combination Model.
- Author
-
Guo, Yu, Li, Zongnan, Gong, Hang, Peng, Jing, and Ou, Gang
- Subjects
SIGNAL integrity (Electronics) ,TIME-frequency analysis ,ATOMIC clocks ,ARTIFICIAL satellites in navigation ,ALGORITHMS ,TIME measurements ,X chromosome - Abstract
To ensure the long-term stable and uninterrupted service of satellite navigation systems, the robustness and reliability of time–frequency systems are crucial. Integrity monitoring is an effective method to enhance the robustness and reliability of time–frequency systems. Time–frequency signals are fundamental for integrity monitoring, with their time differences and frequency biases serving as essential indicators. These indicators are influenced by the inherent characteristics of the time–frequency signals, as well as the links and equipment they traverse. Meanwhile, existing research primarily focuses on only monitoring the integrity of the time–frequency signals' output by the atomic clock group, neglecting the integrity monitoring of the time–frequency signals generated and distributed by the time–frequency signal generation and distribution subsystem. This paper introduces a time–frequency signal integrity monitoring algorithm based on the temperature compensation frequency bias combination model. By analyzing the characteristics of time difference measurements, constructing the temperature compensation frequency bias combination model, and extracting and monitoring noise and frequency bias features from the time difference measurements, the algorithm achieves comprehensive time–frequency signal integrity monitoring. Experimental results demonstrate that the algorithm can effectively detect, identify, and alert users to time–frequency signal faults. Additionally, the model and the integrity monitoring parameters developed in this paper exhibit high adaptability, making them directly applicable to the integrity monitoring of time–frequency signals across various links. Compared with traditional monitoring algorithms, the algorithm proposed in this paper greatly improves the effectiveness, adaptability, and real-time performance of time–frequency signal integrity monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
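The abstract does not give the combination model's equations; a minimal stand-in, assuming a simple linear temperature-to-frequency-bias relation and a fixed residual threshold, conveys the monitoring idea:

```python
def fit_temp_model(temps, biases):
    """Least-squares fit of bias ~ a*temp + b: a toy stand-in for the
    temperature-compensation frequency-bias combination model."""
    n = len(temps)
    mt = sum(temps) / n
    mb = sum(biases) / n
    cov = sum((t - mt) * (y - mb) for t, y in zip(temps, biases))
    var = sum((t - mt) ** 2 for t in temps)
    a = cov / var
    return a, mb - a * mt

def monitor(temps, biases, a, b, threshold):
    """Flag sample indices whose temperature-compensated residual exceeds
    the threshold, i.e. candidate time-frequency signal faults."""
    return [i for i, (t, y) in enumerate(zip(temps, biases))
            if abs(y - (a * t + b)) > threshold]
```

Compensating the temperature-driven part of the frequency bias first is what lets a tight threshold on the residual detect genuine faults rather than environmental drift.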
42. A novel improved total variation algorithm for the elimination of scratch-type defects in high-voltage cable cross-sections.
- Author
-
Yu, Aihua, Shan, Lina, Zhu, Wen, Jie, Jing, and Hou, Beiping
- Subjects
CABLES ,COMPUTER vision ,CROSS-sectional imaging ,IMAGE intensifiers ,ALGORITHMS ,PARTIAL discharges - Abstract
In the quality inspection process of high-voltage cables, several commonly used indicators include cable length, insulation thickness, and the number of conductors within the core. Among these factors, the count of conductors holds particular significance as a key determinant of cable quality. Machine vision technology has found extensive application in automatically detecting the number of conductors in cross-sectional images of high-voltage cables. However, the presence of scratch-type defects in cut high-voltage cable cross-sections can significantly compromise the precision of conductor count detection. To address this problem, this paper introduces a novel improved total variation (TV) algorithm, marking the first-ever application of the TV algorithm in this domain. Considering the staircase effect, the direct use of the TV algorithm is prone to cause serious loss of image edge information. The proposed algorithm firstly introduces multimodal features to effectively mitigate the staircase effect. While eliminating scratch-type defects, the algorithm endeavors to preserve the original image's edge information, consequently yielding a noteworthy enhancement in detection accuracy. Furthermore, a dataset was curated, comprising images of cross-sections of high-voltage cables of varying sizes, each displaying an assortment of scratch-type defects. Experimental findings conclusively demonstrate the algorithm's exceptional efficiency in eradicating diverse scratch-type defects within high-voltage cable cross-sections. The average scratch elimination rate surpasses 90%, with an impressive 96.15% achieved on cable sample 4. A series of conducted ablation experiments in this paper substantiate a significant enhancement in cable image quality. Notably, the Edge Preservation Index (EPI) exhibits an improvement of approximately 20%, resulting in a substantial boost to conductor count detection accuracy, thus effectively enhancing the quality of high-voltage cable production. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
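For readers unfamiliar with total variation methods, a generic 1-D ROF-style TV denoiser (not the paper's improved multimodal variant) shows what the regularizer trades off; the smoothing constant `eps` avoids the nondifferentiable absolute value:

```python
def tv_denoise_1d(signal, lam=1.0, step=0.05, iters=300, eps=1e-2):
    """Gradient descent on 0.5*sum((u-f)^2) + lam*sum(sqrt((u[i+1]-u[i])^2 + eps)):
    the data-fidelity term keeps u close to the input f, the TV term penalizes
    oscillation. Generic sketch, not the paper's algorithm."""
    u = list(signal)
    n = len(u)
    for _ in range(iters):
        grad = [u[i] - signal[i] for i in range(n)]   # fidelity gradient
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            g = lam * d / (d * d + eps) ** 0.5        # smoothed TV gradient
            grad[i] -= g
            grad[i + 1] += g
        u = [u[i] - step * grad[i] for i in range(n)]
    return u
```

Because the TV penalty favors piecewise-constant solutions, plain TV produces the staircase effect the abstract mentions; the paper's multimodal features are aimed at suppressing exactly that artifact while keeping edges.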
43. An Improved Sorting Algorithm for Periodic PRI Signals Based on Congruence Transform.
- Author
-
Dong, Huixu, Ge, Yuanzheng, Zhou, Rui, and Wang, Hongyan
- Subjects
WAVELET transforms ,MATHEMATICAL decoupling ,ALGORITHMS ,SIGNALS & signaling - Abstract
Recently, a signal sorting algorithm based on the congruence transform has been proposed, which is effective in dealing with the staggered Pulse Repetition Interval (PRI) signals. It can effectively sort the staggered PRI signals and obtain the sub-PRI sequence directly without sub-PRI ranking, and it is less affected by interfered pulses and pulse loss. Nevertheless, we find that the algorithm causes pseudo-peaks in the remainder histogram when sorting signals such as sliding PRI, sinusoidal PRI, etc. (collectively referred to as periodic PRI signal in this paper) and pseudo-peaks will cause errors in signal sorting. To solve the issue of pseudo-peaks when sorting periodic PRI signals, an improved sorting algorithm based on congruence transform is proposed. According to the analysis of the congruence characteristics of the periodic PRI signal, a novel method is proposed to identify pseudo-peaks based on the histogram peak amplitude and symmetric difference set. The signal sorting algorithm based on congruence transform is improved to achieve a good sorting effect on periodic PRI signals. Simulation experiments demonstrate that the novel algorithm can effectively sort periodic PRI signals and improve P_recall, P_d, and P_f by 6.9%, 5.1%, and 3.2%, respectively, compared to the typical similar algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
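The core congruence idea can be sketched quickly: for a candidate PRI, the pulse times of arrival (TOAs) of a matching train pile up at a few residues modulo that PRI, producing peaks in a remainder histogram (bin width and variable names here are illustrative, not the paper's):

```python
def remainder_histogram(toas, pri, bin_width):
    """Histogram of TOA residues modulo a candidate PRI (congruence
    transform). A stable train concentrates in a few bins; sliding or
    sinusoidal PRI smears across bins, which is where pseudo-peaks arise."""
    nbins = int(pri / bin_width)
    hist = [0] * nbins
    for t in toas:
        hist[int((t % pri) / bin_width) % nbins] += 1
    return hist
```

A staggered train with sub-PRIs summing to `pri` yields one peak per stagger position, which is how the sub-PRI sequence can be read off directly.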
44. A Fast Detection Algorithm for Change Detection in National Forestland "One Map" Based on NLNE Quad-Tree.
- Author
-
Gao, Fei, Su, Xiaohui, Chen, Yuling, Wu, Baoguo, Tian, Yingze, Zhang, Wenjie, and Li, Tao
- Subjects
FORESTS & forestry ,FOREST management ,GEOGRAPHIC information systems ,VECTOR data ,MOUNTAIN forests ,ALGORITHMS - Abstract
The National Forestland "One Map" applies the boundaries and attributes of sub-elements to mountain plots by means of spatial data to achieve digital management of forest resources. The change detection and analysis of forest space and property is the key to determining the change characteristics, evolution trend and management effectiveness of forest land. The existing spatial overlay method, rasterization method, object matching method, etc., cannot meet the requirements of high efficiency and high precision at the same time. In this paper, we investigate a fast algorithm for the detection of changes in "One Map", taking Sichuan Province as an example. The key spatial characteristic extraction method is used to uniquely determine the sub-compartments. We construct an unbalanced quadtree based on the number of maximum leaf node elements (NLNE Quad-Tree) to narrow down the query range of the target sub-compartments and quickly locate the sub-compartments. Based on NLNE Quad-Tree, we establish a change detection model for "One Map" (NQT-FCDM). The results show that the spatial feature combination of barycentric coordinates and area can ensure the spatial uniqueness of 44.45 million sub-compartments in Sichuan Province with 1 m~0.000001 m precision. The NQT-FCDM constructed with 1000–6000 as the maximum number of leaf nodes has the best retrieval efficiency in the range of 100,000–500,000 sub-compartments. The NQT-FCDM shortens the time by about 75% compared with the traditional spatial union analysis method, shortens the time by about 50% compared with the normal quadtree and effectively solves the problem of generating a large amount of intermediate data in the spatial union analysis method. The NQT-FCDM proposed in this paper improves the efficiency of change detection in "One Map" and can be generalized to other industries applying geographic information systems to carry out change detection, providing a basis for the detection of changes in vector spatial data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
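An unbalanced quadtree that splits a leaf only once it exceeds a maximum element count, as in the NLNE idea above, can be sketched as follows (a toy point index, not the paper's sub-compartment structure):

```python
class QuadTree:
    """Point quadtree whose depth is data-driven: a leaf splits only when
    it holds more than max_leaf elements, so dense regions get deep nodes
    and sparse regions stay shallow (the 'unbalanced' NLNE property)."""
    def __init__(self, x0, y0, x1, y1, max_leaf=4):
        self.b = (x0, y0, x1, y1)
        self.max_leaf = max_leaf
        self.points = []
        self.children = None

    def insert(self, p):
        if self.children is not None:
            self._child(p).insert(p)
            return
        self.points.append(p)
        if len(self.points) > self.max_leaf:
            x0, y0, x1, y1 = self.b
            xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
            self.children = [QuadTree(x0, y0, xm, ym, self.max_leaf),
                             QuadTree(xm, y0, x1, ym, self.max_leaf),
                             QuadTree(x0, ym, xm, y1, self.max_leaf),
                             QuadTree(xm, ym, x1, y1, self.max_leaf)]
            pts, self.points = self.points, []
            for q in pts:                     # redistribute on split
                self._child(q).insert(q)

    def _child(self, p):
        x0, y0, x1, y1 = self.b
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        i = (1 if p[0] >= xm else 0) + (2 if p[1] >= ym else 0)
        return self.children[i]

    def find(self, p):
        node = self
        while node.children is not None:      # descend to the one leaf
            node = node._child(p)
        return p in node.points
```

Lookup cost depends only on the local depth, which is what narrows the query range when locating a target sub-compartment among millions.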
45. Fast Decision-Tree-Based Series Partitioning and Mode Prediction Termination Algorithm for H.266/VVC.
- Author
-
Li, Ye, He, Zhihao, and Zhang, Qiuwen
- Subjects
VIDEO compression ,VIDEO coding ,TECHNOLOGICAL innovations ,ALGORITHMS ,MULTIMEDIA systems ,PARALLEL algorithms ,COMPUTATIONAL complexity ,DECISION trees ,RANDOM forest algorithms - Abstract
With the advancement of network technology, multimedia videos have emerged as a crucial channel for individuals to access external information, owing to their realistic and intuitive effects. In the presence of high frame rate and high dynamic range videos, the coding efficiency of high-efficiency video coding (HEVC) falls short of meeting the storage and transmission demands of the video content. Therefore, versatile video coding (VVC) introduces a nested quadtree plus multi-type tree (QTMT) segmentation structure based on the HEVC standard, while also expanding the intra-prediction modes from 35 to 67. While the new technology introduced by VVC has enhanced compression performance, it concurrently introduces a higher level of computational complexity. To enhance coding efficiency and diminish computational complexity, this paper explores two key aspects: coding unit (CU) partition decision-making and intra-frame mode selection. Firstly, to address the flexible partitioning structure of QTMT, we propose a decision-tree-based series partitioning decision algorithm for partitioning decisions. Through concatenating the quadtree (QT) partition division decision with the multi-type tree (MT) division decision, a strategy is implemented to determine whether to skip the MT division decision based on texture characteristics. If the MT partition decision is used, four decision tree classifiers are used to judge different partition types. Secondly, for intra-frame mode selection, this paper proposes an ensemble-learning-based algorithm for mode prediction termination. Through the reordering of complete candidate modes and the assessment of prediction accuracy, the termination of redundant candidate modes is accomplished. Experimental results show that compared with the VVC test model (VTM), the algorithm proposed in this paper achieves an average time saving of 54.74%, while the BDBR only increases by 1.61%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
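A much-simplified stand-in for the texture-based skip strategy: smooth blocks bypass the multi-type-tree decision entirely (the variance threshold and block format are invented here; the paper uses trained decision-tree classifiers):

```python
def texture_variance(block):
    """Variance of a 2-D block of luma samples: a crude texture measure."""
    vals = [v for row in block for v in row]
    mu = sum(vals) / len(vals)
    return sum((v - mu) ** 2 for v in vals) / len(vals)

def skip_mt_decision(block, var_thresh=25.0):
    """Gatekeeper before the MT split check: smooth (low-variance) blocks
    skip it outright; only textured blocks go on to the per-type
    decision-tree classifiers in the paper's series scheme."""
    return texture_variance(block) < var_thresh
```

Serializing the QT decision before the MT decision means the expensive classifiers only run on blocks where a split is plausible, which is where the reported time saving comes from.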
46. Global Maximum Power Point Tracking of Photovoltaic Module Arrays Based on an Improved Intelligent Bat Algorithm.
- Author
-
Chao, Kuei-Hsiang and Bau, Thi Thanh Truc
- Subjects
MAXIMUM power point trackers ,ALGORITHMS ,CLIMATE change ,VOLTAGE - Abstract
In this paper, a method based on an improved intelligent bat algorithm (IIBA) in cooperation with a voltage and current sensor was applied in maximum power point tracking (MPPT) for a photovoltaic module array (PVMA), where the power generation performance of a PVMA was enhanced. Due to the partial shading of the PVMA from climate changes or the surrounding environment, multiple peak values were generated on the power–voltage (P-V) curve, where the conventional MPPT technology could only track the local maximum power point (LMPP), hence the reduction in output power of PVMAs. Therefore, the IIBA-based MPPT was proposed in this paper to solve such issues and to ensure the capability of a PVMA in tracking the global maximum power point (GMPP) and utilization for enhancing the output power of a PVMA. Firstly, the Matlab/Simulink software was used to establish a boost converter model that simulated the actual 4-series–3-parallel PVMA under different shaded conditions, where the P-V curve with 1-peak, 2-peak, 3-peak and 4-peak values were generated. Subsequently, the tracking paces of the conventional bat algorithm (BA) were adjusted according to the gradient of the P-V curve for a PVMA. At the same time, 0.8 times the maximum power point (MPP) voltage V_mp under standard test conditions (STCs) for a PVMA was set as the initial tracking voltage. Lastly, the simulation results proved that under different environmental impacts, the proposed IIBA led to better performances in tracking both dynamic and steady responses. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
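A basic bat algorithm maximizing a two-peak function (a stand-in for a partially shaded P-V curve) illustrates the global search the IIBA builds on; the paper's gradient-adapted tracking paces and 0.8·V_mp initialization are not reproduced here:

```python
import random

def bat_search(f, lo, hi, n=20, iters=100, seed=1):
    """Basic bat algorithm maximizing f on [lo, hi]: each bat is pulled
    toward the best-known position with a random frequency, with occasional
    local random walks around the best; improvements are accepted greedily."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n)]
    vel = [0.0] * n
    best = max(pos, key=f)
    loud, pulse = 0.9, 0.5
    for _ in range(iters):
        for i in range(n):
            freq = rng.random()                        # random pulse frequency
            vel[i] += (best - pos[i]) * freq           # pull toward best
            cand = min(hi, max(lo, pos[i] + vel[i]))
            if rng.random() > pulse:                   # local walk near best
                cand = min(hi, max(lo, best + 0.01 * (hi - lo) * rng.gauss(0, 1)))
            if f(cand) > f(pos[i]) and rng.random() < loud:
                pos[i] = cand
            if f(pos[i]) > f(best):
                best = pos[i]
    return best
```

Because the population is spread over the whole voltage range, the search can escape a local maximum power point that perturb-and-observe MPPT would lock onto.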
47. Partial Discharge Signal Denoising Algorithm Based on Aquila Optimizer–Variational Mode Decomposition and K-Singular Value Decomposition.
- Author
-
Zhong, Jun, Liu, Zhenyu, and Bi, Xiaowen
- Subjects
SIGNAL denoising ,PARTIAL discharges ,HILBERT-Huang transform ,ELECTRIC insulators & insulation ,ALGORITHMS - Abstract
Partial discharge (PD) is a primary factor leading to the deterioration of insulation in electrical equipment. However, it is hard for traditional methods to precisely extract PD signals in increasingly complex engineering environments. This paper proposes a new PD signal denoising method combining Aquila Optimizer–Variational Mode Decomposition (AO-VMD) and K-Singular Value Decomposition (K-SVD) algorithms. Firstly, the AO algorithm optimizes critical parameters of the VMD algorithm. For the PD signal overwhelmed by noise, the AO-VMD algorithm can decompose it and reconstruct it by using kurtosis. In this process, the majority of the noise is removed, and the characteristics of the original signal are shown. Subsequently, the K-SVD algorithm performs sparse decomposition on the signal after AO-VMD, constructs a learned dictionary, and captures the characteristics of the signal for continuous learning and updating. After the dictionary learning is completed, the best matching atoms from the dictionary are selected to precisely reconstruct the original noiseless signal. Finally, the proposed method is compared with three traditional algorithms, Adaptive Ensemble Empirical Mode Decomposition (AEEMD), SVD-VMD, and the Adaptive Wavelet Multilevel Soft Threshold algorithm, on the simulated signal and the actual engineering signal. Both sets of results demonstrate that the algorithm proposed in this paper has superior noise reduction and signal extraction performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
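The kurtosis-guided reconstruction step mentioned above can be illustrated independently of VMD itself: impulsive, PD-like modes have high kurtosis, so modes above a threshold are kept and summed (the threshold value here is illustrative, not from the paper):

```python
def kurtosis(x):
    """Sample kurtosis (non-excess): E[(x - mu)^4] / sigma^4.
    Gaussian noise sits near 3; impulsive content is much higher."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    m4 = sum((v - mu) ** 4 for v in x) / n
    return m4 / (var * var)

def select_modes(modes, threshold=3.0):
    """Keep decomposition modes whose kurtosis exceeds the threshold
    (impulsive, PD-like content) and sum them as the reconstruction."""
    kept = [m for m in modes if kurtosis(m) > threshold]
    n = len(modes[0])
    return [sum(m[i] for m in kept) for i in range(n)]
```

In the paper's pipeline the modes come from AO-tuned VMD and the reconstruction is then refined by K-SVD dictionary learning; this sketch covers only the kurtosis gate.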
48. A Novel IDS with a Dynamic Access Control Algorithm to Detect and Defend Intrusion at IoT Nodes.
- Author
-
Alazab, Moutaz, Awajan, Albara, Alazzam, Hadeel, Wedyan, Mohammad, Alshawi, Bandar, and Alturki, Ryan
- Subjects
INTRUSION detection systems (Computer security) ,ACCESS control ,INTERNET of things ,ALGORITHMS ,FALSE alarms ,MATHEMATICAL analysis - Abstract
The Internet of Things (IoT) is the underlying technology that has enabled connecting daily apparatus to the Internet and enjoying the facilities of smart services. The IoT market is experiencing an impressive 16.7% growth rate and is a nearly USD 300.3 billion market. These eye-catching figures have made it an attractive playground for cybercriminals. IoT devices are built using resource-constrained architecture to offer compact sizes and competitive prices. As a result, integrating sophisticated cybersecurity features is beyond the scope of the computational capabilities of IoT. All of these have contributed to a surge in IoT intrusion. This paper presents an LSTM-based Intrusion Detection System (IDS) with a Dynamic Access Control (DAC) algorithm that not only detects but also defends against intrusion. This novel approach has achieved an impressive 97.16% validation accuracy. Unlike most of the IDSs, the model of the proposed IDS has been selected and optimized through mathematical analysis. Additionally, it boasts the ability to identify a wider range of threats (14 to be exact) compared to other IDS solutions, translating to enhanced security. Furthermore, it has been fine-tuned to strike a balance between accurately flagging threats and minimizing false alarms. Its impressive performance metrics (precision, recall, and F1 score all hovering around 97%) showcase the potential of this innovative IDS to elevate IoT security. The proposed IDS boasts an impressive detection rate, exceeding 98%. This high accuracy instills confidence in its reliability. Furthermore, its lightning-fast response time, averaging under 1.2 s, positions it among the fastest intrusion detection systems available. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
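The abstract gives no internals for the Dynamic Access Control algorithm; one plausible minimal reading, with entirely hypothetical names and constants, is a per-node trust score that detection reports erode until the node is quarantined:

```python
class DynamicAccessControl:
    """Toy DAC table: each node starts with full access; every intrusion
    flagged by the detector lowers its score, and below the threshold
    the node is quarantined. Names and constants are illustrative only."""
    def __init__(self, threshold=0.5, penalty=0.3):
        self.scores = {}
        self.threshold = threshold
        self.penalty = penalty

    def report(self, node_id, is_intrusion):
        s = self.scores.get(node_id, 1.0)
        if is_intrusion:
            s = max(0.0, s - self.penalty)
        self.scores[node_id] = s

    def allowed(self, node_id):
        return self.scores.get(node_id, 1.0) >= self.threshold
```

Coupling access revocation to detector output is what turns a passive IDS into the detect-and-defend loop the abstract claims.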
49. A scalable blockchain based framework for efficient IoT data management using lightweight consensus.
- Author
-
Haque, Ehtisham Ul, Shah, Adil, Iqbal, Jawaid, Ullah, Syed Sajid, Alroobaea, Roobaea, and Hussain, Saddam
- Subjects
DATA management ,INTERNET of things ,NETWORK performance ,BLOCKCHAINS ,SCALABILITY ,ALGORITHMS - Abstract
Recent research has focused on applying blockchain technology to solve security-related problems in Internet of Things (IoT) networks. However, the inherent scalability issues of blockchain technology become apparent in the presence of a vast number of IoT devices and the substantial data generated by these networks. Therefore, in this paper, we use a lightweight consensus algorithm to address these problems. We propose a scalable blockchain-based framework for managing IoT data, catering to a large number of devices. This framework utilizes the Delegated Proof of Stake (DPoS) consensus algorithm to ensure enhanced performance and efficiency in resource-constrained IoT networks. DPoS being a lightweight consensus algorithm leverages a selected number of elected delegates to validate and confirm transactions, thus mitigating the performance and efficiency degradation in the blockchain-based IoT networks. In this paper, we implemented an Interplanetary File System (IPFS) for distributed storage, and Docker to evaluate the network performance in terms of throughput, latency, and resource utilization. We divided our analysis into four parts: Latency, throughput, resource utilization, and file upload time and speed in distributed storage evaluation. Our empirical findings demonstrate that our framework exhibits low latency, measuring less than 0.976 ms. The proposed technique outperforms Proof of Stake (PoS), representing a state-of-the-art consensus technique. We also demonstrate that the proposed approach is useful in IoT applications where low latency or resource efficiency is required. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
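The delegate mechanism at the heart of DPoS can be sketched in a few lines: stake-weighted votes elect a small set of delegates, who then produce blocks in round-robin slots (data shapes are illustrative, not any real chain's API):

```python
def elect_delegates(votes, k):
    """votes: {voter: (candidate, stake)}. Returns the top-k candidates by
    total staked votes (ties broken by name for determinism)."""
    totals = {}
    for candidate, stake in votes.values():
        totals[candidate] = totals.get(candidate, 0) + stake
    return sorted(totals, key=lambda c: (-totals[c], c))[:k]

def producer_for_slot(delegates, slot):
    """Round-robin block-production schedule among elected delegates."""
    return delegates[slot % len(delegates)]
```

Confirmation involves only the k delegates rather than every node, which is why DPoS stays light on resource-constrained IoT hardware.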
50. Image convolution techniques integrated with YOLOv3 algorithm in motion object data filtering and detection.
- Author
-
Cheng, Mai and Liu, Mengyuan
- Subjects
TRACKING algorithms ,FILTERS & filtration ,VIDEO surveillance ,ALGORITHMS ,IMAGE segmentation ,RESEARCH personnel ,JOGGING - Abstract
In order to address the challenges of identifying, detecting, and tracking moving objects in video surveillance, this paper emphasizes image-based dynamic entity detection. It delves into the complexities of numerous moving objects, dense targets, and intricate backgrounds. Leveraging the You Only Look Once (YOLOv3) algorithm framework, this paper proposes improvements in image segmentation and data filtering to address these challenges. These enhancements form a novel multi-object detection algorithm based on an improved YOLOv3 framework, specifically designed for video applications. Experimental validation demonstrates the feasibility of this algorithm, with success rates exceeding 60% for videos such as "jogging", "subway", "video 1", and "video 2". Notably, the detection success rates for "jogging" and "video 1" consistently surpass 80%, indicating outstanding detection performance. Although the accuracy slightly decreases for "Bolt" and "Walking2", success rates still hover around 70%. Comparative analysis with other algorithms reveals that this method's tracking accuracy surpasses that of particle filters, Discriminative Scale Space Tracker (DSST), and Scale Adaptive Multiple Features (SAMF) algorithms, with an accuracy of 0.822. This indicates superior overall performance in target tracking. Therefore, the improved YOLOv3-based multi-object detection and tracking algorithm demonstrates robust filtering and detection capabilities in noise-resistant experiments, making it highly suitable for various detection tasks in practical applications. It can address inherent limitations such as missed detections, false positives, and imprecise localization. These improvements significantly enhance the efficiency and accuracy of target detection, providing valuable insights for researchers in the field of object detection, tracking, and recognition in video surveillance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
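Detection filtering in YOLO-style pipelines typically rests on intersection-over-union and non-maximum suppression; a generic sketch (not the paper's exact filtering scheme) follows:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring boxes,
    drop lower-scored boxes that overlap a kept one above thresh.
    Returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```

This is the standard mechanism behind reducing false positives and duplicate detections in dense multi-object scenes like the surveillance videos described above.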