Search Results (150,847 results)
2. Socio‐technical issues in the platform‐mediated gig economy: A systematic literature review: An Annual Review of Information Science and Technology (ARIST) paper.
- Author
-
Dedema, Meredith and Rosenbaum, Howard
- Subjects
INFORMATION science ,TECHNOLOGY ,CORPORATE culture ,ALGORITHMS ,ECONOMICS - Abstract
The gig economy and gig work have grown quickly in recent years and have drawn much attention from researchers in different fields. Because the platform mediated gig economy is a relatively new phenomenon, studies have produced a range of interesting findings; of interest here are the socio‐technical issues that this work has surfaced. This systematic literature review (SLR) provides a snapshot of a range of socio‐technical issues raised in the last 12 years of literature focused on the platform mediated gig economy. Based on a sample of 515 papers gathered from nine databases in multiple disciplines, 132 were coded that specifically studied the gig economy, gig work, and gig workers. Three main socio‐technical themes were identified: (1) the digital workplace, which includes information infrastructure and digital labor that are related to the nature of gig work and the user agency; (2) algorithmic management, which includes platform governance, performance management, information asymmetry, power asymmetry, and system manipulation, relying on a diverse set of technological tools including algorithms and big data analytics; (3) ethical design, as a relevant value set that gig workers expect from the platform, which includes trust, fairness, equality, privacy, and transparency. A social informatics perspective is used to rethink the relationship between gig workers and platforms, extract the socio‐technical issues noted in prior research, and discuss the underexplored aspects of the platform mediated gig economy. The results draw attention to understudied yet critically important socio‐technical issues in the gig economy that suggest short‐ and long‐term opportunities for future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Evolution of Federated Learning Based on Multi-Objective Optimization [基于多目标优化的联邦学习进化].
- Author
-
胡智勇, 于千城, 王之赐, and 张丽丝
- Subjects
FEDERATED learning ,ALGORITHMS ,PRIVACY - Abstract
Copyright of Application Research of Computers / Jisuanji Yingyong Yanjiu is the property of Application Research of Computers Edition and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
4. Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits.
- Author
-
Limaye, Nutan, Srinivasan, Srikanth, and Tavenas, Sébastien
- Subjects
ALGEBRA ,POLYNOMIALS ,CIRCUIT complexity ,ALGORITHMS ,DIRECTED acyclic graphs ,LOGIC circuits - Abstract
An Algebraic Circuit for a multivariate polynomial P is a computational model for constructing the polynomial P using only additions and multiplications. It is a syntactic model of computation, as opposed to the Boolean Circuit model, and hence lower bounds for this model are widely expected to be easier to prove than lower bounds for Boolean circuits. Despite this, we do not have superpolynomial lower bounds against general algebraic circuits of depth 3 (except over constant-sized finite fields) and depth 4 (over any field other than \(\mathbb{F}_2\)), while constant-depth Boolean circuit lower bounds have been known since the early 1980s. In this paper, we prove the first superpolynomial lower bounds against algebraic circuits of all constant depths over all fields of characteristic 0. We also observe that our superpolynomial lower bound for constant-depth circuits implies the first deterministic sub-exponential time algorithm for solving the Polynomial Identity Testing (PIT) problem for all small-depth circuits using the known connection between algebraic hardness and randomness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. The Space Complexity of Consensus from Swap.
- Author
-
Ovens, Sean
- Subjects
ALGORITHMS ,GENERALIZATION - Abstract
Nearly thirty years ago, it was shown that \(\Omega(\sqrt{n})\) read/write registers are needed to solve randomized wait-free consensus among n processes. This lower bound was improved to n registers in 2018, which exactly matches known algorithms. The \(\Omega(\sqrt{n})\) space complexity lower bound actually applies to a class of objects called historyless objects, which includes registers, test-and-set objects, and readable swap objects. However, every known n-process obstruction-free consensus algorithm from historyless objects uses \(\Omega(n)\) objects. In this paper, we give the first \(\Omega(n)\) space complexity lower bounds on consensus algorithms for two kinds of historyless objects. First, we show that any obstruction-free consensus algorithm from swap objects uses at least n-1 objects. More generally, we prove that any obstruction-free k-set agreement algorithm from swap objects uses at least \(\lceil \frac{n}{k}\rceil - 1\) objects. The k-set agreement problem is a generalization of consensus in which processes agree on no more than k different output values. This is the first non-constant lower bound on the space complexity of solving k-set agreement with swap objects when k > 1. We also present an obstruction-free k-set agreement algorithm from n-k swap objects, which exactly matches our lower bound when k = 1. Second, we show that any obstruction-free binary consensus algorithm from readable swap objects with domain size b uses at least \(\frac{n-2}{3b+1}\) objects. When b is a constant, this asymptotically matches the best known obstruction-free consensus algorithms from readable swap objects with unbounded domains. Since any historyless object can be simulated by a readable swap object with the same domain, our results imply that any obstruction-free consensus algorithm from historyless objects with domain size b uses at least \(\frac{n-2}{3b+1}\) objects. For b = 2, we show a slightly better lower bound of n-2.
There is an obstruction-free binary consensus algorithm using 2n-1 readable swap objects with domain size 2, asymptotically matching our lower bound. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
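The object types in this abstract can be made concrete. The following is a generic sketch of swap and readable swap objects (the lock merely simulates the atomicity the shared-memory model assumes); it illustrates the primitives, not the paper's construction:

```python
import threading

class SwapObject:
    """A swap object: swap(v) atomically writes v and returns the old value."""
    def __init__(self, initial=None):
        self._value = initial
        self._lock = threading.Lock()  # simulates the primitive's atomicity

    def swap(self, new_value):
        with self._lock:
            old, self._value = self._value, new_value
            return old

class ReadableSwapObject(SwapObject):
    """A readable swap object additionally supports an atomic read."""
    def read(self):
        with self._lock:
            return self._value
```

Such objects are "historyless" because their state depends only on the last non-trivial operation applied, which is why registers, test-and-set objects, and swap objects all fall in the same class.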
6. Digitalized Control Algorithm of Bridgeless Totem-Pole PFC with a Simple Control Structure Based on the Phase Angle.
- Author
-
Lee, Gi-Young, Park, Hae-Chan, Ji, Min-Woo, and Kim, Rae-Young
- Subjects
ELECTRIC current rectifiers ,ELECTRONIC paper ,PHASE-locked loops ,ALGORITHMS ,ANGLES ,VOLTAGE - Abstract
Compared to the conventional boost power factor correction (PFC) converter, a totem-pole bridgeless PFC has high efficiency because it does not have an input diode rectifier stage, but a current spike may occur when the polarity of the grid voltage changes. This paper proposes a digital control algorithm for bridgeless totem-pole PFC with a simple control structure based on the phase angle of grid voltage. The proposed algorithm has a PI-based double-loop control structure and performs DC-link voltage and input inductor current control. Rectifying switches operate based on the proposed rectification algorithm using phase angle information calculated through a single-phase phase-locked loop (PLL) to prevent current spikes. The feed-forward duty ratio value is calculated according to the polarity of the grid voltage and added to the double-loop controller to perform appropriate power factor control. The performance and feasibility of the proposed control algorithm are verified through a 3 kW hardware prototype. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
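The double-loop structure described in the abstract can be sketched as two cascaded PI controllers with a polarity-based feed-forward term. All gains, signal names, and the feed-forward form below are illustrative assumptions for the sketch, not values or equations from the paper:

```python
class PI:
    """Discrete-time PI controller."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

def control_step(v_ref, v_dc, i_in, v_grid, voltage_pi, current_pi, v_dc_link):
    # Outer loop: DC-link voltage error -> inductor current reference
    i_ref = voltage_pi.step(v_ref - v_dc)
    # Inner loop: current error -> duty-ratio correction
    duty = current_pi.step(i_ref - i_in)
    # Feed-forward duty computed from the instantaneous grid voltage,
    # whose sign (polarity) also selects the active rectifying switch
    duty_ff = 1.0 - abs(v_grid) / v_dc_link
    return max(0.0, min(1.0, duty + duty_ff))
```

In the paper the polarity decision is taken from the phase angle produced by a single-phase PLL rather than from the raw voltage sign, precisely to avoid current spikes near the zero crossings.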
7. Performance Evaluation of the Extractive Methods in Automatic Text Summarization Using Medical Papers.
- Author
-
Kus, Anil and Aci, Cigdem Inan
- Subjects
PERFORMANCE evaluation ,TEXT summarization ,MEDICAL sciences ,ALGORITHMS ,SEMANTICS - Abstract
Copyright of Gazi Journal of Engineering Sciences (GJES) / Gazi Mühendislik Bilimleri Dergisi is the property of Gazi Journal of Engineering Sciences and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
8. Special issue "Discrete optimization: Theory, algorithms and new applications".
- Author
-
Werner, Frank
- Subjects
MATHEMATICAL optimization ,METAHEURISTIC algorithms ,ONLINE algorithms ,LINEAR matrix inequalities ,ALGORITHMS ,ROBUST stability analysis ,NONLINEAR integral equations - Abstract
This document is an editorial for a special issue of the journal AIMS Mathematics on the topic of discrete optimization. The issue includes 21 papers covering a range of subjects, including molecular trees, network systems, variational inequality problems, scheduling, image restoration, spectral clustering, integral equations, convex functions, graph products, optimization algorithms, air quality prediction, humanitarian planning, inertial methods, neural networks, transportation problems, emotion identification, fixed-point problems, structural engineering design, single machine scheduling, and ensemble learning. The papers present new theoretical results, algorithms, and applications in these areas. The guest editor expresses gratitude to the journal staff and reviewers and hopes that readers will find inspiration for their own research. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
9. Taming Algorithmic Priority Inversion in Mission-Critical Perception Pipelines.
- Author
-
Liu, Shengzhong, Yao, Shuochao, Fu, Xinzhe, Tabish, Rohan, Yu, Simon, Bansal, Ayoosh, Yun, Heechul, Sha, Lui, and Abdelzaher, Tarek
- Subjects
ALGORITHMS ,SYSTEMS design ,CYBER physical systems ,COMPUTER scheduling ,ARTIFICIAL intelligence ,ARTIFICIAL neural networks ,FIRST in, first out (Queuing theory) - Abstract
The paper discusses algorithmic priority inversion in mission-critical machine inference pipelines used in modern neural-network-based perception subsystems and describes a solution to mitigate its effect. In general, priority inversion occurs in computing systems when computations that are "less important" are performed together with or ahead of those that are "more important." Significant priority inversion occurs in existing machine inference pipelines when they do not differentiate between critical and less critical data. We describe a framework to resolve this problem and demonstrate that it improves a perception system's ability to react to critical inputs, while at the same time reducing platform cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
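The remedy the abstract describes, differentiating critical from less critical data instead of processing everything in FIFO order, can be illustrated with a two-level priority queue; the binary critical/non-critical split is an assumption of this sketch:

```python
import heapq

class CriticalityQueue:
    """Dequeues critical inputs before less critical ones, avoiding the
    priority inversion a plain FIFO queue would introduce."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # FIFO tie-break within the same priority level

    def put(self, item, critical):
        priority = 0 if critical else 1
        heapq.heappush(self._heap, (priority, self._seq, item))
        self._seq += 1

    def get(self):
        return heapq.heappop(self._heap)[2]
```

In a perception pipeline, "critical" could mean image regions containing obstacles on the vehicle's path, while background regions wait.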
10. A review paper of optimal resource allocation algorithm in cloud environment.
- Author
-
Patadiya, Namrata and Bhatt, Nirav
- Subjects
RESOURCE allocation ,LITERATURE reviews ,SERVICE level agreements ,ALGORITHMS ,ELECTRONIC data processing ,CLOUD computing - Abstract
Cloud computing has become a popular approach for processing data and running computationally expensive services on a pay-as-you-go basis. Due to the ever-increasing requirement for cloud-based apps, appropriately allocating resources according to user requests while meeting service-level agreements between customers and service providers has become increasingly complex. An efficient and versatile resource allocation method is required to properly deploy these assets and meet user needs. The technique of distributing resources has become more arduous as user demand has increased, and designing optimal allocation strategies is a key area of research. In this paper, a literature review on proposed dynamic resource allocation approaches is introduced. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. Efficient and Effective Academic Expert Finding on Heterogeneous Graphs through (k, P)-Core based Embedding.
- Author
-
YUXIANG WANG, JUN LIU, XIAOLIANG XU, XIANGYU KE, TIANXING WU, and XIAOXUAN GOU
- Subjects
COMMUNITIES ,SEMANTICS ,ALGORITHMS - Abstract
Expert finding is crucial for a wealth of applications in both academia and industry. Given a user query and a trove of academic papers, expert finding aims at retrieving the most relevant experts for the query from the academic papers. Existing studies focus on embedding-based solutions that consider academic papers’ textual semantic similarities to a query via document representation and extract the top-n experts from the most similar papers. Beyond implicit textual semantics, however, papers’ explicit relationships (e.g., co-authorship) in a heterogeneous graph (e.g., DBLP) are critical for expert finding, because they help improve the representation quality. Despite their importance, the explicit relationships of papers generally have been ignored in the literature. In this article, we study expert finding on heterogeneous graphs by considering both the explicit relationships and implicit textual semantics of papers in one model. Specifically, we define the cohesive (k, P)-core community of papers w.r.t. a meta-path P (i.e., relationship) and propose a (k, P)-core based document embedding model to enhance the representation quality. Based on this, we design a proximity graph-based index (PG-Index) of papers and present a threshold algorithm (TA)-based method to efficiently extract top-n experts from papers returned by PG-Index. We further optimize our approach in two ways: (1) we boost effectiveness by considering the (k, P)-core community of experts and the diversity of experts’ research interests, to achieve high-quality expert representation from paper representation; and (2) we streamline expert finding, going from “extract top-n experts from top-m (m > n) semantically similar papers” to “directly return top-n experts”. The process of returning a large number of top-m papers as intermediate data is avoided, thereby improving efficiency. Extensive experiments using real-world datasets demonstrate our approach’s superiority. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
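The threshold algorithm (TA) family the abstract builds on can be sketched generically: scan per-attribute score lists in sorted order, fetch each new item's full aggregated score by random access, and stop as soon as no unseen item can still enter the top-n. This generic sketch stands in for, and is far simpler than, the paper's PG-Index-backed method:

```python
import heapq

def threshold_topn(score_lists, n):
    """score_lists: dict attr -> list of (item, score), sorted by score desc.
    Aggregation is the sum of an item's scores across attributes."""
    lookup = {a: dict(lst) for a, lst in score_lists.items()}
    best = {}   # item -> aggregated score (computed once per item)
    topn = []   # min-heap of (score, item), at most n entries
    depth = 0
    max_len = max(len(lst) for lst in score_lists.values())
    while depth < max_len:
        last_seen = []
        for attr, lst in score_lists.items():
            if depth >= len(lst):
                continue
            item, score = lst[depth]
            last_seen.append(score)
            if item not in best:
                # random access: fetch this item's score in every list
                best[item] = sum(lookup[a].get(item, 0.0) for a in score_lists)
                heapq.heappush(topn, (best[item], item))
                if len(topn) > n:
                    heapq.heappop(topn)
        depth += 1
        threshold = sum(last_seen)  # best score any unseen item could reach
        if len(topn) == n and topn[0][0] >= threshold:
            break  # no unseen item can beat the current top-n
    return sorted(topn, reverse=True)
```

The early-stop condition is what lets the paper return top-n experts directly instead of materialising a large intermediate list of top-m papers.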
12. A simple one-electron expression for electron rotational factors.
- Author
-
Qiu, Tian, Bhati, Mansi, Tao, Zhen, Bian, Xuezhi, Rawlinson, Jonathan, Littlejohn, Robert G., and Subotnik, Joseph E.
- Subjects
ELECTRONS ,ALGORITHMS ,WISHES ,MATRICES (Mathematics) - Abstract
Within the context of fewest-switch surface hopping (FSSH) dynamics, one often wishes to remove the angular component of the derivative coupling between states J and K . In a previous set of papers, Shu et al. [J. Phys. Chem. Lett. 11, 1135–1140 (2020)] posited one approach for such a removal based on direct projection, while we isolated a second approach by constructing and differentiating a rotationally invariant basis. Unfortunately, neither approach was able to demonstrate a one-electron operator O ̂ whose matrix element J O ̂ K was the angular component of the derivative coupling. Here, we show that a one-electron operator can, in fact, be constructed efficiently in a semi-local fashion. The present results yield physical insight into designing new surface hopping algorithms and are of immediate use for FSSH calculations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Committee-Based Blockchains as Games between Opportunistic Players and Adversaries.
- Author
-
Amoussou-Guenou, Yackolley, Biais, Bruno, Potop-Butucaru, Maria, and Tucci-Piergiovanni, Sara
- Subjects
BLOCKCHAINS ,COMMITTEES ,GAMES ,COMPUTER network protocols ,ALGORITHMS - Abstract
We study consensus in a protocol capturing in a simplified manner the major features of the majority of Proof of Stake blockchains. A committee is formed; one member proposes a block; and the others can check its validity and vote for it. Blocks with a majority of votes are produced. When an invalid block is produced, the stakes of the members who voted for it are "slashed." Profit-maximizing members interact with adversaries seeking to disrupt consensus. When slashing is limited, free-riding and moral-hazard lead to invalid blocks in equilibrium. We propose a protocol modification producing only valid blocks in equilibrium. Authors have furnished an Internet Appendix , which is available on the Oxford University Press Web site next to the link to the final published paper online. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Determining the Moho topography using an improved inversion algorithm: a case study from the South China Sea.
- Author
-
Zhang, Hui, Yu, Hangtao, Xu, Chuang, Li, Rui, Bie, Lu, He, Qingyin, Liu, Yiqi, Lu, Jinsong, Xiao, Yinan, Lyu, Yang, Eldosouky, Ahmed M., and Loureiro, Afonso
- Subjects
MOHOROVICIC discontinuity ,OPTIMIZATION algorithms ,TOPOGRAPHY ,ALGORITHMS - Abstract
The Parker-Oldenburg method, as a classical frequency-domain algorithm, has been widely used in Moho topographic inversion. The method has two indispensable hyperparameters, which are the Moho density contrast and the average Moho depth. Accurate hyperparameters are important prerequisites for inversion of fine Moho topography. However, limited by the nonlinear terms, the hyperparameters estimated by previous methods have obvious deviations. For this reason, this paper proposes a new method to improve the existing Parker-Oldenburg method by taking advantage of the invasive weed optimization algorithm in estimating hyperparameters. The synthetic test results of the new method show that, compared with the trial and error method and the linear regression method, the new method estimates the hyperparameters more accurately and with excellent computational efficiency, which lays the foundation for the inversion of more accurate Moho topography. In practice, the method is applied to the Moho topographic inversion in the South China Sea. With the constraints of available seismic data, the crust-mantle density contrast and the average Moho depth in the South China Sea are determined to be 0.535 g/cm³ and 21.63 km, respectively, and the Moho topography of the South China Sea is inverted based on this. The results show that the Moho depth in the study area ranges from 5.7 km to 32.3 km, with pronounced undulations. The shallowest part of the Moho topography is mainly located in the southern part of the Southwestern sub-basin and the southern part of the Manila Trench, with a depth of about 6 km. Compared with the CRUST 1.0 model and the model calculated by the improved Bott's method, the RMS difference between the Moho model of this paper and the seismic points is smaller, which proves that the method in this paper has some advantages in Moho topographic inversion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
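The invasive weed optimization step used here for hyperparameter estimation can be sketched in one dimension: each "weed" spawns seeds in proportion to its fitness rank, dispersal shrinks over the iterations, and competitive exclusion keeps only the best plants. The population sizes, seed counts, and dispersal schedule below are illustrative assumptions, not the paper's settings:

```python
import random

def iwo_minimize(f, lo, hi, iters=60, p_init=5, p_max=15,
                 s_min=1, s_max=4, sigma_init=1.0, sigma_final=0.01, m=2):
    """Minimise f on [lo, hi] with a basic 1-D invasive weed optimization loop."""
    pop = [random.uniform(lo, hi) for _ in range(p_init)]
    for it in range(iters):
        # Spatial dispersal: step size shrinks as the search matures
        sigma = ((iters - it) / iters) ** m * (sigma_init - sigma_final) + sigma_final
        ranked = sorted(pop, key=f)
        seeds = []
        for rank, weed in enumerate(ranked):
            # Fitter weeds (lower f) reproduce more
            n_seeds = round(s_max - (s_max - s_min) * rank / max(len(ranked) - 1, 1))
            for _ in range(n_seeds):
                seeds.append(min(hi, max(lo, weed + random.gauss(0.0, sigma))))
        # Competitive exclusion: only the best p_max plants survive
        pop = sorted(pop + seeds, key=f)[:p_max]
    return min(pop, key=f)
```

In the paper's setting, f would measure the misfit between the Moho depths predicted by the Parker-Oldenburg inversion (for a candidate density contrast and average depth) and the seismic control points.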
15. A fully-automated paper ECG digitisation algorithm using deep learning.
- Author
-
Wu, Huiyi, Patel, Kiran Haresh Kumar, Li, Xinyang, Zhang, Bowen, Galazis, Christoforos, Bajaj, Nikesh, Sau, Arunashis, Shi, Xili, Sun, Lin, Tao, Yanda, Al-Qaysi, Harith, Tarusan, Lawrence, Yasmin, Najira, Grewal, Natasha, Kapoor, Gaurika, Waks, Jonathan W., Kramer, Daniel B., Peters, Nicholas S., and Ng, Fu Siong
- Subjects
DEEP learning ,ELECTROCARDIOGRAPHY ,ELECTRONIC paper ,ATRIAL fibrillation ,ALGORITHMS ,HEART failure ,HEART rate monitors - Abstract
There is increasing focus on applying deep learning methods to electrocardiograms (ECGs), with recent studies showing that neural networks (NNs) can predict future heart failure or atrial fibrillation from the ECG alone. However, large numbers of ECGs are needed to train NNs, and many ECGs are currently only in paper format, which are not suitable for NN training. We developed a fully-automated online ECG digitisation tool to convert scanned paper ECGs into digital signals. Using automated horizontal and vertical anchor point detection, the algorithm automatically segments the ECG image into separate images for the 12 leads and a dynamical morphological algorithm is then applied to extract the signal of interest. We then validated the performance of the algorithm on 515 digital ECGs, of which 45 were printed, scanned and redigitised. The automated digitisation tool achieved 99.0% correlation between the digitised signals and the ground truth ECG (n = 515 standard 3-by-4 ECGs) after excluding ECGs with overlap of lead signals. Without exclusion, the average correlation ranged from 90 to 97% across the leads on all 3-by-4 ECGs. There was a 97% correlation for 12-by-1 and 3-by-1 ECG formats after excluding ECGs with overlap of lead signals. Without exclusion, the average correlation of some leads in 12-by-1 ECGs was 60–70% and the average correlation of 3-by-1 ECGs was 80–90%. For ECGs that were printed, scanned, and redigitised, our tool achieved 96% correlation with the original signals. We have developed and validated a fully-automated, user-friendly, online ECG digitisation tool. Unlike other available tools, this does not require any manual segmentation of ECG signals. Our tool can facilitate the rapid and automated digitisation of large repositories of paper ECGs to allow them to be used for deep learning projects. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
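The signal-extraction step can be illustrated with a naive column-wise trace reader over a binarised lead image; the paper's dynamical morphological algorithm is far more sophisticated (it must handle gridlines and overlapping leads), and the calibration constant here is an assumption:

```python
def extract_trace(binary_image, mv_per_pixel=0.01, baseline_row=None):
    """binary_image: list of rows, 1 = ink, 0 = paper.
    Returns one amplitude per column (the mean ink row, converted to mV)."""
    n_rows = len(binary_image)
    n_cols = len(binary_image[0])
    if baseline_row is None:
        baseline_row = n_rows // 2  # assume the baseline sits mid-image
    signal = []
    for col in range(n_cols):
        ink_rows = [r for r in range(n_rows) if binary_image[r][col]]
        if not ink_rows:
            signal.append(None)  # gap in the trace; real tools interpolate
            continue
        centre = sum(ink_rows) / len(ink_rows)
        # Image rows grow downwards, so a positive deflection is above baseline
        signal.append((baseline_row - centre) * mv_per_pixel)
    return signal
```

In a real tool the mV-per-pixel scale would be read off the printed calibration pulse rather than assumed.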
16. Explainable Rules and Heuristics in AI Algorithm Recommendation Approaches--A Systematic Literature Review and Mapping Study.
- Author
-
García-Peñalvo, Francisco José, Vázquez-Ingelmo, Andrea, and García-Holgado, Alicia
- Subjects
ARTIFICIAL intelligence ,LITERATURE reviews ,SOFTWARE engineering ,ALGORITHMS ,HEURISTIC ,SOFTWARE engineers - Abstract
The exponential use of artificial intelligence (AI) to solve and automate complex tasks has catapulted its popularity, generating some challenges that need to be addressed. While AI is a powerful means to discover interesting patterns and obtain predictive models, the use of these algorithms comes with a great responsibility, as an incomplete or unbalanced set of training data or an improper interpretation of the models' outcomes could result in misleading conclusions that ultimately could become very dangerous. For these reasons, it is important to rely on expert knowledge when applying these methods. However, not every user can count on this specific expertise; non-AI-expert users could also benefit from applying these powerful algorithms to their domain problems, but they need basic guidelines to get the most out of AI models. The goal of this work is to present a systematic review of the literature to analyze studies whose outcomes are explainable rules and heuristics to select suitable AI algorithms given a set of input features. The systematic review follows the methodology proposed by Kitchenham and other authors in the field of software engineering. As a result, 9 papers that tackle AI algorithm recommendation through tangible and traceable rules and heuristics were collected. The reduced number of retrieved papers suggests a lack of reporting of explicit rules and heuristics when testing the suitability and performance of AI algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
17. RF-KELM indoor positioning algorithm based on WiFi RSS fingerprint.
- Author
-
Hou, Bingnan and Wang, Yanchun
- Subjects
HUMAN fingerprints ,MACHINE learning ,ALGORITHMS ,FINGERPRINT databases ,SIGNAL processing ,ELECTRONIC data processing - Abstract
WiFi-based fingerprint indoor positioning technology has attracted wide attention, but it lacks robustness to signal changes, while positioning services require fast and accurate position estimation. Therefore, a random forest-kernel extreme learning machine (RF-KELM) positioning algorithm with good comprehensive performance is proposed in this paper. The algorithm comprises an offline phase and an online phase. In the offline phase, the original WiFi fingerprint data is first transformed into a form more suitable for positioning. Then, access point (AP) selection is performed on the fingerprint database, which contains many useless APs, using an RF that can evaluate the importance of features. Finally, the KELM is trained on the sub-database that has undergone data transformation and AP selection. In the online phase, the received signal is first processed, and the trained KELM is then used to predict the position from the processed signal. The performance of the proposed RF-KELM positioning algorithm is thoroughly tested on a publicly available dataset, and the experimental results demonstrate that the proposed algorithm not only has high positioning accuracy and robustness, but also takes only 0.08 s per online position estimate. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
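The KELM stage admits a compact closed-form sketch: training solves (I/C + K)β = t for the output weights, and prediction is a kernel row times β. The RF-based AP selection is omitted here, and the RBF kernel and hyperparameter values are illustrative assumptions, not the paper's configuration:

```python
import math

def rbf_kernel(a, b, gamma=1.0):
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

class KELM:
    """Kernel extreme learning machine: beta = (I/C + K)^-1 t."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, t):
        self.X = X
        # Kernel matrix with ridge regularisation I/C on the diagonal
        K = [[rbf_kernel(a, b, self.gamma) + (1.0 / self.C if i == j else 0.0)
              for j, b in enumerate(X)] for i, a in enumerate(X)]
        self.beta = solve(K, t)

    def predict(self, x):
        return sum(b * rbf_kernel(x, xi, self.gamma)
                   for b, xi in zip(self.beta, self.X))
```

In the positioning setting, X would hold RSS fingerprints over the selected APs and t the corresponding coordinates (one KELM per coordinate axis in this scalar-output sketch).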
18. Research on 3D point cloud alignment algorithm based on SHOT features.
- Author
-
Fu, Zheng, Zhang, Enzhong, Sun, Ruiyang, Zang, Jiaran, and Zhang, Wei
- Subjects
POINT cloud ,ALGORITHMS ,FEATURE extraction - Abstract
To overcome the traditional Iterative Closest Point (ICP) algorithm's requirement for a good initial point cloud position, in this paper we propose a point cloud registration method based on normal vector and directional histogram (SHOT) features. Firstly, a hybrid filtering method based on the voxel idea is proposed and verified using the measured point cloud data, obtaining noise removal rates of 97.5%, 97.8%, and 93.8%. Secondly, in terms of feature point extraction, the original algorithm is optimized, and the optimized algorithm can better extract the missing part of the point cloud. Finally, a fine alignment method based on normal vector and directional histogram (SHOT) features is proposed, and the improved algorithm is compared with the existing algorithm. Taking the Stanford University point cloud data and the self-measured point cloud data as examples, the iteration-error plots show that the improved method reduces the number of iterations by 40.23% and 37.62%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
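The transform-estimation step inside each ICP-style fine-alignment iteration has a closed form; the 2-D version (the planar Kabsch solution) is shown below. This sketch covers only that generic step, not the SHOT-feature correspondence or filtering stages the paper adds:

```python
import math

def best_rigid_transform_2d(src, dst):
    """Closed-form least-squares rotation + translation mapping the paired
    2-D points src onto dst (the estimation step of one ICP iteration)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    dot = cross = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy
        bx, by = u - cdx, v - cdy
        dot += ax * bx + ay * by      # sum of a . b over centred pairs
        cross += ax * by - ay * bx    # sum of a x b over centred pairs
    theta = math.atan2(cross, dot)    # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)    # translation aligning the centroids
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty
```

In 3-D the same step is solved with an SVD of the cross-covariance matrix; ICP alternates this estimation with nearest-neighbour correspondence until the error converges.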
19. Study on tiered storage algorithm based on heat correlation of astronomical data.
- Author
-
Ye, Xin-Chen, Zhang, Hai-Long, Wang, Jie, Zhang, Ya-Zhou, Du, Xu, Wu, Han, and Riccio, Giuseppe
- Subjects
RADIO telescopes ,GEODETIC astronomy ,PULSAR detection ,ELECTRONIC data processing ,ALGORITHMS ,CLOUD storage - Abstract
With the surge in astronomical data volume, modern astronomical research faces significant challenges in data storage, processing, and access. The I/O bottleneck issue in astronomical data processing is particularly prominent, limiting the efficiency of data processing. To address this issue, this paper proposes a tiered storage algorithm based on the access characteristics of astronomical data. The C4.5 decision tree algorithm is employed as the foundation to implement an astronomical data access correlation algorithm. Additionally, a data copy migration strategy is designed based on tiered storage technology to achieve efficient data access. Preprocessing tests were conducted on 418 GB of NSRT (Nanshan Radio Telescope) formaldehyde spectral line data, showcasing that tiered storage can potentially reduce data processing time by up to 38.15%. Similarly, utilizing 802.2 GB of data from FAST (Five-hundred-meter Aperture Spherical radio Telescope) observations for pulsar search data processing tests, the tiered storage approach demonstrated a maximum reduction of 29.00% in data processing time. In concurrent testing of data processing workflows, the proposed astronomical data heat correlation algorithm in this paper achieved an average reduction of 17.78% in data processing time compared to centralized storage. Furthermore, in comparison to traditional heat algorithms, it reduced data processing time by 5.15%. The effectiveness of the proposed algorithm is positively correlated with the associativity between the algorithm and the processed data. The tiered storage algorithm based on the characteristics of astronomical data proposed in this paper is poised to provide algorithmic references for large-scale data processing in the field of astronomy in the future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Research on fabric surface defect detection algorithm based on improved Yolo_v4.
- Author
-
Li, Yuanyuan, Song, Liyuan, Cai, Yin, Fang, Zhijun, and Tang, Ming
- Subjects
SURFACE defects ,FEATURE extraction ,ALGORITHMS ,INDUSTRIAL sites ,TEXTILES ,PROBLEM solving - Abstract
In industry, defect classification and defect localization are important parts of a defect detection system. However, existing studies focus on only one of these tasks, and it is difficult to ensure the accuracy of both. This paper proposes a defect detection system based on improved Yolo_v4, which greatly improves the detection of minor defects. To address the strong subjectivity of the K_Means algorithm in clustering prior anchors, the paper applies the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm to determine the number of anchors. To solve the problem of the low detection rate of small targets caused by the insufficient reuse of low-level features in the CSPDarknet53 feature extraction network, this paper proposes an ECA-DenseNet-BC-121 feature extraction network to improve it. A Dual Channel Feature Enhancement (DCFE) module is also proposed to mitigate the local information loss and gradient propagation obstruction caused by quad-chain convolution in PANet, improving the robustness of the model. The experimental results on the fabric surface defect detection datasets show that the mAP of the improved Yolo_v4 is 98.97%, which is 7.67% higher than SSD, 3.75% higher than Faster_RCNN, 10.82% higher than Yolo_v4 tiny, and 5.35% higher than Yolo_v4, and the detection speed reaches 39.4 fps. It can meet the real-time monitoring needs of industrial sites. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Unmanned Aerial Vehicles General Aerial Person-Vehicle Recognition Based on Improved YOLOv8s Algorithm.
- Author
-
Zhijian Liu
- Subjects
DRONE aircraft ,AERIAL photography ,FEATURE extraction ,ALGORITHMS ,REMOTELY piloted vehicles - Abstract
Considering the variations in imaging sizes of unmanned aerial vehicles (UAVs) at different aerial photography heights, as well as the influence of factors such as light and weather, which can result in missed detection and false detection by the model, this paper presents a comprehensive detection model based on the improved lightweight You Only Look Once version 8s (YOLOv8s) algorithm used in natural light and infrared scenes (L_YOLO). The algorithm proposes a special feature pyramid network (SFPN) structure and substitutes most of the neck feature extraction module with the Special deformable convolution feature extraction module (SDCN). Moreover, the model undergoes pruning to eliminate redundant channels. Finally, the non-maximum suppression algorithm of intersection-union ratio based on minimum point distance (MPDIOU_NMS) has been integrated to eliminate redundant detection boxes, and a comprehensive validation has been conducted using the infrared aerial dataset and the Visdrone2019 dataset. The comprehensive experimental results demonstrate that when the number of parameters and floating-point operations is reduced by 30% and 20%, respectively, there is a 1.2% increase in mean average precision at a threshold of 0.5 (mAP(0.5)) and a 4.8% increase in mAP(0.5:0.95) on the infrared dataset. Finally, the mAP on the Visdrone2019 dataset has experienced an average increase of 12.4%. The accuracy and recall rates have seen respective increases of 9.2% and 3.6%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
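The MPDIoU score used by the MPDIoU_NMS step above can be sketched as follows, assuming the common formulation: plain IoU penalised by the squared distances between the two boxes' corresponding corners, normalised by the image diagonal. The boxes and image size are hypothetical.

```python
def mpd_iou(box_a, box_b, img_w, img_h):
    """MPDIoU for boxes given as (x1, y1, x2, y2): IoU minus normalised
    squared distances between top-left and bottom-right corner pairs."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # plain IoU
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    iou = inter / union if union else 0.0
    # corner-distance penalties, normalised by the squared image diagonal
    diag2 = img_w ** 2 + img_h ** 2
    d1 = (ax1 - bx1) ** 2 + (ay1 - by1) ** 2   # top-left corners
    d2 = (ax2 - bx2) ** 2 + (ay2 - by2) ** 2   # bottom-right corners
    return iou - d1 / diag2 - d2 / diag2
```

In the NMS variant, lower-scoring boxes whose MPDIoU with an already-kept box exceeds a threshold are suppressed; identical boxes score 1.0, while well-separated boxes go negative, so the penalty discriminates between overlapping duplicates even when their plain IoU ties.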
22. Face Verification Algorithms for UAV Applications: An Empirical Comparative Analysis.
- Author
-
Diez-Tomillo, Julio, Alcaraz-Calero, Jose M., and Qi Wang
- Subjects
RESCUE work ,ALGORITHMS ,PUBLIC safety ,COMPUTER vision ,PUBLIC administration ,DRONE aircraft - Abstract
Unmanned Aerial Vehicles (UAVs) are revolutionising diverse computer vision use case domains, from public safety surveillance to Search and Rescue (SAR) and other emergency management and disaster relief operations. The growing need for accurate face verification algorithms has prompted an exploration of synergies between UAVs and face verification, promising cost-effective, wide-area, non-intrusive person verification. Real-world human-centric use cases such as a "Drone Guard Angel" for vulnerable people can contribute to public safety management and offload significant police resources. These scenarios demand efficient face verification to correctly distinguish end users for authentication, authorisation and customised services. This paper investigates the suitability of existing solutions and analyses five state-of-the-art candidate face verification algorithms. Informed by the advantages and disadvantages of existing solutions, the paper proposes an extended dataset and a refined face verification pipeline. It then empirically evaluates these algorithms using the proposed pipeline and dataset in terms of inference times and the distribution of similarity indexes, and provides essential guidance for algorithm selection and deployment in UAV-based applications. Two candidate algorithms, ArcFace and FaceNet512, emerged as the top performers; the choice between them will depend on the specific use case requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
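The verification decision underlying the similarity-index distributions discussed above can be sketched as a cosine-similarity threshold on face embeddings. This is a generic illustration, not the paper's pipeline; the 0.6 threshold and the toy embeddings are hypothetical, and deployments calibrate the threshold from the observed similarity distribution.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(emb_enrolled, emb_probe, threshold=0.6):
    """Accept the probe face if its embedding is close enough to the
    enrolled identity's embedding (threshold is illustrative)."""
    return cosine_similarity(emb_enrolled, emb_probe) >= threshold
```

Models such as ArcFace are trained so that same-identity embeddings cluster tightly in angle, which is why a single cosine threshold separates the genuine and impostor similarity distributions.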
23. Visual recognition and location algorithm based on optimized YOLOv3 detector and RGB depth camera.
- Author
-
He, Bin, Qian, Shusheng, and Niu, Yongchao
- Subjects
DETECTORS ,DIAMETER ,TOMATOES ,TRACKING algorithms ,CAMERAS ,ALGORITHMS - Abstract
Fruit recognition and location are prerequisites for automatic robotic picking. YOLOv3 has been used to detect different fruits in complex environments; however, for objects with distinct features, a complex network structure increases computing time and may cause overfitting. Therefore, this paper carried out a lightweight redesign of YOLOv3, proposing an improved T-Net to detect tomato images. First, T-Net reduces the residual network layers: the number of cycles in each group of the residual unit is changed to 1, 2, 2, 1, and 1. Second, two feature layers with different scales are selected according to the features of tomatoes, and the convolutional layers at the neck are reduced by two. Finally, the location and approximate diameter of the ripe tomato are obtained by combining the node information of the Intel D435i camera and T-Net in the Robot Operating System. T-Net obtains a mean average precision (mAP) of 99.2%, F1-score of 98.9%, precision of 99.0%, and recall of 98.8% at a detection rate of 104.2 FPS. The proposed T-Net outperforms YOLOv3 with 0.4%, 0.1%, and 0.2% increases in precision, mAP, and F1-score, respectively, and its detection speed is 1.8 times that of YOLOv3. The mean errors of the tomato's center coordinates and diameter are 8.5 mm and 2.5 mm, respectively. This model provides a method for efficient real-time detection and location of tomatoes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
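The localisation step in the entry above, combining a detection box with RGB-D depth, can be sketched with the standard pinhole camera model. This is not code from the paper: the intrinsics (`fx`, `fy`, `cx`, `cy`) and pixel values below are hypothetical, and a real D435i publishes its calibrated intrinsics through its camera-info stream.

```python
def pixel_to_camera(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with depth Z (mm) into camera-frame
    coordinates: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    x = (u - cx) * depth_mm / fx
    y = (v - cy) * depth_mm / fy
    return (x, y, depth_mm)

def diameter_mm(pixel_width, depth_mm, fx):
    """Approximate object diameter from the detection box's pixel width
    at depth Z, inverting the same projection."""
    return pixel_width * depth_mm / fx
```

For example, a detection box 60 px wide centred on the principal point at 500 mm depth, with a 600 px focal length, corresponds to a tomato roughly 50 mm across sitting on the optical axis.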
24. Combining Improved Meanshift and Adaptive Shi-Tomasi Algorithms for a Photovoltaic Panel Segmentation Strategy.
- Author
-
Huang, Chao, Chao, Xuewei, Zhou, Weiji, and Gong, Lijiao
- Subjects
IMAGE segmentation ,ALGORITHMS - Abstract
To achieve effective and accurate segmentation of photovoltaic panels in various working contexts, this paper proposes a comprehensive image segmentation strategy that integrates an improved Meanshift algorithm and an adaptive Shi-Tomasi algorithm. This approach effectively addresses the challenge of low precision in segmenting target regions and boundary contours in routine photovoltaic panel inspection. Firstly, based on the image information of photovoltaic panels collected under different environments by cameras, an improved Meanshift algorithm based on platform histogram optimization is used for preliminary processing, and images containing target information are cut out; then, the adaptive Shi-Tomasi algorithm is used to extract and screen feature points from the target area; finally, the extracted feature points generate the segmentation contour of the target photovoltaic panel, achieving accurate segmentation of the target area and boundary contour of the photovoltaic panel. Experiments verified that in photovoltaic panel images under different background environments, the method proposed in this paper enhances the accuracy of segmenting the target area and boundary contour of photovoltaic panels. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Maneuvering Decision Making Based on Cloud Modeling Algorithm for UAV Evasion–Pursuit Game.
- Author
-
Huang, Hanqiao, Weng, Weiye, Zhou, Huan, Jiang, Zijian, and Dong, Yue
- Subjects
MANEUVERING boards ,DECISION making ,DRONE aircraft ,ALGORITHMS - Abstract
When facing aerial pursuit game problems, most current unmanned aerial vehicles (UAVs) have good maneuverability, but it is difficult to utilize their overload maneuverability properly; UAVs also tend to be costly, and it is often difficult to effectively prevent the enemy from reaching the tailgating position behind the UAV. There is therefore a pressing need for a maneuvering algorithm that allows a UAV in a disadvantageous position to quickly protect itself, and to stably and effectively establish an advantage by moving to an advantageous position. This paper establishes a cloud model-based UAV maneuvering decision-making model for the aerial pursuit game based on pursuit-and-evasion game positions. Based on the position evaluation, when the UAV is at a disadvantage, a constructed defensive-maneuver expert pool is used to abandon the disadvantageous position; when the UAV is at an advantage, cloud model-based pursuit-and-evasion maneuvering decision making is used to establish an advantageous position. According to the simulation examples, the maneuvering decision-making method designed in this paper confirms that the UAV can quickly abandon its position and establish an advantage from parity or disadvantage, and can stably establish a tail-chasing position when at an advantage. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Fast Extraction Algorithm of Planar Targets Based on Point Cloud Data for Monitoring the Synchronization of Bridge Jacking Displacements.
- Author
-
Liang, Dong, Zhang, Zeyu, Zhang, Qiang, Wu, Erpeng, and Huang, Haibin
- Subjects
POINT cloud ,SYNCHRONIZATION ,CLOUD storage ,ALGORITHMS ,BRIDGES ,STRUCTURAL health monitoring - Abstract
Transverse synchronization of vertical displacements of all jacking-up points is an important monitoring indicator to replace bearings in assembled multigirder bridges during the jacking phase. Currently, using target paper to identify the 3D coordinates of control points reduces the complexity of monitoring operations and improves the stability of data precision. However, the existing planar target locating methods have low accuracy, inefficiency, and subjectivity, which seriously hinders the construction process of bearing replacement. Accurately obtaining the center coordinates of multiple targets from a point cloud in a short monitoring period remains a challenge. This study proposes a high-precision automated algorithm to extract target center points in low-density point clouds to quickly calculate real target center points. First, we construct a standard point cloud model of the target papers for scanning, including color and geometric features. Then, we extract the measured point cloud of the typical jacking operation phase based on the reflection intensity and size information. Next, we map the intensity values of the measured point cloud into the color channel and register the measured point cloud with its standard point cloud model using the normal vector estimation and colored ICP algorithms. Finally, we extract the center point of the measured targets. Numerical experiments and engineering test results show that the proposed method converges quickly with high precision and good robustness, which saves 91.4% of the time compared with the traditional method. In general, this research can provide effective technical support for 3D laser scanning in monitoring the operation phase of bridge jacking. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. A lightweight license plate detection algorithm based on deep learning.
- Author
-
Zhu, Shuo, Wang, Yu, and Wang, Zongyang
- Subjects
AUTOMOBILE license plates ,DEEP learning ,INTELLIGENT transportation systems ,TRAFFIC engineering ,ALGORITHMS ,COMPUTATIONAL complexity - Abstract
License plate detection is an important task in Intelligent Transportation Systems (ITS) and has a wide range of applications in vehicle management, traffic control, and public safety. To improve the accuracy and speed of mobile recognition, an improved lightweight YOLOv5s model is proposed for license plate detection. First, an improved Stemblock network replaces the original Focus layer, which preserves strong feature expression capability while removing a large number of parameters to lower the computational complexity; second, an improved lightweight network, ShuffleNetv2, replaces the backbone network of YOLOv5s, making the model lighter while maintaining detection accuracy. A feature enhancement module is then designed to reduce the information loss caused by the rearrangement of the backbone network channels, facilitating information interaction during feature fusion; finally, the low-, medium- and high-level features in the ShuffleNetv2 network structure are fused to form the final high-level output features. Experimental results on the CCPD dataset show that, compared to other methods, this paper obtains better performance and faster speed in the license plate detection task: the mean average precision reaches 96.6%, the detection speed reaches 43.86 frames/s, and the parameter volume is reduced to 5.07 M. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Detection Algorithm of Laboratory Personnel Irregularities Based on Improved YOLOv7.
- Author
-
Yongliang Yang, Linghua Xu, Maolin Luo, Xiao Wang, and Min Cao
- Subjects
LABORATORY personnel ,COLLEGE environment ,ALGORITHMS ,PERSONNEL management ,FEATURE extraction - Abstract
Due to the complex environment of the university laboratory and intensive personnel flow, irregular personnel behavior can easily cause security risks. Monitoring with mainstream detection algorithms suffers from low detection accuracy and slow speed, so the current management of personnel behavior relies mainly on institutional constraints, education and training, and on-site supervision, which is time-consuming and ineffective. Given this situation, this paper proposes an improved You Only Look Once version 7 (YOLOv7) to quickly detect irregular behaviors of laboratory personnel while ensuring high detection accuracy. First, to better capture the shape features of the target, deformable convolutional networks (DCN) are used in the backbone of the model to replace traditional convolution, improving detection accuracy and speed. Second, to enhance the extraction of important features and suppress useless ones, this paper proposes a new convolutional block attention module_efficient channel attention (CBAM_E), embedded in the neck network, to improve the model's ability to extract features from complex scenes. Finally, to reduce the influence of the angle factor and improve bounding box regression accuracy, this paper proposes a new a-SCYLLA intersection over union (a-SIoU) to replace the complete intersection over union (CIoU), which improves regression accuracy while increasing convergence speed. Comparison experiments on public and homemade datasets show that the improved algorithm outperforms the original in all evaluation indexes, with a 2.92% increase in precision, 4.14% in recall, 0.0356 in the weighted harmonic mean, and 3.60% in mAP@0.5, together with a reduction in the number of parameters and complexity.
Compared with the mainstream algorithm, the improved algorithm has higher detection accuracy, faster convergence speed, and better actual recognition effect, indicating the effectiveness of the improved algorithm in this paper and its potential for practical application in laboratory scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Quantum-Proof Secrets.
- Author
-
HOUSTON-EDWARDS, KELSEY
- Subjects
QUANTUM computers ,CRYPTOGRAPHY ,QUANTUM cryptography ,COMPUTER systems ,ALGORITHMS - Abstract
This article discusses the urgent need to develop post-quantum cryptography in order to protect data from being compromised by future quantum computers. Public-key cryptography, which is currently used to secure information, would become ineffective if a quantum computer were able to break it. The National Institute of Standards and Technology (NIST) has launched a contest to find alternative cryptographic algorithms that are resistant to quantum attacks, and 26 algorithms have been selected for further testing. Lattice-based cryptography has emerged as a promising approach, but NIST is exploring other options to avoid relying solely on one type of algorithm. The transition to post-quantum cryptography will require time and upgrades to computer systems and protocols. [Extracted from the article]
- Published
- 2024
30. Infrared image enhancement algorithm based on detail enhancement guided image filtering.
- Author
-
Tan, Ailing, Liao, Hongping, Zhang, Bozhi, Gao, Meijing, Li, Shiyu, Bai, Yang, and Liu, Zehao
- Subjects
IMAGE intensifiers ,INFRARED imaging ,COST functions ,ENTROPY (Information theory) ,ALGORITHMS ,ENTROPY ,SIGNAL-to-noise ratio ,QUANTUM noise ,QUANTUM entropy - Abstract
Because of the unique imaging mechanism of infrared (IR) sensors, IR images commonly suffer from blurred edge details, low contrast, and poor signal-to-noise ratio. A new method is proposed in this paper to enhance IR image details so that the enhanced images effectively inhibit image noise and improve image contrast while enhancing detail. First, because the traditional guided image filter (GIF) is prone to halo artifacts when applied to IR image enhancement, this paper proposes a detail enhancement guided filter (DGIF), which mainly adds constructed edge-perception and detail-regulation factors to the cost function of the GIF. Then, according to the visual characteristics of human eyes, this paper applies the detail-regulation factor to the detail-layer enhancement, which avoids the amplification of image noise caused by enhancement with a fixed gain coefficient. Finally, the enhanced detail layer is fused directly with the base layer so that the enhanced image has rich detail information. We first compare the DGIF with four guided image filters, and then compare the algorithm of this paper with three traditional IR image enhancement algorithms and two GIF-based IR image enhancement algorithms on 20 IR images. The experimental results show that the DGIF has better edge-preserving and smoothing characteristics than the four guided image filters. The mean values of information entropy, average gradient, edge intensity, figure definition, and root-mean-square contrast of the enhanced images achieved improvements of about 0.23%, 3.4%, 4.3%, 2.1%, and 0.17%, respectively, over the best comparison result. The algorithm in this paper can thus effectively suppress image noise in the detail layer while enhancing detail information and improving image contrast, yielding a better visual effect. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
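The base/detail decomposition behind the method above can be sketched as follows. This is a simplified stand-in, not the paper's DGIF: a plain mean filter plays the role of the edge-preserving guided filter, and the fixed `gain` stands in for the paper's spatially varying detail-regulation factor.

```python
import numpy as np

def box_blur(img, radius=2):
    """Mean filter producing a smooth base layer (stand-in for the
    guided filter; edge padding avoids border darkening)."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def enhance(img, gain=2.0):
    """Split into base + detail, amplify the detail layer, and fuse:
    enhanced = base + gain * (img - base)."""
    base = box_blur(img.astype(float))
    detail = img - base
    return base + gain * detail
```

A flat region has zero detail and passes through unchanged, which is why amplifying only the detail layer boosts edges without lifting uniform noise-free areas; the paper's contribution is making `gain` adaptive so that noise in the detail layer is not amplified along with real edges.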
31. Blinded by "algo economicus": Reflecting on the assumptions of algorithmic management research to move forward.
- Author
-
Lamers, Laura, Meijerink, Jeroen, and Rettagliata, Giorgio
- Subjects
PERSONNEL management ,REFLECTION (Philosophy) ,MEDICAL research ,MATHEMATICAL models ,ECONOMIC impact ,CONCEPTUAL structures ,ONTOLOGIES (Information retrieval) ,THEORY ,ALGORITHMS ,MANAGEMENT ,ECONOMICS - Abstract
This paper reflects on the paradigmatic assumptions and ideologies that have shaped algorithmic management research. We identify two sets of assumptions: one about the "ontology of algorithms" (which holds that human resource management [HRM] algorithms are non‐human entities with material agency) and one about the "ontology of management" that HRM algorithms afford (which understands algorithmic management as a form of control for maximizing economic/shareholder value). We explain how these core assumptions underpin existing research of HRM algorithms, causing blind spots that hinder new ways of understanding and studying algorithmic management. After identifying and unpacking the assumptions and blind spots, we offer avenues to overcome these blind spots, allowing for future research based on new ideological assumption grounds that will help move algorithmic management scholarship further in significant ways. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Semantic Segmentation of Examination Papers Based on Subspace Multi-Scale Feature Fusion (基于子空间多尺度特征融合的试卷语义分割).
- Author
-
夏源祥, 刘 渝, 楚程钱, 万永菁, and 蒋翠玲
- Subjects
PYRAMIDS ,ALGORITHMS ,HANDWRITING ,CLASSIFICATION ,SUBSPACES (Mathematics) - Abstract
Copyright of Journal of East China University of Science & Technology is the property of Journal of East China University of Science & Technology Editorial Office and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
33. SOFTWARE DEFECT PREDICTION APPROACHES REVISITED.
- Author
-
Shebl, Khaled S., Afify, Yasmine M., and Badr, Nagwa
- Subjects
SEMANTICS ,DATABASES ,ALGORITHMS ,COMPUTER software testing ,MACHINE learning - Abstract
A crucial field in software development and testing is Software Defect Prediction (SDP), because forecasting software defects at an earlier stage improves the quality, dependability, efficiency, and cost of the software. Many existing models predict defects to facilitate the software testing process for testers. A comprehensive review of these models from different perspectives is crucial to help new researchers enter this field and learn about its latest developments. Algorithms, method types, datasets, and tools were the only perspectives discussed in the current literature; a comprehensive study that takes a wide spectrum of viewpoints into account has not yet been published. Examining the development and advancement of SDP-related studies is the goal of this literature review, which provides a comprehensive and updated state of the art that satisfies all stated criteria. Out of 591 papers retrieved from 6 reputable databases, 73 papers were eligible for analysis. This review addresses relevant research questions regarding techniques and method types, data details, tools, code syntax, semantics, and structural and domain information. The motivation for this comprehensive review is to equip readers with the necessary information and keep them informed about the software defect prediction domain. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
34. Revisit the scheduling problem with assignable or generalized due dates to minimize total weighted late work.
- Author
-
Chen, Rubing, Gao, Yuan, Geng, Zhichao, and Yuan, Jinjiang
- Abstract
We revisit the single-machine scheduling for minimising the total weighted late work with assignable due dates (ADD-scheduling) and generalised due dates (GDD-scheduling). In particular, we consider the following three problems: (i) the GDD-scheduling problem for minimising the total weighted late work, (ii) the ADD-scheduling problem for minimising the total weighted late work, and (iii) the ADD-scheduling problem for minimising the total late work. In the literature, the above three problems are proved to be NP-hard, but their exact complexity (unary NP-hardness or pseudo-polynomial-time solvability) are unknown. In this paper, we address these open problems by showing that the first two problems are unary NP-hard and the third problem admits pseudo-polynomial-time algorithms. For the third problem, we also present a 2-approximation solution and a fully polynomial-time approximation scheme. Computational experiments show that our algorithms and solutions are efficient. When the jobs have identical processing times, we further present more efficient polynomial-time algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
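The objective revisited in the entry above, total weighted late work, can be sketched for a fixed single-machine schedule. This is an illustrative evaluation of the objective, not one of the paper's algorithms; the job data below are hypothetical.

```python
def total_weighted_late_work(jobs):
    """jobs: list of (processing_time, due_date, weight) in schedule
    order on a single machine. The late work of a job is the part of
    its processing performed after its due date, capped at p_j:
    Y_j = min(p_j, max(0, C_j - d_j)); the objective is sum w_j * Y_j."""
    t, total = 0, 0
    for p, d, w in jobs:
        t += p                          # completion time C_j
        late = min(p, max(0, t - d))    # late work Y_j
        total += w * late
    return total
```

For instance, with jobs (3, 5, 1) and (4, 5, 2), the first finishes at 3 and is fully on time, while the second finishes at 7, so 2 of its 4 units are late and the weighted total is 4. The hardness results in the paper concern choosing the schedule (and, for ADD, assigning the due dates) to minimise this quantity.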
35. Utilizing tables, figures, charts and graphs to enhance the readability of a research paper.
- Author
-
Divecha C. A., Tullu M. S., and Karande S.
- Subjects
GRAPHIC arts ,READABILITY (Literary style) ,SERIAL publications ,RESEARCH methodology ,COPYRIGHT ,MEDICAL research ,ALGORITHMS - Abstract
The authors offer observation on utilizing tables, figures, charts and graphs to help understand the research presented in a simple manner but also engage and sustain the reader's interest. Topics discussed include benefits provided by the use of tables/figures/charts/graphs, general methodology of design and submission, and copyright issues of using material from government publications/public domain.
- Published
- 2023
- Full Text
- View/download PDF
36. Fast Decision-Tree-Based Series Partitioning and Mode Prediction Termination Algorithm for H.266/VVC.
- Author
-
Li, Ye, He, Zhihao, and Zhang, Qiuwen
- Subjects
VIDEO compression ,VIDEO coding ,TECHNOLOGICAL innovations ,ALGORITHMS ,MULTIMEDIA systems ,PARALLEL algorithms ,COMPUTATIONAL complexity ,DECISION trees ,RANDOM forest algorithms - Abstract
With the advancement of network technology, multimedia video has emerged as a crucial channel for individuals to access external information, owing to its realistic and intuitive effects. For high frame rate and high dynamic range videos, the coding efficiency of high-efficiency video coding (HEVC) falls short of the storage and transmission demands of the video content. Versatile video coding (VVC) therefore introduces a nested quadtree plus multi-type tree (QTMT) segmentation structure based on the HEVC standard, while also expanding the intra-prediction modes from 35 to 67. While the new technology introduced by VVC has enhanced compression performance, it also introduces higher computational complexity. To enhance coding efficiency and diminish computational complexity, this paper explores two key aspects: coding unit (CU) partition decision-making and intra-frame mode selection. Firstly, to address the flexible partitioning structure of QTMT, we propose a decision-tree-based series partitioning decision algorithm. By concatenating the quadtree (QT) partition decision with the multi-type tree (MT) partition decision, a strategy is implemented to determine whether to skip the MT decision based on texture characteristics; if the MT partition decision is used, four decision tree classifiers judge the different partition types. Secondly, for intra-frame mode selection, this paper proposes an ensemble-learning-based algorithm for mode prediction termination. Through the reordering of complete candidate modes and the assessment of prediction accuracy, redundant candidate modes are terminated. Experimental results show that, compared with the VVC test model (VTM), the proposed algorithm achieves an average time saving of 54.74%, while the BDBR increases by only 1.61%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Global Maximum Power Point Tracking of Photovoltaic Module Arrays Based on an Improved Intelligent Bat Algorithm.
- Author
-
Chao, Kuei-Hsiang and Bau, Thi Thanh Truc
- Subjects
MAXIMUM power point trackers ,ALGORITHMS ,CLIMATE change ,VOLTAGE - Abstract
In this paper, a method based on an improved intelligent bat algorithm (IIBA), in cooperation with voltage and current sensors, is applied to maximum power point tracking (MPPT) for a photovoltaic module array (PVMA) to enhance its power generation performance. Due to partial shading of the PVMA caused by climate changes or the surrounding environment, multiple peaks appear on the power-voltage (P-V) curve, and conventional MPPT technology can only track a local maximum power point (LMPP), reducing the output power of the PVMA. The IIBA-based MPPT proposed in this paper solves this issue and ensures that the PVMA can track the global maximum power point (GMPP), enhancing its output power. Firstly, Matlab/Simulink was used to establish a boost converter model that simulated an actual 4-series-3-parallel PVMA under different shaded conditions, generating P-V curves with 1, 2, 3, and 4 peaks. Subsequently, the tracking paces of the conventional bat algorithm (BA) were adjusted according to the gradient of the P-V curve, and 0.8 times the maximum power point (MPP) voltage Vmp under standard test conditions (STCs) was set as the initial tracking voltage. Lastly, the simulation results proved that, under different environmental impacts, the proposed IIBA achieved better dynamic and steady-state tracking performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Partial Discharge Signal Denoising Algorithm Based on Aquila Optimizer–Variational Mode Decomposition and K-Singular Value Decomposition.
- Author
-
Zhong, Jun, Liu, Zhenyu, and Bi, Xiaowen
- Subjects
SIGNAL denoising ,PARTIAL discharges ,HILBERT-Huang transform ,ELECTRIC insulators & insulation ,ALGORITHMS - Abstract
Partial discharge (PD) is a primary factor leading to the deterioration of insulation in electrical equipment. However, it is hard for traditional methods to precisely extract PD signals in increasingly complex engineering environments. This paper proposes a new PD signal denoising method combining the Aquila Optimizer–Variational Mode Decomposition (AO-VMD) and K-Singular Value Decomposition (K-SVD) algorithms. Firstly, the AO algorithm optimizes critical parameters of the VMD algorithm. The AO-VMD algorithm decomposes the noise-overwhelmed PD signal and reconstructs it using kurtosis; in this process, the majority of the noise is removed and the characteristics of the original signal emerge. Subsequently, the K-SVD algorithm performs sparse decomposition on the AO-VMD output, constructs a learned dictionary, and captures the characteristics of the signal through continuous learning and updating. After dictionary learning is completed, the best matching atoms from the dictionary are selected to precisely reconstruct the original noiseless signal. Finally, the proposed method is compared with three traditional algorithms, Adaptive Ensemble Empirical Mode Decomposition (AEEMD), SVD-VMD, and the Adaptive Wavelet Multilevel Soft Threshold algorithm, on both a simulated signal and an actual engineering signal. The results demonstrate that the algorithm proposed in this paper has superior noise reduction and signal extraction performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
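The kurtosis-guided reconstruction step described above can be sketched as follows: decomposed modes with high kurtosis are impulsive and therefore most likely to carry PD pulses. This is a minimal illustration, not the paper's pipeline; the selection threshold and toy signals are hypothetical.

```python
def kurtosis(x):
    """Sample kurtosis E[(x - mu)^4] / sigma^4 (3.0 for a Gaussian;
    much larger for impulsive, pulse-like signals)."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    m4 = sum((v - mu) ** 4 for v in x) / n
    return m4 / var ** 2

def select_modes(modes, threshold=3.5):
    """Keep only the decomposed modes whose kurtosis exceeds a
    threshold, i.e. the impulsive ones; the signal is then rebuilt
    from the kept modes (threshold is illustrative)."""
    return [m for m in modes if kurtosis(m) > threshold]
```

A single spike in an otherwise flat mode drives kurtosis far above 3, while a smooth oscillatory (noise-like) mode stays low, which is the criterion the abstract describes for deciding which VMD modes to retain before K-SVD refinement.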
39. A Novel IDS with a Dynamic Access Control Algorithm to Detect and Defend Intrusion at IoT Nodes.
- Author
-
Alazab, Moutaz, Awajan, Albara, Alazzam, Hadeel, Wedyan, Mohammad, Alshawi, Bandar, and Alturki, Ryan
- Subjects
INTRUSION detection systems (Computer security) ,ACCESS control ,INTERNET of things ,ALGORITHMS ,FALSE alarms ,MATHEMATICAL analysis - Abstract
The Internet of Things (IoT) is the underlying technology that has enabled connecting daily apparatus to the Internet and enjoying the facilities of smart services. The IoT market is experiencing an impressive 16.7% growth rate and is worth nearly USD 300.3 billion. These eye-catching figures have made it an attractive playground for cybercriminals. IoT devices are built on resource-constrained architectures to offer compact sizes and competitive prices; as a result, integrating sophisticated cybersecurity features is beyond their computational capabilities. All of this has contributed to a surge in IoT intrusions. This paper presents an LSTM-based Intrusion Detection System (IDS) with a Dynamic Access Control (DAC) algorithm that not only detects but also defends against intrusion. This novel approach has achieved an impressive 97.16% validation accuracy. Unlike most IDSs, the model of the proposed IDS has been selected and optimized through mathematical analysis. Additionally, it can identify a wider range of threats (14 in total) than other IDS solutions, translating to enhanced security, and it has been fine-tuned to strike a balance between accurately flagging threats and minimizing false alarms. Its performance metrics (precision, recall, and F1 score all hovering around 97%) showcase the potential of this innovative IDS to elevate IoT security. The proposed IDS boasts a detection rate exceeding 98%, and its response time, averaging under 1.2 s, positions it among the fastest intrusion detection systems available. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. A scalable blockchain based framework for efficient IoT data management using lightweight consensus.
- Author
-
Haque, Ehtisham Ul, Shah, Adil, Iqbal, Jawaid, Ullah, Syed Sajid, Alroobaea, Roobaea, and Hussain, Saddam
- Subjects
DATA management ,INTERNET of things ,NETWORK performance ,BLOCKCHAINS ,SCALABILITY ,ALGORITHMS - Abstract
Recent research has focused on applying blockchain technology to solve security-related problems in Internet of Things (IoT) networks. However, the inherent scalability issues of blockchain technology become apparent in the presence of a vast number of IoT devices and the substantial data generated by these networks. Therefore, in this paper, we use a lightweight consensus algorithm to address these problems. We propose a scalable blockchain-based framework for managing IoT data that caters to a large number of devices. This framework utilizes the Delegated Proof of Stake (DPoS) consensus algorithm to ensure enhanced performance and efficiency in resource-constrained IoT networks. DPoS, being a lightweight consensus algorithm, leverages a small number of elected delegates to validate and confirm transactions, thus mitigating the performance and efficiency degradation of blockchain-based IoT networks. In this paper, we implemented an InterPlanetary File System (IPFS) for distributed storage and used Docker to evaluate network performance in terms of throughput, latency, and resource utilization. We divided our analysis into four parts: latency, throughput, resource utilization, and file upload time and speed in the distributed storage evaluation. Our empirical findings demonstrate that our framework exhibits low latency, measuring less than 0.976 ms. The proposed technique outperforms Proof of Stake (PoS), a state-of-the-art consensus technique. We also demonstrate that the proposed approach is useful in IoT applications where low latency or resource efficiency is required. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
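The delegate-based validation that the abstract above describes can be sketched in a few lines. This is a generic illustration of DPoS mechanics (stake-weighted delegate election plus round-robin block validation), not the paper's framework; the node names, stakes, and delegate count are hypothetical.

```python
# Illustrative DPoS sketch: stakeholders vote for delegates in proportion to
# their stake, and only the top-k delegates take turns validating blocks.
# All names and parameters below are hypothetical examples.

def elect_delegates(votes, k):
    """votes: {candidate: total stake voted for it}. Return the top-k
    candidates, ranked by stake (descending), ties broken by name."""
    ranked = sorted(votes, key=lambda c: (-votes[c], c))
    return ranked[:k]

def schedule_validator(delegates, block_height):
    """Round-robin scheduling: which delegate validates a given block."""
    return delegates[block_height % len(delegates)]

votes = {"nodeA": 500, "nodeB": 300, "nodeC": 450, "nodeD": 100}
delegates = elect_delegates(votes, k=3)
validator = schedule_validator(delegates, block_height=4)
```

Because only `k` delegates run consensus instead of every device, validation cost stays flat as the IoT network grows, which is the efficiency argument the abstract makes.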
41. Image convolution techniques integrated with YOLOv3 algorithm in motion object data filtering and detection.
- Author
-
Cheng, Mai and Liu, Mengyuan
- Subjects
TRACKING algorithms ,FILTERS & filtration ,VIDEO surveillance ,ALGORITHMS ,IMAGE segmentation ,RESEARCH personnel ,JOGGING - Abstract
In order to address the challenges of identifying, detecting, and tracking moving objects in video surveillance, this paper emphasizes image-based dynamic entity detection. It delves into the complexities of numerous moving objects, dense targets, and intricate backgrounds. Leveraging the You Only Look Once (YOLOv3) algorithm framework, this paper proposes improvements in image segmentation and data filtering to address these challenges. These enhancements form a novel multi-object detection algorithm based on an improved YOLOv3 framework, specifically designed for video applications. Experimental validation demonstrates the feasibility of this algorithm, with success rates exceeding 60% for videos such as "jogging", "subway", "video 1", and "video 2". Notably, the detection success rates for "jogging" and "video 1" consistently surpass 80%, indicating outstanding detection performance. Although the accuracy slightly decreases for "Bolt" and "Walking2", success rates still hover around 70%. Comparative analysis with other algorithms reveals that this method's tracking accuracy surpasses that of particle filters, Discriminative Scale Space Tracker (DSST), and Scale Adaptive Multiple Features (SAMF) algorithms, with an accuracy of 0.822. This indicates superior overall performance in target tracking. Therefore, the improved YOLOv3-based multi-object detection and tracking algorithm demonstrates robust filtering and detection capabilities in noise-resistant experiments, making it highly suitable for various detection tasks in practical applications. It can address inherent limitations such as missed detections, false positives, and imprecise localization. These improvements significantly enhance the efficiency and accuracy of target detection, providing valuable insights for researchers in the field of object detection, tracking, and recognition in video surveillance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
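A standard data-filtering step in YOLO-family detection pipelines like the one summarized above is non-maximum suppression (NMS), which discards overlapping duplicate boxes. The sketch below shows generic greedy NMS over IoU scores; it illustrates the family of techniques, not the paper's specific improvements.

```python
# Generic greedy non-maximum suppression (NMS), a common post-processing
# filter in YOLO-style detectors. Boxes are (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep the indices of boxes that survive suppression: visit boxes in
    descending score order, dropping any box that overlaps a kept box by
    more than `thresh` IoU."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```

Lowering `thresh` filters more aggressively, trading duplicate suppression against missed detections in dense scenes, which is exactly the crowded-target regime the abstract targets.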
42. A flocking control algorithm of multi-agent systems based on cohesion of the potential function.
- Author
-
Li, Chenyang, Yang, Yonghui, Jiang, Guanjie, and Chen, Xue-Bo
- Subjects
COHESION ,POTENTIAL functions ,MULTIAGENT systems ,SOCIAL distance ,SOCIAL cohesion ,ALGORITHMS ,CHANGE agents - Abstract
Flocking cohesion is critical for maintaining a group's aggregation and integrity. Designing a potential function that keeps flocking cohesion unaffected by social distance is challenging, because the uncertainty of real-world conditions and environments causes changes in agents' social distance. Previous flocking research based on potential functions has primarily focused on a uniform social distance among agents and on the attraction–repulsion of the potential function, ignoring another property affecting flocking cohesion: well depth, as well as the effect of changes in agents' social distance on well depth. This paper investigates the effect of potential function well depths and agents' social distances on multi-agent flocking cohesion. Through the analysis, proofs, and classification of these potential functions, we find that the potential function well depth is proportional to flocking cohesion. Moreover, we observe that the potential function well depth varies as the agents' social distance changes. Therefore, we design a segmented potential function and combine it with the flocking control algorithm in this paper. It enhances flocking cohesion significantly and is robust enough to keep flocking cohesion unaffected by variations in the agents' social distance, while also reducing the time required for flocking formation. Subsequently, the Lyapunov theorem and the LaSalle invariance principle prove the stability and convergence of the proposed control algorithm. Finally, this paper simulates an encounter between two subgroups with different potential function well depths and social distances for verification. The corresponding simulation results verify the effectiveness of the flocking control algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
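The well-depth property discussed in the abstract above can be made concrete with a toy pairwise potential. The Lennard-Jones-style form below, its equilibrium distance `d`, and the `depth` parameter are illustrative assumptions, not the authors' segmented potential; it only demonstrates that a deeper well produces a proportionally stronger cohesive restoring force.

```python
# Toy pairwise flocking potential with a tunable well depth. U(r) has its
# minimum of -depth at the "social distance" r = d; the inter-agent force
# is -dU/dr (positive = repulsion, negative = attraction).

def potential(r, d=1.0, depth=1.0):
    """Repulsive for r < d, attractive beyond; minimum -depth at r = d."""
    s = d / r
    return depth * (s ** 4 - 2 * s ** 2)

def force_magnitude(r, d=1.0, depth=1.0, h=1e-6):
    """Force = -dU/dr, estimated by a central finite difference."""
    return -(potential(r + h, d, depth) - potential(r - h, d, depth)) / (2 * h)
```

Doubling `depth` doubles the attractive force at any separation beyond `d`, which mirrors the paper's finding that well depth is proportional to flocking cohesion.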
43. Performance analysis of deep learning-based object detection algorithms on COCO benchmark: a comparative study.
- Author
-
Tian, Jiya, Jin, Qiangshan, Wang, Yizong, Yang, Jie, Zhang, Shuping, and Sun, Dengxun
- Subjects
OBJECT recognition (Computer vision) ,DEEP learning ,MACHINE learning ,ALGORITHMS ,SMART cities ,URBAN renewal - Abstract
This paper thoroughly explores the role of object detection in smart cities, specifically focusing on advancements in deep learning-based methods. Deep learning models gain popularity for their autonomous feature learning, surpassing traditional approaches. Despite progress, challenges remain, such as achieving high accuracy in urban scenes and meeting real-time requirements. The study aims to contribute by analyzing state-of-the-art deep learning algorithms, identifying accurate models for smart cities, and evaluating real-time performance using the Average Precision at Medium Intersection over Union (IoU) metric. The reported results showcase various algorithms' performance, with Dynamic Head (DyHead) emerging as the top scorer, excelling in accurately localizing and classifying objects. Its high precision and recall at medium IoU thresholds signify robustness. The paper suggests considering the mean Average Precision (mAP) metric for a comprehensive evaluation across IoU thresholds, if available. Despite this, DyHead stands out as the superior algorithm, particularly at medium IoU thresholds, making it suitable for precise object detection in smart city applications. The performance analysis using Average Precision at Medium IoU is reinforced by the Average Precision at Low IoU (APL), consistently depicting DyHead's superiority. These findings provide valuable insights for researchers and practitioners, guiding them toward employing DyHead for tasks prioritizing accurate object localization and classification in smart cities. Overall, the paper navigates through the complexities of object detection in urban environments, presenting DyHead as a leading solution with robust performance metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
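The evaluation the abstract above relies on scores detections at a fixed IoU threshold. The sketch below shows the core of such scoring: greedy matching of predictions to ground-truth boxes at a threshold, yielding precision and recall. It is a generic illustration of IoU-threshold evaluation, not the exact COCO evaluation protocol.

```python
# Score detections at a fixed IoU threshold: a prediction is a true positive
# if it overlaps a not-yet-matched ground-truth box by at least `thresh`.
# Predictions are assumed pre-sorted by confidence (highest first).

def iou(a, b):
    """Intersection-over-union of boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(preds, gts, thresh=0.5):
    """Greedily match each prediction to its best unmatched ground truth."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, 0.0
        for gi, g in enumerate(gts):
            if gi in matched:
                continue
            v = iou(p, g)
            if v > best_iou:
                best, best_iou = gi, v
        if best is not None and best_iou >= thresh:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall
```

Average precision then summarizes the precision-recall trade-off as the confidence cutoff sweeps over all predictions; raising the IoU threshold makes the localization requirement stricter.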
44. An Improved Evolutionary Multi-Objective Clustering Algorithm Based on Autoencoder.
- Author
-
Qiu, Mingxin, Zhang, Yingyao, Lei, Shuai, and Gu, Miaosong
- Subjects
ALGORITHMS ,EVOLUTIONARY algorithms ,DEEP learning - Abstract
Evolutionary multi-objective clustering (EMOC) algorithms have gained popularity recently, as they can obtain a set of clustering solutions in a single run by optimizing multiple objectives. Particularly, in one type of EMOC algorithm, the number of clusters k is taken as one of the multiple objectives to obtain a set of clustering solutions with different k. However, the number of clusters k and the other objectives are not always in conflict, so it is impossible to obtain clustering solutions with all different k in a single run. Therefore, evolutionary multi-objective k-clustering (EMO-KC) has recently been proposed to ensure this conflict. However, EMO-KC could not obtain good clustering accuracy on high-dimensional datasets. Moreover, EMO-KC's validity is not ensured, as one of its objectives (SSDexp, which is transformed from the sum of squared distances (SSD)) could not be effectively optimized, and it could not avoid invalid solutions in its initialization. In this paper, an improved evolutionary multi-objective clustering algorithm based on autoencoder (AE-IEMOKC) is proposed to improve the accuracy and ensure the validity of EMO-KC. The proposed AE-IEMOKC is established by combining an autoencoder with an improved version of EMO-KC (IEMO-KC) for better accuracy, where IEMO-KC improves on EMO-KC by proposing a scaling factor to help effectively optimize the objective of SSDexp and by introducing a valid initialization to avoid invalid solutions. Experimental results on several datasets demonstrate the accuracy and validity of AE-IEMOKC. The results of this paper may provide some useful information for other EMOC algorithms to improve accuracy and convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
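The SSD objective named in the abstract above is straightforward to compute; the bounded exponential transform and `scale` parameter in the sketch below are illustrative assumptions standing in for the paper's SSDexp and scaling factor, whose exact formulation is not given here.

```python
# Sum of squared distances (SSD): the classic compactness objective for a
# clustering, plus a hypothetical bounded exponential transform of it.
import math

def ssd(points, labels, centroids):
    """Sum of squared distances from each 2-D point to its assigned centroid."""
    total = 0.0
    for (x, y), k in zip(points, labels):
        cx, cy = centroids[k]
        total += (x - cx) ** 2 + (y - cy) ** 2
    return total

def ssd_exp(points, labels, centroids, scale=1.0):
    """Hypothetical transform mapping SSD into (0, 1]; `scale` keeps the
    exponent in a numerically useful range, echoing the role the abstract
    attributes to IEMO-KC's scaling factor."""
    return math.exp(-ssd(points, labels, centroids) / scale)
```

Without an appropriate `scale`, large SSD values drive the exponential to zero for every candidate solution, flattening the objective landscape, which is one plausible reading of why a scaling factor helps optimization.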
45. Study on Relay Contact Bounce Based on the Adaptive Weight Rotation Template Matching Algorithm.
- Author
-
Zhao, Wenze, Yan, Jiaxing, Wang, Xin, Li, Wenhua, Yang, Xinglin, and Wang, Weiming
- Subjects
KINETIC energy ,ROTATIONAL motion ,CONTACT angle ,ALGORITHMS ,IMAGE processing ,ANGLES - Abstract
To analyze the relay action process from an imaging perspective and further investigate the bounce phenomenon of relay contacts during the contact process, this paper utilizes a high-speed shooting platform to capture images of relay action. Because the stationary contact in the image is inclined and its angle changes continuously, a rotation template matching algorithm based on adaptive weights is proposed. The algorithm identifies the inclination angle of the stationary contact, enabling the study of the relay contact bounce process. By extracting contact bounce distance data from the images, a bounce process curve is plotted; combined with an analysis of the contact bounce process, the causes of the bounce are explored. The results indicate that the proposed rotation template matching algorithm can accurately identify stationary contacts at different inclination angles. By analyzing the contact status and bounce process of the relay contacts in conjunction with the relay structure, parameters such as the bounce time, bounce height, and time required to reach the maximum distance can be calculated. Additionally, the main cause of contact bounce in the relay studied in this paper is the limitation imposed on the continued movement of the stationary contact by the relay brackets when the kinetic energy of the contact is too high; this occurs during the first vibration peak after the moving contact touches the stationary contact. The research results provide a reference for further studying the relay contact bounce process, optimizing relay structure, and suppressing contact bounce. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. Algorithms for Liver Segmentation in Computed Tomography Scans: A Historical Perspective.
- Author
-
Niño, Stephanie Batista, Bernardino, Jorge, and Domingues, Inês
- Subjects
COMPUTED tomography ,IMAGE processing ,COMPUTER-assisted image analysis (Medicine) ,ARTIFICIAL intelligence ,ALGORITHMS ,IMAGE reconstruction algorithms - Abstract
Oncology has emerged as a crucial field of study in the domain of medicine. Computed tomography has gained widespread adoption as a radiological modality for the identification and characterisation of pathologies, particularly in oncology, enabling precise identification of affected organs and tissues. However, achieving accurate liver segmentation in computed tomography scans remains a challenge due to the presence of artefacts and the varying densities of soft tissues and adjacent organs. This paper compares artificial intelligence algorithms and traditional medical image processing techniques to assist radiologists in liver segmentation in computed tomography scans and evaluates their accuracy and efficiency. Despite notable progress in the field, the limited availability of public datasets remains a significant barrier to broad participation in research studies and replication of methodologies. Future directions should focus on increasing the accessibility of public datasets, establishing standardised evaluation metrics, and advancing the development of three-dimensional segmentation techniques. In addition, maintaining a collaborative relationship between technological advances and medical expertise is essential to ensure that these innovations not only achieve technical accuracy, but also remain aligned with clinical needs and realities. This synergy ensures their applicability and effectiveness in real-world healthcare environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Dynamic phasor measurement algorithm based on high-precision time synchronization.
- Author
-
Jie Zhang, Fuxin Li, Zhengwei Chang, Chunhua Hu, Chun Liu, and Sihao Tang
- Subjects
PHASOR measurement ,COVARIANCE matrices ,ELECTRIC power ,ELECTRIC power distribution grids ,SYNCHRONIZATION ,ALGORITHMS ,KALMAN filtering - Abstract
Ensuring the swift and precise tracking of power system signal parameters, especially the frequency, is imperative for the secure and stable operation of power grids. When faults occur in the distribution network, abrupt changes in frequency may arise, presenting a challenge for existing algorithms, which struggle to track such signal variations effectively. To improve performance in the face of abrupt frequency changes, this paper introduces an innovative approach: the Covariance Reconstruction Extended Kalman Filter (CREKF) algorithm. Initially, the dynamic signal model of electric power is meticulously analyzed, establishing a dynamic signal relationship based on high-precision time-source sampling tailored to the signal model's characteristics. Subsequently, the filter gain, covariance matrix, and variance iteration equation are determined from the signal relationship among three sampling points. In a final step, recognizing the impact of the covariance matrix on the algorithm's tracking ability, the paper proposes a covariance matrix reset mechanism utilizing hysteresis induced by output errors. Extensive verification with simulated signals conclusively demonstrates that the CREKF algorithm exhibits superior measurement accuracy and accelerated tracking speed when confronted with abruptly changing signals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
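The covariance-reset idea in the abstract above can be illustrated on a scalar random-walk Kalman tracker: when the output error exceeds a band, the error covariance is re-inflated so the gain reopens and the filter re-converges quickly after a step change. The threshold, reset value, and noise parameters below are illustrative, not CREKF's.

```python
# Scalar Kalman-style frequency tracker with a covariance reset: a large
# innovation (output error) re-inflates the error covariance P, restoring
# a high gain and hence fast tracking after an abrupt frequency step.

class ResetKF:
    def __init__(self, x0=50.0, p0=1.0, q=1e-4, r=0.01,
                 err_thresh=0.5, reset_p=10.0):
        self.x, self.p, self.q, self.r = x0, p0, q, r
        self.err_thresh, self.reset_p = err_thresh, reset_p

    def update(self, z):
        self.p += self.q                  # predict step (random-walk model)
        innov = z - self.x                # output error
        if abs(innov) > self.err_thresh:  # error outside the band:
            self.p = self.reset_p         # re-open the filter gain
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * innov
        self.p *= (1 - k)
        return self.x

kf = ResetKF()
for _ in range(50):
    kf.update(50.0)            # converge on a 50 Hz estimate
before_jump = kf.x
for _ in range(5):
    kf.update(51.0)            # abrupt frequency step
after_jump = kf.x
```

Without the reset, the converged gain is small and the estimate would crawl toward 51 over many samples; with it, the estimate lands within a few updates, which is the tracking-speed benefit the abstract claims for covariance reconstruction.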
48. Research on WSN reliable ranging and positioning algorithm for forest environment.
- Author
-
Wu, Peng, Yu, Le, Yi, Xiaomei, Xu, Liang, Liu, LiJuan, Yi, YuTong, Jiang, Tengteng, and Tao, Chunling
- Subjects
WIRELESS sensor networks ,ALGORITHMS - Abstract
Wireless sensor network (WSN) localization is a significant research area. In complex environments such as forests, inaccurate signal-strength ranging is a major challenge. To address this issue, this paper presents a reliable WSN ranging and positioning algorithm for forest environments. The algorithm divides the positioning area into several sub-regions based on the discrete coefficient of the collected signal strength. Then, by fitting the signal intensity values of each sub-region, the algorithm derives the reference points of the logarithmic distance path loss model and the path loss index. Finally, the algorithm locates target nodes using anchor nodes in different regions. Additionally, to enhance positioning accuracy, weight values are assigned to the positioning results based on the discrete coefficient of the signal intensity in each sub-region. Experimental results demonstrate that the proposed WSN algorithm achieves high precision in forest environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
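The logarithmic distance path loss model that each sub-region fits in the abstract above is RSSI(d) = RSSI(d0) - 10·n·log10(d/d0), where n is the path loss index. The sketch below shows the model and its inversion into a distance estimate; the reference RSSI and path loss index values are illustrative assumptions (per the paper, they would be fit separately per sub-region).

```python
# Log-distance path loss model and its inversion for RSSI-based ranging.
import math

def rssi_at(d, rssi_d0=-40.0, n=2.7, d0=1.0):
    """Predicted RSSI (dBm) at distance d metres under the model."""
    return rssi_d0 - 10 * n * math.log10(d / d0)

def distance_from_rssi(rssi, rssi_d0=-40.0, n=2.7, d0=1.0):
    """Invert the model: estimate distance from a measured RSSI."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))
```

Fitting `n` and `rssi_d0` per sub-region, then weighting each region's position fix by the inverse of its signal-strength dispersion, matches the structure the abstract describes for handling forest-induced ranging error.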
49. Path Planning for AUVs Based on Improved APF-AC Algorithm.
- Author
-
Guojun Chen, Danguo Cheng, Wei Chen, Xue Yang, and Tiezheng Guo
- Subjects
ANT algorithms ,AUTONOMOUS underwater vehicles ,UNDERWATER exploration ,AUTOMATION equipment ,ALGORITHMS ,SUBMERSIBLES - Abstract
With the increase in ocean exploration activities and underwater development, the autonomous underwater vehicle (AUV) has been widely used as a type of underwater automation equipment in the detection of underwater environments. However, current AUVs generally suffer from drawbacks such as limited endurance, low intelligence, and poor detection ability. Research on and implementation of path-planning methods is a prerequisite for AUVs to accomplish real-world tasks. To improve the underwater operation ability of the AUV, this paper studies the typical problems of path planning for the ant colony algorithm and the artificial potential field algorithm. In response to the limitations of a single algorithm, an optimization scheme is proposed to improve the artificial potential field ant colony (APF-AC) algorithm. Compared with the traditional ant colony and comparative algorithms, APF-AC reduced the path length by 1.57% and 0.63% in the simple environment, and by 8.92% and 3.46% in the complex environment. The iteration time was reduced by approximately 28.48% and 18.05% in the simple environment, and by 18.53% and 9.24% in the complex environment. Finally, the improved APF-AC algorithm was validated on an AUV platform, and the experiment is consistent with the simulation. The improved APF-AC algorithm can effectively reduce the underwater operation time and overall power consumption of the AUV, and shows higher safety. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
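The artificial potential field half of the hybrid described above steers the vehicle with an attractive force toward the goal plus repulsive forces inside each obstacle's influence radius. The sketch below shows the classic APF force law in 2-D; the gains and influence radius are illustrative assumptions, and the paper's contribution (combining APF with ant colony optimization) is not reproduced here.

```python
# Classic artificial potential field (APF) force in 2-D: linear attraction
# toward the goal, plus a repulsion term that grows as the vehicle nears an
# obstacle within the influence radius rho0.
import math

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=5.0):
    """Net (fx, fy) force on the vehicle at `pos`."""
    fx = k_att * (goal[0] - pos[0])            # attractive component
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        rho = math.hypot(dx, dy)
        if 0 < rho < rho0:                     # inside the influence radius
            mag = k_rep * (1 / rho - 1 / rho0) / rho ** 2
            fx += mag * dx / rho               # push away from the obstacle
            fy += mag * dy / rho
    return fx, fy
```

APF alone is prone to local minima where attraction and repulsion cancel; pairing it with a global search such as ant colony optimization, as the abstract does, is a common remedy.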
50. A Segmented Hybrid Algorithm for Beam Shaping Combining Iterative and Simulated Annealing Approaches.
- Author
-
Zhang, Xiaoyu, Zhang, Qi, and Chen, Genxiang
- Subjects
SIMULATED annealing ,STANDARD deviations ,ALGORITHMS ,OPTICAL communications ,LASER beams - Abstract
In recent years, laser technology has made significant advancements, yet there are specific requirements for the energy concentration and uniformity of lasers in various fields, such as optical communication, laser processing, 3D printing, etc. Beam shaping technology enables the transformation of ordinary Gaussian-distributed laser beams into square or circular flat-top uniform beams. Currently, LCOS-based beam shaping algorithms do not adequately meet these requirements, and most of these algorithms do not simultaneously consider the impact of phase quantization and zero-padding, leading to a decrease in the practicality of phase holograms. To address these issues, this paper proposes a novel segmented beam shaping algorithm that combines iterative and simulated annealing approaches. This paper validated the reliability of the proposed algorithm through numerical simulations. Compared to other algorithms, the proposed algorithm can effectively reduce the root mean square error by an average of nearly 37% and decrease the uniformity error by almost 39% without a significant decrease in diffraction efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
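The simulated-annealing half of the hybrid described above can be sketched on a toy objective: flattening a 1-D intensity profile by minimizing its deviation from the mean, a crude stand-in for the paper's uniformity error. The move set, cooling schedule, and parameters are illustrative assumptions, and the iterative (phase-retrieval) half of the hybrid is not shown.

```python
# Simulated annealing on a toy beam-uniformity objective: perturb one
# element at a time, always accept improvements, and accept worse moves
# with a Boltzmann probability that shrinks as the temperature cools.
import math, random

def uniformity_error(profile):
    """RMS deviation from the mean intensity (lower = flatter profile)."""
    m = sum(profile) / len(profile)
    return math.sqrt(sum((v - m) ** 2 for v in profile) / len(profile))

def anneal(profile, t0=1.0, cooling=0.95, steps=2000, seed=0):
    rng = random.Random(seed)
    cur = list(profile)
    cur_err, t = uniformity_error(cur), t0
    for _ in range(steps):
        cand = list(cur)
        i = rng.randrange(len(cand))
        cand[i] += rng.uniform(-0.1, 0.1)      # small random perturbation
        err = uniformity_error(cand)
        if err < cur_err or rng.random() < math.exp((cur_err - err) / t):
            cur, cur_err = cand, err           # accept the move
        t *= cooling                           # geometric cooling schedule
    return cur, cur_err

start = [1.0, 0.5, 1.5, 0.8, 1.2]
flat, err = anneal(start)
```

In the paper's setting the perturbations would act on quantized hologram phase values rather than intensities directly, but the accept/cool loop is the same; segmenting the problem, as the title indicates, restricts each annealing pass to part of the hologram.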