40,896 results
Search Results
152. Paper-based dosing algorithms for maintenance of warfarin anticoagulation.
- Author
-
Wilson, Sarah, Costantini, Lorrie, and Crowther, Mark
- Abstract
We examined the quality of anticoagulation produced by two paper-based warfarin dosing algorithms in a randomized clinical trial of warfarin therapy. Fifty-eight patients were randomized to receive warfarin at a target international normalized ratio (INR) range of 2.1–3.0 and were followed for an average of 2.7 years. As a proportion of total patient-time, the percentage of time spent above, within, and below the therapeutic range was 11%, 71%, and 19% respectively. Fifty-six patients were randomized to receive warfarin at a higher target INR range (3.1–4.0) and had INRs within the therapeutic range for 40% of total patient time. We conclude that the performance, minimal cost, and ease-of-use of these algorithms make them well-suited for patient management within primary-care and research settings. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
153. A new implicit blending technique for volumetric modelling.
- Author
-
Bogdan Lipu and Nikola Guid
- Subjects
PAPER, ALGORITHMS, ALGEBRA, FOUNDATIONS of arithmetic - Abstract
Current implicit blending techniques are mostly designed for use in surface modelling, where only the boundaries of the object defined by the implicit primitives are important. In volumetric implicit modelling, by contrast, the interior of the object is also significant, which calls for different, more suitable techniques for combining implicit primitives. In this paper, we first discuss irregularities that occur with the current techniques. Then, a new technique for blending implicit primitives, especially appropriate in volumetric modelling (e.g., cloud modelling), is introduced. It overcomes these irregularities and gives better results than current techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2005
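As an illustration of the kind of blending at issue, here is a minimal sketch of a Ricci-style blended union of two implicit primitives. This is a generic textbook construction, not the authors' new technique; the `sphere`/`blend_union` names and the exponent `n` are illustrative assumptions.

```python
import math

def sphere(cx, cy, cz, r):
    """Implicit primitive: positive inside, zero on the surface, negative outside."""
    return lambda x, y, z: r - math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2)

def blend_union(f1, f2, n=2.0):
    """Ricci-style blended union: approaches max(f1, f2) as n grows, but gives a
    smooth transition (and larger interior values) for small n.  Negative values
    are clamped to zero, so the field is zero on and outside the blended surface."""
    def f(x, y, z):
        a, b = max(f1(x, y, z), 0.0), max(f2(x, y, z), 0.0)
        return (a**n + b**n)**(1.0 / n)
    return f

s1 = sphere(-0.5, 0.0, 0.0, 1.0)
s2 = sphere(0.5, 0.0, 0.0, 1.0)
smooth = blend_union(s1, s2, n=2.0)   # pronounced bulge in the overlap
sharp  = blend_union(s1, s2, n=64.0)  # nearly a plain set union

# In the overlap region the smooth blend exceeds the plain union; in volume
# modelling this also changes the interior density, which is the point at issue.
p = (0.0, 0.0, 0.0)
print(smooth(*p) > max(s1(*p), s2(*p)))  # → True
```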
154. Trend analysis of global usage of digital soil mapping models in the prediction of potentially toxic elements in soil/sediments: a bibliometric review.
- Author
-
Agyeman, Prince Chapman, Ahado, Samuel Kudjo, Borůvka, Luboš, Biney, James Kobina Mensah, Sarkodie, Vincent Yaw Oppong, Kebonye, Ndiye M., and Kingsley, John
- Subjects
DIGITAL soil mapping, TREND analysis, PREDICTION models, GLOBAL analysis (Mathematics), SOIL pollution - Abstract
The rising and continuous pollution of the soil from anthropogenic activities is of great concern. Owing to this concern, digital soil mapping (DSM) has become a tool that soil scientists use to predict the potentially toxic element (PTE) content of soil. The purpose of this paper was to review articles on the spatial prediction of potentially toxic elements, summarize and analyse them, and determine and compare the models' usage and performance over time. Through Scopus, the Web of Science and Google Scholar, we collected papers published between 2001 and the first quarter of 2019 on spatial PTE prediction using DSM approaches. The results indicated that soil pollution emanates from diverse sources. The review also examined why authors investigate particular areas, highlighting the uncertainties in mapping, the number of publications per journal, and continental research and publication efforts on trending DSM issues. This paper reveals the complementary roles that machine learning algorithms and geostatistical models play in DSM. Nevertheless, geostatistical approaches remain more widely preferred than machine learning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
155. LOCP: Latency-optimized channel pruning for CNN inference acceleration on GPUs.
- Author
-
Zhang, Yonghua, Jiang, Hongxu, Zhu, Yuting, Zhang, Runhua, Cao, Yongxiang, Zhu, Chenhui, Wang, Wei, Dong, Dong, and Li, Xiaobin
- Subjects
CONVOLUTIONAL neural networks, ALGORITHMS - Abstract
Channel pruning has recently become a widely used model compression method. However, most existing channel pruning methods prune only to decrease model size, such as the number of parameters or FLOPs, and the decrease in model size does not effectively translate into improved inference performance. To address this problem, this paper proposes a latency-optimized channel pruning method for CNN inference acceleration on GPU platforms, based on latency stair-step discrimination, two-stage benefit assessment and latency-sharing channel pruning. Compared with recent state-of-the-art model compression methods, it achieves significant improvements in inference performance at comparable compression rates and model accuracy. The contributions of this paper are as follows: first, a three-point latency stair-step discrimination method is proposed for determining the candidate prunable coordinates with the best latency performance on the current hardware. Second, a two-stage benefit assessment method based on interlayer dependencies is proposed for determining the optimal channel pruning rate of each layer in the network. Finally, a latency-sharing channel pruning framework is proposed to accelerate the model pruning adaptation process. The proposed method significantly reduces model inference latency on multiple types of GPU platforms. To verify its effectiveness, we evaluate the algorithm on three general-purpose GPU platforms and two embedded GPU platforms. The experimental results show that, for recent state-of-the-art CNNs, the proposed method achieves a 6.6–22.0% latency reduction, a 1.3–3.0× inference performance improvement, and a 1.2–4.3× pruning adaptation speedup with high model accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
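For context, the size-oriented baseline that latency-aware methods like this one improve on can be sketched as magnitude-based channel selection. This is generic background, not LOCP itself (whose stair-step discrimination requires GPU latency measurements); the layer shapes and `keep_ratio` here are hypothetical.

```python
def l1_norms(weights):
    """weights: list of output-channel filters, each a flat list of floats."""
    return [sum(abs(w) for w in ch) for ch in weights]

def prune_channels(weights, keep_ratio):
    """Keep the top keep_ratio fraction of channels by L1 norm.  Returns the
    sorted indices of kept channels, mimicking how a pruned conv layer would
    be rebuilt with fewer output channels.  A latency-aware method would
    additionally snap the kept count to hardware-efficient channel counts."""
    norms = l1_norms(weights)
    k = max(1, int(len(weights) * keep_ratio))
    ranked = sorted(range(len(weights)), key=lambda i: norms[i], reverse=True)
    return sorted(ranked[:k])

# 4 output channels; channel 1 has near-zero weights and is pruned first.
layer = [[0.9, -0.8], [0.01, 0.02], [0.5, 0.5], [-0.7, 0.6]]
print(prune_channels(layer, 0.5))  # → [0, 3]
```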
156. DNA dynamic coding-based encryption algorithm for vector map considering global objects.
- Author
-
Yan, Qingbo, Yan, Haowen, Zhang, Liming, Wang, Xiaolong, Li, Pengbo, and Yan, Xiaojing
- Subjects
VECTOR data, DNA, ELECTRONIC data processing, DATA warehousing, ALGORITHMS - Abstract
With the rapid development of digitalization and networking, copying and sharing vector map data has become convenient, but it also brings security risks such as data interception and tampering. Current encryption methods focus on partially encrypting objects, which may leave some sensitive and confidential objects unencrypted; additionally, the encryption effect for point layers is not satisfactory. This paper proposes an algorithm for encrypting vector maps based on DNA dynamic encoding. Initially, global scrambling is performed on all object coordinates using double random position permutation, and a four-dimensional hyperchaotic system is selected to ensure the complexity of the chaotic sequence. Next, DNA dynamic coding operations are applied to whole layers of the vector map to encrypt all data. Finally, the encrypted data can be decrypted and restored according to the DNA coding rules and the double random position permutation mapping relationship, with the decrypted data identical to the original. Experimental results indicate that the proposed algorithm can be applied to the encryption protection of various vector map elements and, in particular, improves performance on encrypting point-layer data compared with existing encryption algorithms. It improves the security of vector data during storage and transmission, and has potential application value in the protection of vector maps. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
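The dynamic-coding idea in the abstract can be sketched generically: one of the eight standard complementary DNA coding rules is chosen per byte by a key stream, so the same data encodes differently at every position. This is an illustrative reconstruction, not the paper's algorithm; the rule table ordering and the toy key stream (standing in for the hyperchaotic sequence) are assumptions.

```python
# Eight DNA coding rules mapping 2-bit values 0..3 to bases (a common choice
# in the image/vector-data encryption literature; any fixed permutations work
# for the round trip demonstrated here).
RULES = [
    "ACGT", "AGCT", "CATG", "CTAG",
    "GATC", "GTAC", "TCGA", "TGCA",
]

def dna_encode(data, key):
    out = []
    for i, byte in enumerate(data):
        rule = RULES[key[i % len(key)] % 8]       # dynamic: rule varies per byte
        # split the byte into four 2-bit symbols, MSB first
        out.append("".join(rule[(byte >> s) & 3] for s in (6, 4, 2, 0)))
    return "".join(out)

def dna_decode(text, key):
    data = bytearray()
    for i in range(0, len(text), 4):
        rule = RULES[key[(i // 4) % len(key)] % 8]
        byte = 0
        for base in text[i:i + 4]:
            byte = (byte << 2) | rule.index(base)
        data.append(byte)
    return bytes(data)

coords = bytes([12, 200, 7])   # toy stand-in for quantized vector-map coordinates
key = [3, 141, 59, 26]         # toy stand-in for a hyperchaotic key stream
cipher = dna_encode(coords, key)
print(cipher, dna_decode(cipher, key) == coords)
```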
157. A detection method for the ridge beast based on improved YOLOv3 algorithm.
- Author
-
Hou, Miaole, Hao, Wuchen, Dong, Youqiang, and Ji, Yuhang
- Subjects
BUILDING repair, DATA mining, ALGORITHMS, FEATURE extraction, HERITAGE tourism, DECEPTION - Abstract
The ridge beast is a figure placed on the roof ridges of ancient Chinese buildings. It is not only decorative but also carries strict hierarchical meaning: the number and form of the ridge beasts placed on buildings of different ranks are strictly limited. Detection of ridge beast ornaments has important applications in fine-grained 3D reconstruction of ancient buildings, historical dating, and cultural and tourism services. To address the poor performance of traditional detection algorithms, caused by the high texture similarity and low discriminability of ridge beasts, this paper proposes an improved YOLOv3-based detection algorithm for ridge beast ornaments. In the backbone network, local features are aggregated through a summation layer embedded in depthwise separable convolutions, and pointwise convolution connects the channel information of the original and aggregated features, expanding the receptive field and learning more diverse features. The residual structure of the feature extraction network is built from these convolutions, improving the extraction of fine-grained ridge beast features and thus the detection accuracy. In the prediction head, the original linear structure is reconstructed and squeeze-and-excitation modules are introduced to model the channel relationships of the multi-scale feature maps, suppressing the response to interference signals and making the features more directive. Parallel 1 × 1 and 3 × 3 convolutions form a multi-size convolution structure that enhances the model's semantic information extraction and further improves detection. 
Experiments were conducted on the constructed ridge-beast dataset; the results show that the mAP of the improved algorithm reaches 86.48%, 3.05% higher than YOLOv3, while the model parameters are reduced by 70%. The method thus has better detection performance and can serve as a reference for the automated detection of ancient building components. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
158. Feature selection algorithm for usability engineering: a nature inspired approach.
- Author
-
Jain, Rajat, Joseph, Tania, Saxena, Anvita, Gupta, Deepak, Khanna, Ashish, Sagar, Kalpna, and Ahlawat, Anil K.
- Subjects
FEATURE selection, OPTIMIZATION algorithms, METAHEURISTIC algorithms, COMPUTER software quality control, ALGORITHMS, COMPUTER software development - Abstract
Software usability is usually discussed by researchers with reference to the hierarchical software usability model, and is an important aspect of user experience and software quality. Evaluation of software usability is thus an essential part of managing and regulating software, but a precise evaluation method has been difficult to establish. Many researchers have suggested large numbers of usability factors, each covering a different set of factors for increasing the user friendliness of software; selecting the correct determining features is therefore of paramount importance. This paper proposes a metaheuristic algorithm for selecting the most important features in a hierarchical software model, which is an exhaustive interpretation of a software's factors, attributes, and characteristics at different levels. The paper proposes a modified version of the grey wolf optimization algorithm (GWO), termed modified grey wolf optimization (MGWO), whose mechanism is based on the hunting behaviour of wolves in nature. The algorithm chooses a number of features which are then applied to software development life cycle models to find the best among them. The outcome is compared with the conventional grey wolf optimization algorithm (GWO), the modified binary bat algorithm (MBBAT), the modified whale optimization algorithm (MWOA), and modified moth flame optimization (MMFO). The results show that MGWO surpasses all the other optimizers in accuracy and selects fewer attributes: 8, compared with 9 for MMFO, 12 for MBBAT and 19 for MWOA. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
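For readers unfamiliar with GWO, the hunting mechanism the abstract refers to can be sketched on a toy continuous problem: the pack is steered toward the three best wolves (alpha, beta, delta) while an exploration coefficient decays. This is plain GWO, not the paper's modified binary MGWO; the population size, iteration count, and sphere objective are arbitrary choices for the sketch.

```python
import random

def gwo_minimize(f, dim, bounds, wolves=12, iters=60, seed=1):
    """Toy grey wolf optimization.  'a' decays from 2 to 0, shifting the
    search from exploration to exploitation; the three leaders stay fixed
    within an iteration and are re-elected by sorting each round."""
    rng = random.Random(seed)
    lo, hi = bounds
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=f)                       # re-elect alpha, beta, delta
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = 2 * (1 - t / iters)
        for w in pack[3:]:                     # followers move toward leaders
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(hi, max(lo, x / 3))
    return min(pack, key=f)

sphere = lambda x: sum(v * v for v in x)       # toy objective, optimum at 0
best = gwo_minimize(sphere, dim=5, bounds=(-10, 10))
print(round(sphere(best), 4))
```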
159. The potential of point-of-care diagnostics to optimise prehospital trauma triage: a systematic review of literature.
- Author
-
Stojek, Leonard, Bieler, Dan, Neubert, Anne, Ahnert, Tobias, and Imach, Sebastian
- Subjects
ONLINE information services, MEDICAL triage, MEDICAL information storage & retrieval systems, POINT-of-care testing, RESEARCH funding, DESCRIPTIVE statistics, WOUNDS & injuries, DATA analysis software, MEDLINE, ADVANCED trauma life support, EMERGENCY medicine, ALGORITHMS - Abstract
Purpose: In the prehospital care of potentially seriously injured patients, resource allocation adapted to injury severity (triage) is challenging. Insufficiently specific triage algorithms lead to unnecessary trauma team activation (over-triage), resulting in ineffective consumption of economic and human resources. A prehospital trauma triage algorithm must reliably identify a patient who is bleeding or suffering from significant brain injury. Supplementing the prehospital triage algorithm with point-of-care (POC) tools established in hospitals could increase the sensitivity of prehospital triage. Candidate POC tools are lactate measurement; sonography of the thorax, the abdomen and the vena cava; sonographic intracranial pressure measurement; and capnometry in the spontaneously breathing patient. The aim of this review was to assess the potential of selected instrument-based POC tools, determine their diagnostic cut-off values, and integrate these findings into a modified ABCDE-based triage algorithm. Methods: A systematic search of MEDLINE via PubMed, LIVIVO and Embase was performed for studies of the preclinical use of the selected POC tools in acute settings to identify critical cranial and peripheral bleeding and to recognize cerebral trauma sequelae. To determine the final cut-off values, the selected papers were assessed with the Newcastle–Ottawa scale for risk of bias and against various quality criteria, and were then classified as suitable or unsuitable. PROSPERO Registration: CRD 42022339193. Results: 267 papers were identified as potentially relevant and processed in full text. 61 papers were selected for the final evaluation, of which 13 were decisive for determining the cut-off values. The findings illustrate that preclinical use of point-of-care diagnostics is possible. These adjuncts can provide additional information about the expected clinical course of patients. Clinical outcomes such as mortality, need for emergency surgery and intensive care unit stay were taken into account, and a hypothetical cut-off value for trauma team activation could be determined for each adjunct: end-expiratory CO2 < 30 mmHg; thoracic and abdominal sonography: abnormality detected; lactate > 2 mmol/L; optic nerve diameter on sonography > 4.7 mm. Discussion: A preliminary version of a modified triage algorithm with hypothetical cut-off values for trauma team activation was created. However, further studies should be conducted to optimize the final cut-off values, to evaluate the practical application of the modified algorithm in terms of feasibility (e.g. duration of application, technique), and to assess the new algorithm's effect on over-triage. Limiting factors are the restrictions of the search and the heterogeneity between studies (e.g. varying measurement devices and techniques). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
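The hypothetical cut-off values reported in the abstract can be collected into a simple decision helper. The cut-offs themselves come from the abstract; the any-single-positive activation policy and the function's interface are assumptions of this sketch, not the review's validated algorithm.

```python
def trauma_team_activation(etco2_mmhg=None, lactate_mmol_l=None,
                           sonography_abnormal=None, optic_nerve_mm=None):
    """Apply the review's hypothetical POC cut-off values for trauma team
    activation.  None means the measurement was not obtained; any single
    positive adjunct triggers activation (an assumed policy)."""
    triggers = []
    if etco2_mmhg is not None and etco2_mmhg < 30:
        triggers.append("end-expiratory CO2 < 30 mmHg")
    if lactate_mmol_l is not None and lactate_mmol_l > 2:
        triggers.append("lactate > 2 mmol/L")
    if sonography_abnormal:
        triggers.append("abnormal thoracic/abdominal sonography")
    if optic_nerve_mm is not None and optic_nerve_mm > 4.7:
        triggers.append("optic nerve sheath diameter > 4.7 mm")
    return triggers

print(trauma_team_activation(etco2_mmhg=34, lactate_mmol_l=3.1))
# → ['lactate > 2 mmol/L']
```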
160. The object migration automata: its field, scope, applications, and future research challenges.
- Author
-
Oommen, B. John, Omslandseter, Rebekka Olsson, and Jiao, Lei
- Subjects
ARTIFICIAL intelligence, NP-hard problems, ROBOTS, ALGORITHMS, MACHINE theory, PARTITIONS (Mathematics) - Abstract
Partitioning, in and of itself, is an NP-hard problem. Prior to Artificial Intelligence (AI)-based solutions, it was solved in the 1970s by optimization-based strategies. AI-based solutions appeared in the 1980s in a pioneering way, using a Learning Automaton (LA)-motivated strategy known as the Object Migrating Automaton (OMA). Although the OMA and its derivatives have been used in numerous applications since then, the basic kernel has remained the same. Because the number of possible partitions can be combinatorially exponential and the underlying tasks are NP-hard, the most advanced OMA algorithms could, until recently, only solve problems involving equally sized groups. Due to our recent innovations cited in the body of this paper, the enhanced OMA now also handles unequally sized groups. Earlier, in Omslandseter (Pattern Anal Appl, 2023), we presented a comprehensive survey of the state-of-the-art enhancements of the best-known OMA. We believe that these results will be the benchmark for a few decades and will be very hard to beat. This is a companion paper, intended to augment the contents of Omslandseter (Pattern Anal Appl, 2023). In this paper, we first discuss the OMA's prior applications, its historical and current innovations, and the relevance of OMA-based algorithms to societal needs. We also provide well-specified guidelines that future researchers can use for unresolved tasks and for developing further advancements. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
161. SEFSD: an effective deployment algorithm for fog computing systems.
- Author
-
Chen, Huan, Chang, Wei-Yan, Chiu, Tai-Lin, Chiang, Ming-Chao, and Tsai, Chun-Wei
- Subjects
METAHEURISTIC algorithms, COMPUTER systems, ALGORITHMS, DATA transmission systems, QUALITY of service, PROBLEM solving - Abstract
Fog computing aims to mitigate data communication delay by deploying fog nodes that provide servers in the proximity of users and offload resource-hungry tasks that would otherwise be sent to distant cloud servers. In this paper, we propose an effective fog device deployment algorithm based on a new metaheuristic algorithm, search economics, to solve the optimization problem of deploying fog computing systems. The term "effective" in this paper means that the developed algorithm achieves better performance on metrics such as lower latency and less resource usage. Compared with conventional metaheuristic algorithms, the proposed algorithm is unique in that it first divides the solution space into a set of regions to increase the diversity of the search, and then allocates different computational resources to each region according to its potential. To verify its effectiveness, we compare the proposed algorithm with several classical fog computing deployment algorithms. The simulation results indicate that it provides lower network latency and higher quality of service than the other deployment algorithms evaluated in this study. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
162. Fast calibration stitching algorithm for underwater camera.
- Author
-
Wang, Zhanhua, Tang, Zhijie, Huang, Jingke, and Li, Jianda
- Subjects
UNDERWATER cameras, ALGORITHMS, IMAGE registration, COORDINATES - Abstract
The underwater environment is complex and changeable, and the larger the field of view of the underwater images collected by an ROV, the more information they contain. Effective methods for obtaining a large field of view include fish-eye lenses and image stitching; to obtain even larger fields of view, we combine the two and propose a stitching algorithm applicable to fish-eye lenses. The algorithm has two parts. The first part corrects the fish-eye images: building on the traditional chessboard correction method, we propose a new adaptive grey-level method that preserves more corner features and extracts checkerboard corners more accurately, further improving the correction result. The second part stitches the corrected images in real time using a fast stitching algorithm (FASTITCH): during stitching, the algorithm preserves the image feature points and the transposed matrix of the image match, computes from them the new coordinates of the original feature points in the stitched image, and uses these coordinates to match feature points in the next image. This saves the time needed to re-detect feature points in the stitched image and thereby accelerates stitching to real-time performance. Experiments show that the error of the new correction method is smaller and that, compared with traditional feature-point stitching algorithms, the proposed fast stitching algorithm shortens stitching time by about 20%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
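The coordinate-reuse step described for FASTITCH can be sketched generically: once the transform between two images is known, the original feature points are mapped into the stitched frame instead of being re-detected. The 3 × 3 homography below is a stand-in for whatever transform the matching step produced; the example uses a pure translation so the result is easy to check.

```python
def apply_homography(H, points):
    """Map (x, y) feature points through a 3x3 homography H into the stitched
    image's coordinate frame, so matches computed once can be reused instead
    of re-detecting features in the composite image."""
    out = []
    for x, y in points:
        u = H[0][0] * x + H[0][1] * y + H[0][2]
        v = H[1][0] * x + H[1][1] * y + H[1][2]
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append((u / w, v / w))   # perspective divide
    return out

# A pure-translation homography: shift features 100 px right, 20 px down.
H = [[1, 0, 100],
     [0, 1, 20],
     [0, 0, 1]]
print(apply_homography(H, [(10, 5), (42, 7)]))  # → [(110.0, 25.0), (142.0, 27.0)]
```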
163. Research on Cascading Fault Location of Chemical Material Networks Based on BFS-Time-Reversal Backpropagation Algorithm.
- Author
-
Wang, Zheng, Li, Huapeng, Liu, Ruijie, Hou, Jingmin, Dong, Ran, Hu, Yiyi, Jia, Xiaoping, and Wang, Fang
- Subjects
FAULT location (Engineering), ELECTRIC fault location, MAXIMUM likelihood statistics, ALGORITHMS, CHEMICAL models - Abstract
Studying the location of cascading failures in large, complex chemical material networks has great theoretical value and practical significance. Previous work mainly located faults by obtaining complete node information, maximum likelihood estimation, or distance centrality. These methods must determine the status of every node in the network, or operate on all nodes related to the observation node, so the localization process is complicated and costly and the results are not accurate enough. This paper proposes a positioning method based on a breadth-first search (BFS) and time-reversal backpropagation algorithm for locating cascading faults in chemical material networks. First, a cascading fault propagation model of the chemical material network is constructed according to complex network theory, and observation nodes are chosen by an observation node selection strategy. Second, BFS is used to search out the fault area; assumptions are then made about the propagation time delay, and the time-reversal backpropagation algorithm filters the nodes in the fault area to locate the cascading fault. Finally, the observation node selection strategy is varied, the positioning accuracy under each strategy is analysed, and the optimal strategy is selected for positioning with the proposed method. A case analysis shows that the method can effectively locate cascading faults in chemical material networks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
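The first stage of the pipeline, searching out a candidate fault area by BFS from an observation node, can be sketched as follows. The graph, the hop limit, and returning hop distances are illustrative assumptions; the time-reversal backpropagation filtering stage is not reproduced here.

```python
from collections import deque

def fault_region(graph, observer, max_hops):
    """Breadth-first search from an observation node: every node within
    max_hops is a candidate fault location, to be filtered afterwards
    (e.g., by a time-reversal backpropagation step)."""
    dist = {observer: 0}
    queue = deque([observer])
    while queue:
        node = queue.popleft()
        if dist[node] == max_hops:
            continue                      # stop expanding at the hop limit
        for nbr in graph.get(node, ()):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

# Toy material network as an adjacency list.
pipes = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(fault_region(pipes, "A", max_hops=2))  # → {'A': 0, 'B': 1, 'C': 1, 'D': 2}
```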
164. An image encryption algorithm based on pixel bit operation and nonlinear chaotic system.
- Author
-
Wang, Xingyuan and Chen, Shengnan
- Subjects
IMAGE encryption, NONLINEAR systems, ALGORITHMS, PIXELS, CHAOS theory - Abstract
This paper proposes a new one-dimensional chaotic map, the nonlinear coupled Sine-Tent-Logistic chaotic map (1DNCSTL). A series of tests shows that the map is random, sensitive to initial values, and suitable for image encryption. Based on this map, the article further proposes a pixel bit position scrambling and reorganization operation and a dynamic nonunique diffusion operation. Unlike traditional scrambling, which only changes pixel positions, the scrambling and reorganization operation changes pixel positions and pixel values at the same time. Unlike traditional diffusion with a single fixed formula, the dynamic nonunique diffusion operation varies the diffusion formula, which strengthens the security of the algorithm. Simulation results and various security performance analyses show that the proposed algorithm performs well; compared with other encryption schemes, it is better suited for image encryption. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
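The overall keystream-based structure of such chaotic image ciphers can be sketched as follows. The coupling below is a toy stand-in for the paper's 1DNCSTL map (whose exact formula the abstract does not give), and simple XOR diffusion replaces the paper's dynamic nonunique diffusion and bit-position scrambling.

```python
import math

def sine_tent_logistic(x, r):
    """Toy nonlinear coupling of sine, tent and logistic maps (an assumed
    formula, not the paper's 1DNCSTL).  Maps [0, 1) to [0, 1)."""
    tent = 2 * x if x < 0.5 else 2 * (1 - x)
    logistic = 4 * x * (1 - x)
    return (math.sin(math.pi * r * tent) + logistic) % 1.0

def keystream(seed, r, n):
    """Iterate the map and quantize each state to a byte."""
    x, out = seed, []
    for _ in range(n):
        x = sine_tent_logistic(x, r)
        out.append(int(x * 256) % 256)
    return out

def xor_encrypt(pixels, seed=0.3141, r=0.93):
    """XOR diffusion with the chaotic keystream; applying it twice decrypts."""
    ks = keystream(seed, r, len(pixels))
    return bytes(p ^ k for p, k in zip(pixels, ks))

img = bytes(range(16))                       # a toy 4x4 grayscale "image"
enc = xor_encrypt(img)
print(enc != img, xor_encrypt(enc) == img)   # → True True
```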
165. Preface to the Special Issue on the 17th Algorithms and Data Structures Symposium (WADS 2021).
- Author
-
He, Meng, Lubiw, Anna, and Salavatipour, Mohammad
- Subjects
DATA structures, ALGORITHMS, CONFERENCES & conventions, LEGISLATIVE committees, PLANAR graphs - Abstract
This special issue is dedicated to selected papers from those presented at the 17th Algorithms and Data Structures Symposium (WADS 2021), which was held fully online on August 9–11. We would like to thank the authors for contributing their manuscripts, the referees for thoroughly reviewing these articles, and the WADS 2021 program committee for helping with the selection of these articles. [Extracted from the article]
- Published
- 2023
- Full Text
- View/download PDF
166. AGV monocular vision localization algorithm based on Gaussian saliency heuristic.
- Author
-
Fu, Heng, Hu, Yakai, Zhao, Shuhua, Zhu, Jianxin, Liu, Benxue, and Yang, Zhen
- Subjects
FEATURE extraction, HEURISTIC, MONOCULAR vision, ALGORITHMS, AUTOMATED guided vehicle systems - Abstract
To address the poor detection accuracy and large parameter counts of target detection models in existing AGV monocular vision localization algorithms, this paper presents an AGV vision localization method based on a Gaussian saliency heuristic. The proposed method introduces a fast and accurate AGV visual detection network called GAGV-net. In GAGV-net, a Gaussian saliency feature extraction module is designed to enhance the network's feature extraction capability, reducing what the model must fit. To improve target detection accuracy, a joint multi-scale classification and detection head is designed at the stage where target boxes are regressed and classified; this head utilizes target features at different scales, further enhancing detection accuracy. Experimental results demonstrate a 12% improvement in detection accuracy and a 27.38 FPS increase in detection speed compared to existing detection methods. Moreover, the proposed network significantly reduces the model's size, making it much easier to deploy on AGVs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
167. Research on the characteristics of total-field data converted from aeromagnetic vertical gradient data based on a continuation conversion filtering algorithm.
- Author
-
Guo, Hua, Xu, Xi, Han, Song, Zheng, Qiang, and Liu, Haojun
- Subjects
RANDOM fields, DATA conversion, AREA measurement, ALGORITHMS, MAGNETIZATION - Abstract
Compared with aeromagnetic total-field data, aeromagnetic vertical gradient field data contain less low-frequency information. In this paper, a continuation conversion filtering algorithm is proposed to filter out part of the low-frequency information of the aeromagnetic total-field data so that these data can be better compared with the total-field data obtained from aeromagnetic gradient data conversion. We discuss the feasibility of single aeromagnetic vertical gradient measurement in areas where it is inconvenient to erect base stations. We design a simple model and a complex model with a background field and random noise to analyze the conversion effect. The model analysis shows that the effect of applying the algorithm depends heavily on the selection of upward continuation height. The magnetization intensity of the background field also affects the selection of continuation height. When the magnetization intensity of the background field is weak, the continuation height chosen is the same as the buried depth of the background field. If the magnetization intensity of the background field is strong, then the higher the continuation height, the better the effect will be. The conclusion of the model analysis is applied to the analysis of the measured aeromagnetic data. In addition, we can conclude that the effect on total-field data of conversion by the continuation conversion filtering algorithm is better than that of conversion from the aeromagnetic vertical gradient data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
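As background for the continuation step, the standard frequency-domain upward continuation operator for a potential field is a textbook result (the paper's specific filter design is not reproduced here):

```latex
\tilde{T}_h(k_x, k_y) \;=\; \tilde{T}_0(k_x, k_y)\, e^{-h\sqrt{k_x^2 + k_y^2}}, \qquad h > 0,
```

where $\tilde{T}_0$ is the Fourier spectrum of the total field at the original level and $h$ is the continuation height. The exponential attenuates high wavenumbers increasingly with $h$; conversely, the difference between the field and its upward continuation, with transfer function $1 - e^{-h\sqrt{k_x^2+k_y^2}}$, suppresses low wavenumbers, which is the standard way a filter of this family can mimic the reduced low-frequency content of vertical gradient data (a general observation, not the paper's exact construction).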
168. A grayscale image enhancement algorithm based on dense residual and attention mechanism.
- Author
-
Ye, Meng, Yang, Shi'en, He, Yujun, and Peng, Zhangjun
- Subjects
DEEP learning, IMAGE intensifiers, GRAYSCALE model, FEATURE extraction, SIGNAL-to-noise ratio, ALGORITHMS - Abstract
Deep learning shows great potential in low-light image enhancement, as it can improve image brightness and contrast while keeping the image natural. However, due to the lack of manually extracted prior knowledge or excessive amplification of noise, existing methods produce enhanced images of poor quality. To address these challenges, this paper proposes a dual-branch grayscale image enhancement network based on dense residuals and an attention mechanism. A low-light grayscale image is used as input. Firstly, features fusing deep and shallow information are extracted through a dense residual convolution branch; secondly, texture features are extracted through a U-Net branch combined with an attention mechanism; the extracted features are then integrated, and finally the luminance is adjusted by a brightness adjustment module to output an enhanced grayscale image. In addition, a joint loss function is designed to measure the network training loss in terms of brightness, texture, contrast and noise. Extensive quantitative and qualitative experiments on the LOL and VE-LOL datasets show that the proposed method improves the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) by 19.65–59.76% and 5.61–85.53%, respectively, compared with EnlightenGAN, KinD++, RUAS, LLFlow, etc. The proposed method outperforms these well-known methods, thanks to the deep feature extraction and fusion capability of the dense residual convolutional network and the texture extraction capability of the U-Net branch combined with the attention mechanism. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
169. Algorithms Don’t Have A Future: On the Relation of Judgement and Calculation.
- Author
-
Stader, Daniel
- Abstract
This paper is about the opposition of judgement and calculation. This opposition has been a traditional anchor of critiques concerned with the rise of AI decision making over human judgement. Contrary to these approaches, it is argued that human judgement is not and cannot be replaced by calculation; rather, it is human judgement that contextualises computational structures and gives them meaning and purpose. The article focuses on the epistemic structure of algorithms and artificial neural networks, finding that they always depend on human judgement to be related to real-life objects or purposes. By introducing the philosophical concept of judgement, it becomes clear that judgement's capacity to provide meaning and purposiveness is based on the temporality of human life and the ambiguity of language, which quantitative processes lack. A juxtaposition shows that calculations and clustering can be used and referred to in more or less prejudiced and reflective, as well as opaque and transparent, ways, but thereby always depend on human judgement. The paper asserts that the transparency of AI is necessary for its autonomous use. This transparency requires making explicit the judgements that constitute these computational structures, thereby creating an awareness of the conditionality of such epistemic entities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
170. A combinatorial algorithm for computing the entire sequence of the maximum degree of minors of a generic partitioned polynomial matrix with 2×2 submatrices.
- Author
-
Iwamasa, Yuni
- Subjects
POLYNOMIALS ,MINORS ,MATRICES (Mathematics) ,ALGORITHMS ,BIPARTITE graphs ,INTEGERS - Abstract
In this paper, we consider the problem of computing the entire sequence of the maximum degree of minors of a block-structured symbolic matrix (a generic partitioned polynomial matrix) A = (A_{αβ} x_{αβ} t^{d_{αβ}}), where A_{αβ} is a 2 × 2 matrix over a field F, x_{αβ} is an indeterminate, and d_{αβ} is an integer for α = 1, 2, ..., μ and β = 1, 2, ..., ν, and t is an additional indeterminate. This problem can be viewed as an algebraic generalization of the maximum-weight bipartite matching problem. The main result of this paper is a combinatorial algorithm for computing the entire sequence of the maximum degree of minors of a (2 × 2)-type generic partitioned polynomial matrix of size 2μ × 2ν. We also present a minimax theorem, which can be used as a good characterization (NP ∩ co-NP characterization) for the computation of the maximum degree of minors of order k. Our results generalize the classical primal-dual algorithm (the Hungarian method) and minimax formula (Egerváry's theorem) for the maximum-weight bipartite matching problem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
171. Automatic detection method for tobacco beetles combining multi-scale global residual feature pyramid network and dual-path deformable attention.
- Author
-
Chen, Yuling, Li, Xiaoxia, Lv, Nianzu, He, Zhenxiang, and Wu, Bin
- Subjects
BEETLES ,LOCALIZATION (Mathematics) ,TOBACCO ,PIXELS ,PESTS ,NOISE ,ALGORITHMS - Abstract
To address the difficulty of identifying the storage pest tobacco beetle in images with few object pixels and considerable noise, and hence with little information and few identifiable features, this paper proposes an automatic tobacco beetle detection method based on a Multi-scale Global residual Feature Pyramid Network and Dual-path Deformable Attention (MGrFPN-DDrGAM). First, a Multi-scale Global residual Feature Pyramid Network (MGrFPN) is constructed to obtain rich high-level semantic features and more complete low-level feature information, reducing missed detections. Then, a Dual-path Deformable receptive field Guided Attention Module (DDrGAM) is designed to establish long-range channel dependence, guide the effective fusion of features and improve the localization accuracy of tobacco beetles by fitting their spatial geometric deformation features and capturing the spatial information of feature maps at different scales, enriching the feature information in both the channel and spatial dimensions. Finally, to simulate real scenes, a multi-scene tobacco beetle dataset is created, comprising 28,080 images with manually labeled tobacco beetle objects. The experimental results show that, under the Faster R-CNN framework, the detection precision and recall of this method reach 91.4% and 98.4% at an intersection over union (IoU) of 0.5. At an IoU of 0.7, the detection precision is improved by 32.9% and 6.9% over Faster R-CNN and FPN, respectively. The proposed method is superior to current mainstream methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
172. Application of the QDST algorithm for the Schrödinger particle simulation in the infinite potential well.
- Author
-
Ostrowski, Marcin
- Subjects
POTENTIAL well ,ALGORITHMS ,QUANTUM computers ,SCHRODINGER equation - Abstract
This paper examines whether a quantum computer can efficiently simulate the time evolution of the Schrödinger particle in a one-dimensional infinite potential well. In order to solve the Schrödinger equation in the quantum register, an algorithm based on the Quantum Discrete Sine Transform (QDST) is applied. The paper compares the results obtained in this way with the results given by the previous method (based on the QFT algorithm). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
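The classical counterpart of this QDST-based simulation can be sketched directly: expand the sampled wavefunction in the sine eigenbasis of the well, advance each mode by its phase, and transform back. This is a hedged classical reference only (the quantum-register circuit is not reproduced), assuming natural units ħ = m = 1, a well of unit length, and the unitary DST normalisation, under which the transform matrix is symmetric and self-inverse.

```python
import cmath
import math

def dst(vec):
    """Unitary discrete sine transform (naive O(N^2)).  Its matrix is
    symmetric and orthogonal, so applying it twice gives the identity."""
    n = len(vec)
    scale = math.sqrt(2.0 / (n + 1))
    return [scale * sum(vec[j] * math.sin(math.pi * (k + 1) * (j + 1) / (n + 1))
                        for j in range(n))
            for k in range(n)]

def evolve_in_well(psi0, t):
    """Evolve a sampled wavefunction in a unit-length infinite well:
    transform to the sine eigenbasis, apply the phase exp(-i E_k t)
    with E_k = (k * pi)^2 / 2 (hbar = m = 1), and transform back."""
    coeffs = dst(psi0)
    coeffs = [c * cmath.exp(-1j * ((k + 1) * math.pi) ** 2 * t / 2)
              for k, c in enumerate(coeffs)]
    return dst(coeffs)
```

Because the transform is unitary and the evolution only rotates phases, the norm of the state is conserved, which is the property a faithful quantum-register implementation must preserve as well.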
173. Offline and online task allocation algorithms for multiple UAVs in wireless sensor networks.
- Author
-
Ye, Liang, Yang, Yu, Meng, Weixiao, Wu, Xuanli, Li, Xiaoshuai, and Zhu, Rangang
- Subjects
WIRELESS sensor networks ,ONLINE algorithms ,ALGORITHMS ,DISASTER relief - Abstract
In recent years, UAV techniques have developed rapidly, and UAVs are becoming increasingly popular in both civilian and military fields. An important application of UAVs is rescue and disaster relief. In post-earthquake assessment scenarios that are difficult or dangerous for humans to reach, UAVs and sensors can form a wireless sensor network and collect environmental information. In such scenarios, task allocation algorithms are essential for UAVs to collect data efficiently. This paper first proposes an improved immune multi-agent algorithm for the offline task allocation stage; the algorithm provides higher accuracy and better convergence by improving the optimization operation. It then proposes an improved adaptive discrete cuckoo algorithm for the online task reallocation stage: by introducing an adaptive step-size transformation and an appropriate local optimization operator, convergence is accelerated, making the algorithm suitable for real-time online task reallocation. Simulation results demonstrate the effectiveness of the proposed task allocation algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
174. Adaptive guaranteed lower eigenvalue bounds with optimal convergence rates.
- Author
-
Carstensen, Carsten and Puttkammer, Sophie
- Subjects
GENERALIZATION ,AXIOMS ,A priori ,ARGUMENT ,ALGORITHMS - Abstract
Guaranteed lower Dirichlet eigenvalue bounds (GLB) can be computed for the m-th Laplace operator with a recently introduced extra-stabilized nonconforming Crouzeix–Raviart (m = 1) or Morley (m = 2) finite element eigensolver. Striking numerical evidence for the superiority of a new adaptive eigensolver motivates the convergence analysis in this paper, with a proof of optimal convergence rates of the GLB towards a simple eigenvalue. The proof is based on (a generalization of) known abstract arguments known as the axioms of adaptivity. Beyond the known a priori convergence rates, a medius analysis is developed in this paper for the proof of best-approximation results. This, and subordinate L2 error estimates for locally refined triangulations, appear of independent interest. The analysis of optimal convergence rates of an adaptive mesh-refining algorithm is performed in 3D and highlights a new version of discrete reliability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
175. Single-tag and multi-tag RFID data cleaning approach in edge computing.
- Author
-
Li, Chunlin, Jiang, Kun, Li, Xinyong, Zhang, Libin, and Luo, Youlong
- Subjects
EDGE computing ,RADIO frequency identification systems ,STATISTICAL smoothing ,MARKOV processes ,SMOOTHING (Numerical analysis) ,DATA scrubbing ,JUDGMENT (Psychology) ,TAGS (Metadata) ,ALGORITHMS - Abstract
Because the performance of Radio Frequency Identification (RFID) devices is susceptible to the surrounding environment, the raw data collected by RFID devices are uncertain; missed readings and redundant readings are the main sources of this uncertainty, which seriously affects the quality of upper-layer RFID applications. Therefore, to resolve this uncertainty, the raw data must be cleaned. To address the shortcomings of tag-dynamics detection in the traditional data cleaning algorithm Statistical Smoothing for Unreliable RFID data (SMURF), an adaptive sliding-window data cleaning algorithm for single RFID tags is proposed. The method takes into account the influence of tag speed on the tag-integrity judgment, divides the window into sliding sub-windows to accurately detect tag changes, and then reasonably adjusts the sliding-window size. In addition, considering that the average read rate of tags is affected by collisions between multiple tags, an RFID multi-tag cleaning method based on twice-tag number estimation is proposed. The method accurately estimates the number of tags by the twice-tag estimation method, controls the read period by a Markov chain, and reduces multi-tag collisions by an unequal time-slot optimization method. Experimental results show that the proposed methods form a complete set of RFID data stream cleaning algorithms that effectively reduces uncertain data and improves data accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
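The window-based smoothing that SMURF-style cleaning adapts around can be sketched for a single tag. This is a simplified illustration, not the paper's adaptive method: it uses a fixed completeness window derived from an assumed per-epoch read probability `p_avg` and target miss probability `delta`, and omits the speed-aware sub-window logic described above.

```python
import math

def completeness_window(p_avg, delta=0.05):
    """Smallest window w (in epochs) such that the probability of missing
    a present tag in all w epochs, (1 - p_avg)**w, is at most delta."""
    return max(1, math.ceil(math.log(delta) / math.log(1.0 - p_avg)))

def smooth_presence(readings, p_avg, delta=0.05):
    """readings: per-epoch booleans for one tag (raw reads).  The tag is
    reported present in epoch i if it was read anywhere in the last w epochs."""
    w = completeness_window(p_avg, delta)
    return [any(readings[max(0, i - w + 1): i + 1])
            for i in range(len(readings))]
```

The tension the paper addresses is visible even in this sketch: a larger window fills in missed reads but delays the detection of a tag that has actually left the reader's range, which is why the window size must adapt to tag dynamics.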
176. Improved adaptive-phase fuzzy high utility pattern mining algorithm based on tree-list structure for intelligent decision systems.
- Author
-
Chen, Jing, Liu, Aijun, Zhang, Hongjun, Yang, Shengyi, Zheng, Hui, Zhou, Ning, and Li, Peng
- Subjects
ARTIFICIAL intelligence ,SMART structures ,ALGORITHMS ,DATA mining ,BIG data - Abstract
With the rapid development of AI and big-data mining technologies, computerized medical decision-making has become increasingly prominent. The aim of high-utility pattern mining (HUPM) is to discover meaningful patterns in medical databases that maximize utility from the perspective of diagnosis. However, HUPM pays little attention to the interpretability and explainability of these patterns in medical decision-making scenarios. This paper proposes a novel algorithm, Improved Fuzzy High-Utility Pattern Mining (IF-HUPM), to address this problem. First, a fuzzy preprocessing method is applied to divide a quantitative medical data set into fuzzy intervals, which enhances the fuzziness and interpretability of the data. Next, in the mining process of IF-HUPM, both fuzzy tree and list structures are employed to calculate fuzzy high-utility values. By combining the characteristics of one-stage and two-stage HUPM algorithms, an adaptive-phase fuzzy HUPM hybrid framework is proposed. The experimental results demonstrate that the proposed IF-HUPM algorithm enhances both accuracy and efficiency, and the mining process requires less time and space on average. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
177. Task recommendation based on user preferences and user-task matching in mobile crowdsensing.
- Author
-
Li, Xiaolin, Zhang, Lichen, Zhou, Meng, and Bian, Kexin
- Subjects
CROWDSENSING ,ALGORITHMS - Abstract
In mobile crowdsensing, the sensing platform recruits users to complete large-scale sensing tasks cooperatively. To guarantee the quality of sensing tasks, the platform needs to recommend suitable tasks to users. Existing task recommendation methods typically focus on unilateral factors, such as user preferences or task quality, leading to a low platform utility or a low task acceptance rate, respectively. To solve this issue, this paper proposes a task recommendation method that takes both user preferences and user-task matching into consideration. First, we apply the Deep Interest Network (DIN) in the context of mobile crowdsensing to recommend tasks according to user preferences. Second, the concept of user-task matching is introduced, in which both task difficulty and user reliability are taken into account. Finally, we propose task recommendation algorithms and conduct extensive experiments on a real dataset. The experimental results show that the proposed method not only improves the utility of the platform significantly but also slightly improves the recommendation accuracy for longer recommendation lists. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
178. Discovering frequent parallel episodes in complex event sequences by counting distinct occurrences.
- Author
-
Ouarem, Oualid, Nouioua, Farid, and Fournier-Viger, Philippe
- Subjects
COUNTING ,ALGORITHMS ,DECISION making ,DEFINITIONS - Abstract
Event sequences are a common type of data. Several episode mining algorithms have been developed to find episodes (subsequences of events) that appear frequently in an event sequence, with the aim of discovering useful knowledge for decision-making and prediction. However, most of these algorithms can only process simple event sequences (where at most one event occurs at each timestamp). In contrast, in many real-life applications, multiple events may occur at the same timestamp, resulting in complex event sequences. Moreover, numerous episode mining algorithms overestimate the frequency of episodes by counting the same events multiple times. As a solution, some algorithms have been designed to count only non-overlapping occurrences. Yet, it can be argued that this definition is too strict and discards many important occurrences. To address these limitations, this paper presents an algorithm named EMDO (Episode Mining under Distinct Occurrences) to find frequent episodes in a complex sequence by counting distinct occurrences. The proposed concept of distinct occurrences ensures that no event is counted more than once but allows distinct occurrences to overlap. A second algorithm, called EMDO-P, is also presented to derive strong episode rules from the episodes found by EMDO. To the best of our knowledge, this is the first study on mining frequent episodes with a frequency definition based on distinct occurrences. The experimental results confirm that the proposed algorithms are efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
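The distinct-occurrence counting idea can be illustrated with a small sketch. This is a simplified toy, not the paper's EMDO: it handles a serial episode over a plain event list (EMDO targets parallel episodes in complex sequences), matching greedily left to right so that no event instance is reused while the time spans of different occurrences may still interleave.

```python
def count_distinct_occurrences(events, episode):
    """Count occurrences of `episode` (an ordered tuple of event types) in
    `events`, never reusing an event instance across occurrences, while
    still allowing the spans of distinct occurrences to overlap."""
    used = [False] * len(events)
    count = 0
    while True:
        pos, picks = 0, []
        # one left-to-right pass builds the leftmost occurrence from unused events
        for k, e in enumerate(events):
            if not used[k] and pos < len(episode) and e == episode[pos]:
                picks.append(k)
                pos += 1
        if pos < len(episode):          # no complete occurrence left
            return count
        for k in picks:                 # consume the matched event instances
            used[k] = True
        count += 1
```

On "AABB" with episode (A, B) this finds two occurrences, (A@0, B@2) and (A@1, B@3), whose spans interleave; a strict non-overlapping definition would also find two here, but on "ABAB" versus "ABB" the two definitions diverge.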
179. An energy-efficient and deadline-aware workflow scheduling algorithm in the fog and cloud environment.
- Author
-
Khaledian, Navid, Khamforoosh, Keyhan, Akraminejad, Reza, Abualigah, Laith, and Javaheri, Danial
- Subjects
PARTICLE swarm optimization ,WORKFLOW ,SIMULATED annealing ,WORKFLOW management systems ,ALGORITHMS ,SCHEDULING ,METAHEURISTIC algorithms ,FOG - Abstract
The Internet of Things (IoT) is constantly evolving. The variety of IoT applications has created new demands from users and competition between computing service providers. On the one hand, an IoT application may exhibit several important criteria, such as deadline and runtime, simultaneously; on the other hand, it is confronted with resource limitations and high energy consumption. This makes the choice of computing environment and the scheduling of tasks a fundamental challenge. To resolve the issue, IoT applications are modeled in this paper as workflows composed of series of interdependent tasks. Tasks in the same workflow (at the same level) are subject to priorities and deadlines for execution, making the problem far more complex and closer to the real world. A hybrid Particle Swarm Optimization and Simulated Annealing algorithm (PSO-SA) is used for prioritizing tasks and improving the fitness function. The proposed method manages task allocation and optimizes energy consumption and makespan across the fog-cloud nodes. The simulation results indicate that PSO-SA improved energy consumption and makespan by 5% and 9%, respectively, on average compared with the baseline algorithm (IKH-EFT). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
180. Autofocus algorithm using optimized Laplace evaluation function and enhanced mountain climbing search algorithm.
- Author
-
Jia, Dongyao, Zhang, Chuanwang, Wu, Nengkai, Zhou, Jialin, and Guo, Zhigang
- Subjects
SEARCH algorithms ,MOUNTAINEERING ,LAPLACIAN operator ,ALGORITHMS ,IMAGING systems ,IMAGE recognition (Computer vision) - Abstract
In the field of digital imaging systems, autofocus plays an increasingly vital role as a key technology. Autofocus is challenging because of noisy backgrounds and slow focusing speeds. This paper presents a new focusing algorithm based on an improved Laplacian operator and a mountain-climbing search. Since a focused image exhibits greater gray-scale variation than an unfocused one, an image-sharpness evaluation function combining local variance with the Laplacian operator is proposed. Borrowing from the advantages of two-stage recognition in deep-learning image recognition, a two-stage search algorithm based on mountain climbing is designed to better fit the focusing curve near the extremum of the evaluation function; the improved mountain-climbing search is divided into rough focusing and fine focusing. Rough focusing is used to determine a small focus area, and fine focusing based on function approximation then greatly improves the efficiency of finding the focus position. The experimental results indicate that the proposed algorithm is superior to traditional algorithms in both time and accuracy, reducing autofocus time by 76%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
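The rough-then-fine structure of such a two-stage focus search can be sketched as follows. This is a hedged illustration, not the paper's algorithm: the sharpness metric is an abstract callable (the paper's local-variance-plus-Laplacian function is not reproduced), the fine stage is a step-1 hill climb rather than function approximation, and a near-unimodal focus curve is assumed.

```python
def two_stage_autofocus(sharpness, lo, hi, coarse_steps=8):
    """Stage 1 (rough focusing): scan the travel range [lo, hi] with a
    large step and keep the best lens position.  Stage 2 (fine focusing):
    hill-climb with step 1 inside the bracket around the stage-1 winner."""
    step = max(1, (hi - lo) // coarse_steps)
    best = max(range(lo, hi + 1, step), key=sharpness)
    a, b = max(lo, best - step), min(hi, best + step)  # fine-search bracket
    pos = best
    while True:
        # move to whichever neighbour improves the sharpness metric
        nxt = max((p for p in (pos - 1, pos + 1) if a <= p <= b),
                  key=sharpness, default=pos)
        if sharpness(nxt) <= sharpness(pos):
            return pos
        pos = nxt
```

With a coarse scan of about `coarse_steps` metric evaluations plus a short local climb, far fewer lens positions are probed than in an exhaustive step-1 sweep, which is the source of the speed-up such methods report.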
181. An automatic algorithm for software vulnerability classification based on CNN and GRU.
- Author
-
Wang, Qian, Li, Yazhou, Wang, Yan, and Ren, Jiadong
- Subjects
RECURRENT neural networks ,CONVOLUTIONAL neural networks ,SEMANTICS ,AUTOMATIC classification ,ALGORITHMS - Abstract
To improve the efficiency of software vulnerability classification, reduce the risk of systems being attacked and damaged, and save vulnerability repair costs, this paper proposes an automatic algorithm for software vulnerability classification based on a convolutional neural network (CNN) and a gated recurrent unit network (GRU), called SVC-CG. It fuses the CNN and GRU models according to their respective strengths: the CNN is good at extracting local vector features of vulnerability text, while the GRU is good at extracting global features related to the text's context. Merging the features extracted by these complementary models represents semantic and grammatical information more accurately. First, the Skip-gram language model from Word2Vec is used to train word vectors, mapping the words of each vulnerability text into a space of limited dimension to represent semantic information. Then the CNN extracts local features of the text vectors, and the GRU extracts global context-related features. Combining the two complementary models yields the SVC-CG neural network algorithm, which realizes automatic classification of vulnerabilities. The experiments use vulnerability data from the National Vulnerability Database (NVD) to train and evaluate SVC-CG. Experimental comparison and analysis show that the proposed SVC-CG algorithm performs well on macro recall, macro precision and macro F1-score. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
182. Algorithm for Transmission Parameters Selection for Sporadic URLLC Traffic in Uplink.
- Author
-
Shashin, A. E., Belogaev, A. A., Krasilov, A. N., and Khorov, E. M.
- Subjects
MODULATION coding ,ALGORITHMS ,5G networks - Abstract
Ultra-Reliable Low-Latency Communications (URLLC) is a key service for fifth-generation (5G) cellular systems. Typical requirements for this service are transmission reliability above 99.999% and latency below 1 ms. The paper considers a scenario with sporadic URLLC traffic in the uplink. To satisfy the strict latency requirements, user equipments (UEs) use the grant-free channel access method, in which the base station allocates time-frequency resources and selects transmission parameters (i.e., the modulation and coding scheme and the number of transmission attempts) in advance for each UE. To achieve high resource utilization with sporadic traffic, the base station allocates shared channel resources to several UEs, which can lead to interference between the transmissions of different UEs. The paper proposes an algorithm for selecting transmission parameters for each UE that takes into account both its channel conditions and the interference caused by the transmissions of other UEs. Numerical results obtained with NS-3 show that the proposed algorithm increases network capacity by up to six times with respect to algorithms presented in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
183. A novel multi-objective optimization of 3D printing adaptive layering algorithm based on improved NSGA-II and fuzzy set theory.
- Author
-
Wang, Xiaoqi and Cao, Jianfu
- Subjects
THREE-dimensional printing ,SET theory ,PARETO optimum ,ALGORITHMS ,ERROR rates ,FUZZY sets ,EVOLUTIONARY algorithms - Abstract
Uniform equal-thickness layering is widely used in 3D printing but cannot balance printing quality against printing efficiency. In this paper, a new adaptive layering algorithm based on multi-objective optimization is proposed for this problem. The algorithm comprehensively considers the surface features of the model and the slope and curvature of the contour, and establishes a multi-objective optimization model with print quality, print time, and feature constraints. The Pareto-optimal solution set is computed by an improved non-dominated sorting genetic algorithm II (NSGA-II), and the Pareto-optimal solution that meets given printing requirements is selected by a fuzzy weighted-membership ranking method. Comparative experiments show that the proposed method reduces the volume error rate by 40.9% and the printing time by 33.3% compared with uniform layering, effectively improving printing quality and efficiency. Compared with existing adaptive layering algorithms, it also offers good overall performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
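The non-dominated filter at the heart of NSGA-II-style selection can be sketched in a few lines. This is a minimal illustration only, assuming a minimisation convention for all objectives; NSGA-II's fast sorting, crowding distance, and the paper's fuzzy weighted-membership ranking are all omitted.

```python
def dominates(a, b):
    """Point a dominates b (minimisation): a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep the points that no other point dominates (the first front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For print-time/volume-error pairs such as `[(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]` the filter keeps the trade-off points `(1, 5)`, `(2, 3)` and `(4, 1)`; a downstream ranking step (fuzzy membership in the paper's case) then picks one solution per printing requirement.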
184. Exploring heterogeneous information networks and random walk with restart for academic search.
- Author
-
Chiang, Meng-Fen, Liou, Jiun-Jiue, Wang, Jen-Liang, Peng, Wen-Chih, and Shan, Man-Kwan
- Subjects
INFORMATION networks ,RANDOM walks ,QUERYING (Computer science) ,INFORMATION storage & retrieval systems ,GRAPH theory ,ELECTRONIC information resource searching ,ALGORITHMS - Abstract
In this paper, we explore heterogeneous information networks, in which each vertex represents one entity and the edges reflect linkage relationships. Heterogeneous information networks contain vertices of several entity types, such as papers, authors and terms, and hence can fully reflect the multiple linkage relationships among different entities. Such a heterogeneous information network is similar to a mixed media graph (MMG). By representing a bibliographic dataset as an MMG, the performance obtained when searching for relevant entities (e.g., papers) can be improved. Furthermore, our academic search supports multiple-entity search, where a variety of entity results, such as relevant papers, authors and conferences, are provided via a one-time query. Explicitly, given a bibliographic dataset, we propose Global-MMG, in which a global heterogeneous information network is built. When a user submits a query keyword, we perform a random walk with restart (RWR) to retrieve papers or other types of entity objects. To reduce the query response time, the algorithm Net-MMG (NetClus-based MMG) is developed. Net-MMG first divides a heterogeneous information network into a collection of sub-networks and then performs an RWR on a set of selected relevant sub-networks. We implemented our academic search and conducted extensive experiments using the ACM Digital Library. The experimental results show that by exploring heterogeneous information networks and RWR, both Global-MMG and Net-MMG achieve better search quality than existing academic search services. In addition, Net-MMG has a shorter query response time while still guaranteeing good search quality. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
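The RWR ranking primitive the abstract relies on can be sketched by power iteration on a toy graph. This is a generic illustration of random walk with restart, not the Global-MMG/Net-MMG systems: the dict-of-lists graph and the dangling-vertex handling are assumptions, and the restart probability 0.15 is just a conventional choice.

```python
def rwr_scores(adj, seed, restart=0.15, iters=100):
    """Random walk with restart: at each step the walker follows an
    outgoing edge with probability 1 - restart or jumps back to the
    seed vertex.  `adj` maps each vertex to its list of successors."""
    p = {v: float(v == seed) for v in adj}      # all mass starts at the seed
    for _ in range(iters):
        nxt = {v: 0.0 for v in adj}
        for u, outs in adj.items():
            if outs:
                share = (1.0 - restart) * p[u] / len(outs)
                for v in outs:
                    nxt[v] += share
            else:                                # dangling vertex: restart fully
                nxt[seed] += (1.0 - restart) * p[u]
        nxt[seed] += restart                     # restart mass (total mass is 1)
        p = nxt
    return p
```

In a bibliographic network, vertices would be papers, authors, terms and conferences; the converged scores give a seed-personalized relevance ranking across all entity types at once, which is what enables the multiple-entity search described above.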
185. Proposal of a lightweight differential power analysis countermeasure method on elliptic curves for low-cost devices.
- Author
-
Gabsi, Souhir, Kortli, Yassin, Beroulle, Vincent, Kieffer, Yann, and Hamdi, Belgacem
- Subjects
ELLIPTIC curve cryptography ,ELLIPTIC curves ,SMART cards ,MULTIPLICATION ,ALGORITHMS - Abstract
Elliptic curves are used in several security applications, including Radio Frequency Identification (RFID) devices, smart cards, bankcards, etc. To guarantee effective security in such applications, these cryptographic systems require strong resistance to various types of physical attack. Differential Power Analysis (DPA) attacks are considered among the most efficient attacks against scalar multiplication algorithms. In this paper, we propose a countermeasure against DPA attacks for a scalar multiplication algorithm that is already secure against Simple Power Analysis (SPA) and safe-error attacks. Our proposal is intended for Elliptic Curve Cryptosystem (ECC) algorithms dedicated to low-cost applications. We first introduce the different types of side-channel attacks that ECC-based cryptographic algorithms can suffer, as well as the countermeasures existing in the literature. We then present an optimized hardware implementation of the scalar multiplication algorithm that is most effective against SPA and safe-error attacks. Finally, we present our proposed DPA countermeasure and its effectiveness against other extensions of DPA attacks. Our method is similar to the Basic Random Initial Point (BRIP) method, except that the latter is only applicable to the left-to-right algorithm. The proposed method is based on randomizing the data processed during the scalar multiplication and prevents vulnerability to the Zero-value Point Attack (ZPA), the Refined Power Analysis (RPA) attack and the doubling attack. In the last part of the paper, we present a comparative analysis of computational cost between our proposed method and other countermeasure algorithms in the literature, such as the Montgomery ladder, the BRIP algorithm, the left-to-right algorithm and the Co-Z Montgomery-ladder algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
186. A comparative study of energy routing algorithms to optimize energy transmission in energy internet.
- Author
-
Hebal, Sara, Mechta, Djamila, Harous, Saad, and Louail, Lemia
- Subjects
RENEWABLE energy sources ,ROUTING algorithms ,ENERGY industries ,FOSSIL fuels ,ALGORITHMS - Abstract
The growing depletion of fossil fuels has led to the use of distributed renewable energy sources. This shift has altered the grid structure from centralized to distributed, where energy flows from multiple sources through multiple paths, and has produced a more competitive and dynamic energy market, posing new problems for power system management. To tackle the issue of effectively utilizing renewable energy sources, the energy internet (EI) was developed, in which devices are connected by energy routers. Efficiently transmitting energy within the EI has therefore become a prominent research topic. Despite the existence of algorithms and reviews on energy routing in the literature, a thorough and comprehensive comparison of the existing algorithms is still lacking. This paper classifies current energy routers and discusses and categorizes energy routing algorithms based on their methods. Additionally, the paper conducts extensive simulations to compare several energy routing algorithms in detail. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
187. Extension of the Directed Search Domain algorithm for multi-objective optimization to higher dimensions.
- Author
-
Yu, Boxi and Utyuzhnikov, Sergey
- Subjects
SEARCH algorithms ,ALGORITHMS - Abstract
This paper addresses the problem of generating an evenly distributed set of Pareto solutions. It appears in real-life applications related to multi-objective optimization when it is important to represent the entire Pareto front at minimal cost. Only a few algorithms are able to tackle this problem in a general formulation. The Directed Search Domain (DSD) algorithm has proved to be efficient and quite universal, and has been applied successfully to several challenging test cases. In this paper, the DSD approach is for the first time systematically extended and applied to problems of higher dimension. The modified algorithm has no formal limitation on the number of objective functions, which is important for practical applications. The efficacy of the algorithm is demonstrated on a number of test cases. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
188. Successive interference cancellation with multiple feedback in NOMA-enabled massive IoT network.
- Author
-
Bisen, Shubham, Bhatia, Vimal, and Brida, Peter
- Subjects
INTERNET of things ,COMPUTATIONAL complexity ,ALGORITHMS ,FAIRNESS ,SIGNS & symbols - Abstract
In this work, we propose a multiple-feedback-based successive interference cancellation (SIC) scheme for an ultra-dense Internet of Things (IoT) device network. Non-orthogonal multiple access (NOMA) enables massive connectivity with improved user fairness and spectral efficiency and is envisaged as a multiple access technique for IoT devices. NOMA simultaneously serves multiple users within a single resource block, leading to unbounded yet regulated multi-user interference. SIC is widely adopted in NOMA systems to detect users' symbols. Nevertheless, multi-user interference and error propagation across the SIC layers are inherent challenges in NOMA. Recent studies have aimed to minimize interference and error propagation by imposing stringent conditions on the number of users and on power allocation. Thus, this paper proposes novel multiple-feedback SIC algorithms for uplink multi-user NOMA scenarios that outperform conventional SIC. Further, the proposed algorithms' performance is analyzed under the practical case of imperfect channel state information at the receiver to validate their robustness. The computational complexity of multiple-feedback SIC is compared with that of conventional SIC. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
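Conventional SIC, the baseline the proposed multiple-feedback scheme improves on, can be sketched for a toy power-domain case. This is an illustrative assumption-laden model: a real-valued superposed signal, BPSK symbols, perfect channel knowledge and no noise; the paper's multiple-feedback refinement and imperfect-CSI analysis are not reproduced.

```python
import math

def sic_decode(y, powers):
    """Successive interference cancellation for a superposed signal
    y = sum_k sqrt(P_k) * s_k with BPSK symbols s_k in {-1, +1}:
    detect the strongest user first, reconstruct and subtract its
    contribution, then repeat on the residual for the next user."""
    residual = y
    decoded = []
    for p in sorted(powers, reverse=True):   # strongest user first
        s = 1 if residual >= 0 else -1       # hard BPSK decision
        decoded.append(s)
        residual -= math.sqrt(p) * s         # cancel the decoded user
    return decoded
```

For two users with powers 4 and 1 sending +1 and -1, the received value is 2 - 1 = 1; the sketch first decodes +1 for the strong user, subtracts 2, and recovers -1 from the residual. A wrong decision at any layer corrupts all later layers, which is exactly the error-propagation problem the multiple-feedback scheme targets.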
189. Three-phase balance relay using numerical techniques experimentally verified on synchronous machines.
- Author
-
Mahmoud, R. A. and Elwakil, E. S.
- Subjects
ELECTRIC potential measurement ,STATISTICAL correlation ,MACHINERY ,VOLTAGE ,ALGORITHMS - Abstract
In this paper, a multifunction three-phase balance relay based on normalized correlation coefficients is proposed to detect and estimate imbalances and perturbations in synchronous machine output signals. Furthermore, new definitions of imbalance and disturbance indicators derived from the correlation estimators are introduced, taking into account changes in the phase displacement, frequency, amplitude, and shape of the machine's three-phase waves. Experimental tests are performed on a motor-generator set connected to a three-phase load, which is used to identify and evaluate imbalance and disturbance conditions in the voltage and current measurements. Extensive tests for different fault types are presented. The practical results show that the proposed protection responds quickly to faults and assesses the imbalance/disturbance level online with high accuracy; its running time is within one cycle. In addition, the proposed algorithm's most significant attributes are its reliability and accuracy, both exceeding 96.6%. The present algorithm considers the impact of both negative- and zero-sequence components when measuring dissymmetry factors, while some conventional approaches rely merely on the negative-sequence component computed from the machine voltages and currents. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
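As a rough illustration of the normalized-correlation idea (the paper's exact indicator definitions are not in the abstract), the Pearson coefficient between two phase waveforms of a balanced set equals cos(120°) = −0.5, and a change in phase displacement moves it away from that value:

```python
import numpy as np

f = 50.0                                   # fundamental frequency (Hz)
t = np.linspace(0.0, 2.0 / f, 800, endpoint=False)  # two full cycles

def phase_wave(amplitude, shift_deg):
    return amplitude * np.sin(2 * np.pi * f * t - np.radians(shift_deg))

def ncc(x, y):
    """Normalized (Pearson) correlation coefficient between two waveforms."""
    x, y = x - x.mean(), y - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# Balanced three-phase set: equal amplitudes, 120-degree displacement,
# so every pairwise coefficient equals cos(120 deg) = -0.5.
va = phase_wave(1.0, 0.0)
vb = phase_wave(1.0, 120.0)
balanced = ncc(va, vb)

# Disturbance: a phase-displacement shift on phase b moves the coefficient
# to cos(135 deg), a simple indicator of the perturbation.
vb_fault = phase_wave(0.6, 135.0)
disturbed = ncc(va, vb_fault)
print(balanced, disturbed)
```

Note that the normalized coefficient is insensitive to a pure amplitude sag, which is presumably why the paper combines several indicators rather than relying on one coefficient.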
190. Ensemble Kalman inversion for image guided guide wire navigation in vascular systems.
- Author
-
Hanu, Matei, Hesser, Jürgen, Kanschat, Guido, Moviglia, Javier, Schillings, Claudia, and Stallkamp, Jan
- Subjects
PARAMETER estimation ,CARDIOVASCULAR system ,ALGORITHMS ,NAVIGATION - Abstract
This paper addresses the challenging task of guide wire navigation in cardiovascular interventions, focusing on the parameter estimation of a guide wire system using Ensemble Kalman Inversion (EKI) with a subsampling technique. The EKI uses an ensemble of particles to estimate the unknown quantities. However, since the data misfit has to be computed for each particle in each iteration, the EKI may become computationally infeasible for high-dimensional data, e.g. high-resolution images. This issue can be addressed by randomised algorithms that use only a random subset of the data in each iteration. We introduce and analyse a subsampling technique for the EKI, which is based on a continuous-time representation of stochastic gradient methods, and apply it to the parameter estimation of our guide wire system. Numerical experiments with real data from a simplified test setting demonstrate the potential of the method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
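A minimal sketch of an EKI iteration with data subsampling, assuming a linear toy forward model G(u) = Au (the paper's guide wire model is far more involved, and the dimensions and noise level below are made up), might be:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear toy inverse problem: recover u_true from y = A u + noise.
d, p, J = 50, 3, 40                       # data dim, parameter dim, ensemble size
A = rng.standard_normal((d, p))
u_true = np.array([1.0, -2.0, 0.5])
y = A @ u_true + 0.01 * rng.standard_normal(d)
gamma = 0.01 ** 2                         # observation noise variance

U = rng.standard_normal((J, p))           # initial ensemble of particles

for _ in range(50):
    # Subsampling: evaluate the misfit on a random subset of the data only,
    # so the per-iteration cost scales with the batch size instead of d.
    idx = rng.choice(d, size=10, replace=False)
    Ab, yb = A[idx], y[idx]
    G = U @ Ab.T                          # forward evaluations, shape (J, 10)
    dU, dG = U - U.mean(0), G - G.mean(0)
    Cug = dU.T @ dG / J                   # parameter-output covariance
    Cgg = dG.T @ dG / J                   # output covariance
    K = Cug @ np.linalg.inv(Cgg + gamma * np.eye(len(idx)))
    U = U + (yb - G) @ K.T                # ensemble Kalman update

err = float(np.linalg.norm(U.mean(0) - u_true))
print(err)
```

The ensemble mean contracts toward the least-squares solution while each iteration touches only 10 of the 50 data entries, which is the cost saving the subsampling analysis makes rigorous.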
191. Weak and strong convergence theorems for a new class of enriched strictly pseudononspreading mappings in Hilbert spaces.
- Author
-
Agwu, Imo Kalu, Işık, Hüseyin, and Igbokwe, Donatus Ikechi
- Subjects
HILBERT space ,BANACH spaces ,ALGORITHMS - Abstract
Let Ω be a nonempty closed convex subset of a real Hilbert space H. Let ℑ be a nonspreading mapping from Ω into itself. Define two sequences {ψ_n}_{n=1}^∞ and {ϕ_n}_{n=1}^∞ by ψ_{n+1} = π_n ψ_n + (1 − π_n) ℑψ_n and ϕ_n = (1/n) ∑_{t=1}^{n} ψ_t for n ∈ ℕ, where 0 ≤ π_n ≤ 1 and π_n → 0. In 2010, Kurokawa and Takahashi established weak and strong convergence theorems for the sequences generated by this Baillon-type iteration method (Nonlinear Anal. 73:1562–1568, 2010). In this paper, we prove weak and strong convergence theorems for a new class of (η, β)-enriched strictly pseudononspreading ((η, β)-ESPN) maps, more general than the class studied by Kurokawa and Takahashi in the setting of real Hilbert spaces. Further, by means of a robust auxiliary map incorporated in our theorems, the strong convergence of the sequence generated by a Halpern-type iterative algorithm is proved, thereby resolving in the affirmative the open problem raised by Kurokawa and Takahashi in their concluding remark for the case in which the map ℑ is averaged. Some nontrivial examples are given, and the results obtained extend, improve, and generalize several well-known results in the current literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
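The iteration itself is easy to simulate numerically. A small sketch with a stand-in nonexpansive map T(x) = x/2 + 1 on the real line (hypothetical, chosen only because its fixed point 2 is known in closed form) shows both ψ_n and the Cesàro averages ϕ_n approaching the fixed point:

```python
# Stand-in nonexpansive map T(x) = x/2 + 1 on the real line; its unique
# fixed point is x* = 2, so both sequences should approach 2.
def baillon_iterates(psi0, n_max):
    T = lambda x: 0.5 * x + 1.0
    psi, psis = psi0, []
    for n in range(1, n_max + 1):
        pi_n = 1.0 / (n + 1)              # 0 <= pi_n <= 1 and pi_n -> 0
        psi = pi_n * psi + (1.0 - pi_n) * T(psi)
        psis.append(psi)
    # phi_n is the Cesaro (running) average of psi_1, ..., psi_n.
    phis = [sum(psis[:n]) / n for n in range(1, n_max + 1)]
    return psis, phis

psis, phis = baillon_iterates(psi0=10.0, n_max=200)
print(psis[-1], phis[-1])
```

The averaged sequence ϕ_n converges much more slowly than ψ_n because the early iterates, far from the fixed point, stay in the running mean; the weak/strong convergence theorems concern exactly these averaged sequences.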
192. Sinc-Galerkin method and a higher-order method for a 1D and 2D time-fractional diffusion equations.
- Author
-
Luo, Man, Xu, Da, and Pan, Xianmin
- Subjects
GALERKIN methods ,FINITE differences ,CHARACTERISTIC functions ,ALGORITHMS - Abstract
In this article, a new numerical algorithm for solving 1-dimensional (1D) and 2-dimensional (2D) time-fractional diffusion equations is proposed. The Sinc-Galerkin scheme is used for spatial discretization, and a higher-order finite difference formula for temporal discretization. The convergence behavior of the methods is analyzed, and error bounds are provided. The main objective of this paper is to establish error bounds for 2D problems using the Sinc-Galerkin method. The convergence of the proposed method is studied in detail using the characteristics of the Sinc function, with optimal rates of exponential convergence for 2D problems. Some numerical experiments validate the theoretical results and demonstrate the efficiency of the proposed schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
193. Threshold-driven K-means sector clustering algorithm for wireless sensor networks.
- Author
-
Zeng, Bo, Li, Shanshan, and Gao, Xiaofeng
- Subjects
K-means clustering ,NETWORK performance ,ENERGY consumption ,WIRELESS sensor networks ,ALGORITHMS - Abstract
Clustering is an effective method for developing energy-efficient routing protocols for wireless sensor networks (WSNs). In clustered WSNs, cluster heads must handle high traffic and thus consume more energy. Therefore, forming balanced clusters and selecting optimal cluster heads are significant challenges. This paper proposes a sector clustering algorithm based on K-means, called KMSC. KMSC improves efficiency and balances cluster sizes by dividing the network into symmetric sectors in conjunction with K-means. For the selection of cluster heads (CHs), KMSC uses the residual energy and distance to calculate a weight for each node, then selects the node with the highest weight as CH. Hybrid single-hop and multi-hop communication is utilized to reduce long-distance transmissions. Furthermore, the impact of the number of sectors, the clustering threshold, and the network size on the performance of KMSC is explored. Simulation results show that KMSC outperforms EECPK-means, K-means, TSC, LSC, and SEECP in terms of FND, HND, and LND. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
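The abstract does not fully specify the weight rule; a hypothetical version of the CH-selection step, scoring nodes by normalized residual energy and distance with an assumed trade-off factor alpha (not a value from the paper), could be sketched as:

```python
import math

# Score each node by normalized residual energy (higher is better) and
# normalized distance to the base station (shorter is better); alpha is an
# assumed trade-off weight, not a value taken from the paper.
def select_cluster_head(nodes, base_station, alpha=0.7):
    """nodes: list of (x, y, residual_energy) tuples; returns CH index."""
    e_max = max(n[2] for n in nodes)
    d_max = max(math.dist(n[:2], base_station) for n in nodes)
    best_i, best_w = -1, -1.0
    for i, (x, y, e) in enumerate(nodes):
        d = math.dist((x, y), base_station)
        w = alpha * (e / e_max) + (1.0 - alpha) * (1.0 - d / d_max)
        if w > best_w:
            best_i, best_w = i, w
    return best_i

# A near, well-charged node beats a far, fully charged one here.
nodes = [(10, 10, 0.9), (12, 11, 0.5), (30, 40, 1.0)]
ch = select_cluster_head(nodes, base_station=(0, 0))
print(ch)
```
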
194. Iterative approximate Byzantine consensus in arbitrary directed graphs.
- Author
-
Tseng, Lewis, Liang, Guanfeng, and Vaidya, Nitin H.
- Subjects
GRAPH algorithms ,ALGORITHMS ,MATRICES (Mathematics) ,ARGUMENT ,DIRECTED graphs - Abstract
This paper identifies necessary and sufficient conditions for the existence of iterative algorithms that achieve approximate Byzantine consensus in arbitrary directed graphs, where each directed link represents a communication channel between a pair of nodes. The class of iterative algorithms considered in this paper ensures that, after each iteration of the algorithm, the state of each fault-free node remains in the convex hull of the states of the fault-free nodes at the end of the previous iteration. We present the necessary and sufficient condition for the existence of such iterative consensus algorithms in synchronous arbitrary point-to-point networks in the presence of Byzantine faults, in two different equivalent forms. We prove the necessity using an indistinguishability argument. For sufficiency, we develop a proof framework which first uses a series of "transition matrices" to model the state evolution of the fault-free nodes under our algorithm, and then proves correctness by identifying important properties of these matrices. The proof framework is useful for other iterative fault-tolerant algorithms. We discuss extensions to asynchronous systems and to the Byzantine link fault model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
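A standard concrete instance of such an iterative algorithm (not necessarily the paper's exact update rule) is the trimmed-mean update, which guarantees the convex-hull property the abstract describes: with at most f Byzantine neighbours, dropping the f largest and f smallest values leaves only convex combinations of fault-free states.

```python
# Each fault-free node discards the f largest and f smallest values it
# holds, then averages the rest; with at most f Byzantine neighbours the
# result is a convex combination of fault-free states only.
def trimmed_mean_update(own_value, received, f):
    values = sorted(received + [own_value])
    kept = values[f:len(values) - f]       # drop f extremes on each side
    return sum(kept) / len(kept)

# Four fault-free nodes, fully connected, plus one Byzantine node that
# reports a huge outlier to everyone.
states = [0.0, 1.0, 2.0, 3.0]
byzantine = 1000.0
f = 1
new_states = [
    trimmed_mean_update(s, [x for x in states if x != s] + [byzantine], f)
    for s in states
]
print(new_states)   # every new state stays inside the hull [0.0, 3.0]
```

The outlier 1000.0 is always trimmed, so it cannot drag any fault-free state outside the hull of the previous fault-free states.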
195. ADE: advanced differential evolution.
- Author
-
Abbasi, Behzad, Majidnezhad, Vahid, and Mirjalili, Seyedali
- Subjects
CHAOS theory ,ALGORITHMS ,METAHEURISTIC algorithms - Abstract
This paper proposes a metaheuristic algorithm, called advanced differential evolution (ADE), by improving the DE algorithm. The ADE algorithm was developed with the goal of creating an optimization framework that addresses the challenges of exploration and exploitation balance, avoiding local minima, utilizing chaos theory for diverse initialization, and improving solution quality and convergence speed. By incorporating these features, ADE aims to enhance the effectiveness of optimization processes. The proposed algorithm utilizes chaos theory to generate the initial population, which is subsequently divided into two sub-populations with adaptive sizes. The size of each sub-population is determined using a formula based on the number of iterations during the algorithm's execution. The first sub-population has a larger size in the beginning and the second one has a smaller size, but the total size of these two populations is always constant. The main contribution of this paper is the proposal of two novel improved differential evolution algorithms, namely MDE1 and MDE2, which are utilized for exploration within these sub-populations. The proposed ADE is tested on 29 well-known benchmarks and six engineering problems, and the results are compared with seven other algorithms. Various statistical experiments are carried out showing that the proposed algorithm provides significant superiority over other well-known algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
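One of the listed ingredients, chaotic initialization, is easy to illustrate. A hypothetical DE/rand/1/bin baseline seeded by a logistic map follows (ADE's adaptive two-sub-population machinery and the MDE1/MDE2 variants are omitted; all parameter values are illustrative):

```python
import random

random.seed(42)

# Logistic-map chaotic initialization: one of the ADE ingredients named in
# the abstract; the DE body below is the plain DE/rand/1/bin baseline, not
# ADE's adaptive two-sub-population scheme.
def chaotic_population(size, dim, lo, hi):
    pop, z = [], 0.7
    for _ in range(size):
        vec = []
        for _ in range(dim):
            z = 4.0 * z * (1.0 - z)        # logistic map, chaotic regime r=4
            vec.append(lo + z * (hi - lo))
        pop.append(vec)
    return pop

def de_minimize(f, dim, lo, hi, pop_size=20, F=0.5, CR=0.9, gens=100):
    pop = chaotic_population(pop_size, dim, lo, hi)
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jr = random.randrange(dim)     # guaranteed crossover position
            trial = [
                min(hi, max(lo, pop[a][j] + F * (pop[b][j] - pop[c][j])))
                if (random.random() < CR or j == jr) else pop[i][j]
                for j in range(dim)
            ]
            ft = f(trial)
            if ft <= fit[i]:               # greedy selection
                pop[i], fit[i] = trial, ft
    k = min(range(pop_size), key=lambda i: fit[i])
    return pop[k], fit[k]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = de_minimize(sphere, dim=5, lo=-5.0, hi=5.0)
print(f_best)
```

The chaotic sequence spreads the initial population over the search space without the clustering that a poor pseudo-random seed can produce, which is the diversity benefit the abstract attributes to chaos theory.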
196. Decoupling Anomaly Discrimination and Representation Learning: Self-supervised Learning for Anomaly Detection on Attributed Graph.
- Author
-
Hu, YanMing, Chen, Chuan, Deng, BoWen, Lai, YuJing, Lin, Hao, Zheng, ZiBin, and Bian, Jing
- Subjects
GRAPH neural networks ,ALGORITHMS - Abstract
Anomaly detection on attributed graphs is a crucial topic for practical applications. Existing methods suffer from semantic mixture and imbalance issue because they commonly optimize the model based on the loss function for anomaly discrimination, mainly focusing on anomaly discrimination and ignoring representation learning. Graph Neural networks based techniques usually tend to map adjacent nodes into close semantic space. However, anomalous nodes commonly connect with numerous normal nodes directly, conflicting with the assortativity assumption. Additionally, there are far fewer anomalous nodes than normal nodes, leading to the imbalance problem. To address these challenges, a unique algorithm, decoupled self-supervised learning for anomaly detection (DSLAD), is proposed in this paper. DSLAD is a self-supervised method with anomaly discrimination and representation learning decoupled for anomaly detection. DSLAD employs bilinear pooling and masked autoencoder as the anomaly discriminators. By decoupling anomaly discrimination and representation learning, a balanced feature space is constructed, in which nodes are more semantically discriminative, as well as imbalance issue can be resolved. Experiments conducted on various six benchmark datasets reveal the effectiveness of DSLAD. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
197. Rational option of optimum parameters of robust design of hybrid of PMOO with sequential uniform algorithm.
- Author
-
Zheng, Maosheng and Yu, Jie
- Subjects
PROBABILITY theory ,ALGORITHMS ,DESIGN - Abstract
In this paper, a reasonable rule for choosing optimum parameters in the sequential uniform design (algorithm) is developed by means of probabilistic multi-objective optimization (PMOO) in terms of total preferable probability, which aims at a rational choice among a series of provisional "optimum status" candidates in the subsequent deep optimization. The provisional "optimum statuses" produced at each step of the sequential uniform algorithm form a "special point set", for which the total preferable probability is evaluated; the final optimum status is the member of the "special point set" with the highest total preferable probability. Besides, under the condition of "target value being the best", both the discrepancy of the average response value Ȳ from its target value Y0, ε = |Ȳ − Y0|, and the averaged deviation γ of the actual response value Y from the target value Y0 are taken simultaneously as the dual individual response objectives for robust design. Two examples are given to illuminate the proposed procedure. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
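A schematic reading of "total preferable probability" multiplies per-objective partial preferable probabilities across objectives; the normalization and the minimize-objective transform below are assumptions for illustration, not the paper's exact formulas.

```python
# Partial preferable probability of each candidate for one objective is its
# (suitably oriented) utility normalized over all candidates; the total
# preferable probability is the product over objectives, and the candidate
# with the highest total wins.  The 'min' transform is an assumed convention.
def total_preferable_probability(candidates, senses):
    """candidates: list of objective tuples; senses: 'max'/'min' per objective."""
    n = len(candidates)
    totals = [1.0] * n
    for k, sense in enumerate(senses):
        vals = [c[k] for c in candidates]
        if sense == "min":                 # convert to 'larger is better'
            vals = [max(vals) + min(vals) - v for v in vals]
        s = sum(vals)
        for i in range(n):
            totals[i] *= vals[i] / s       # partial preferable probability
    return totals

# Two responses: maximize strength, minimize cost.
cands = [(500, 10.0), (450, 6.0), (520, 14.0)]
totals = total_preferable_probability(cands, senses=["max", "min"])
best = max(range(len(cands)), key=lambda i: totals[i])
print(best, totals)
```

Here the cheapest candidate wins despite its lower strength, because the product rewards candidates that are simultaneously acceptable on every objective.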
198. An improved cuckoo search algorithm for global optimization.
- Author
-
Tian, Yunsheng, Zhang, Dan, Zhang, Hongbo, Zhu, Juan, and Yue, Xiaofeng
- Subjects
SWARM intelligence ,SEARCH algorithms ,GLOBAL optimization ,GENETIC algorithms ,ALGORITHMS - Abstract
Cuckoo search (CS) is a classical swarm intelligence algorithm widely used in a variety of engineering optimization problems. However, its search accuracy and convergence speed still leave considerable room for improvement. In this paper, an improved version of the CS algorithm, called IIC-CS, is proposed based on an intelligent perception strategy, adaptive invasive weed optimization (AIWO), and an elite cross strategy. Firstly, the intelligent perception strategy updates parameter values according to the search state. Moreover, CS is hybridized with AIWO to improve the search performance of the algorithm. Additionally, the elite cross strategy is employed to enhance the exploration and exploitation capabilities of the algorithm. Combining these three improvements significantly boosts the performance of the CS algorithm. The search accuracy and convergence rate of IIC-CS are tested on 23 classical benchmark functions and several CEC2014 and CEC2018 benchmark functions. Furthermore, classical and state-of-the-art algorithms, including the genetic algorithm (GA), particle swarm optimization (PSO), the bat algorithm (BA), the ant lion optimizer (ALO), the cuckoo search (CS) algorithm, invasive weed optimization (IWO), the integrated cuckoo search optimizer (ICSO), and improved island cuckoo search (iCSPM2), are used for comparison. The statistical results show that IIC-CS achieves better results than the other algorithms on most benchmark functions, demonstrating the effectiveness of the improvements and the superiority of the IIC-CS algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
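For context, a simplified baseline cuckoo search, with Mantegna-style Lévy steps around the current best nest and abandonment of the worst pa-fraction of nests (a simplified variant; all parameter values are illustrative, not the paper's), can be sketched as:

```python
import math
import random

random.seed(3)

def levy_step(beta=1.5):
    """Mantegna's algorithm for a heavy-tailed Levy-flight step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, lo, hi, n=15, pa=0.25, gens=200, alpha=0.05):
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in nests]
    for _ in range(gens):
        best = nests[min(range(n), key=lambda i: fit[i])]
        for i in range(n):
            # New cuckoo egg: a Levy flight around the current best nest.
            cand = [min(hi, max(lo, xb + alpha * levy_step())) for xb in best]
            fc = f(cand)
            if fc < fit[i]:                # greedy replacement of nest i
                nests[i], fit[i] = cand, fc
        # Abandon the worst pa-fraction of nests; rebuild them at random.
        worst = sorted(range(n), key=lambda i: fit[i], reverse=True)[:int(pa * n)]
        for i in worst:
            nests[i] = [random.uniform(lo, hi) for _ in range(dim)]
            fit[i] = f(nests[i])
    k = min(range(n), key=lambda i: fit[i])
    return nests[k], fit[k]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = cuckoo_search(sphere, dim=3, lo=-5.0, hi=5.0)
print(f_best)
```

The heavy-tailed Lévy steps mix small local moves with occasional long jumps, which is the exploration/exploitation balance the IIC-CS modifications aim to improve further.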
199. Efficient fog node placement using nature-inspired metaheuristic for IoT applications.
- Author
-
Naouri, Abdenacer, Nouri, Nabil Abdelkader, Khelloufi, Amar, Sada, Abdelkarim Ben, Ning, Huansheng, and Dhelim, Sahraoui
- Subjects
INTELLIGENT networks ,NETWORK performance ,QUALITY of service ,COMPUTATIONAL complexity ,ALGORITHMS - Abstract
Managing the explosion of data from the edge to the cloud requires intelligent supervision, such as fog node deployment, which is an essential task in assessing network operability. To ensure network operability, the deployment process must be carried out effectively with regard to two main factors: connectivity and coverage. Network connectivity is determined by the fog node deployment, which fixes the network's physical topology, while coverage determines the network's accessibility. Both have a significant impact on network performance and on guaranteeing the network's quality of service. Determining an optimal fog node deployment that minimizes cost, reduces computation and communication overhead, and provides a high degree of connected coverage is extremely hard; maximizing coverage while preserving network connectivity is a non-trivial problem. In this paper, we propose a fog deployment algorithm that can effectively connect the fog nodes and cover all edge devices. Firstly, we formulate fog deployment as an instance of a multi-objective optimization problem with a large search space. Then, we leverage the Marine Predator Algorithm (MPA) to tackle the deployment problem and show that MPA is well-suited for fog node deployment due to its rapid convergence and low computational complexity compared to other population-based algorithms. Finally, we evaluate the proposed algorithm on a benchmark of generated instances with various fog scenario configurations. Our algorithm outperforms state-of-the-art methods, providing promising results for optimal fog node deployment. It demonstrates a 50% performance improvement compared to other algorithms, in line with the No Free Lunch (NFL) Theorem's assertion that no algorithm has a universal advantage across all problem domains; this underscores the significance of selecting tailored algorithms based on specific problem characteristics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
200. Artificial intelligence assisted IoT-fog based framework for emergency fire response in smart buildings.
- Author
-
Saini, Munish, Sengupta, Eshan, and Thakur, Suraaj
- Subjects
FIRE management ,ARTIFICIAL intelligence ,EMERGENCY management ,INTERNET of things ,FLOOR plans ,ALGORITHMS - Abstract
Anthropogenic hazards are an unrelenting threat to lives and property, with human irresponsibility emerging as a leading source of urban and industrial fires. The complexity of urban structures and crowded layouts makes these fires more lethal. This paper presents an Artificial Intelligence (AI) based framework designed for smart buildings as a solution to the devastating obstacles caused by fire crises. Our system creates a 3D model of the building using floor plans and uses the A* algorithm for escape route identification. The proposed framework includes a YOLO-based smart monitoring system for identifying and counting people caught in a fire, with the ability to distinguish between conscious and unconscious persons. The proposed system informs inhabitants in the case of a fire and directs them to the closest exit for a safe evacuation. Moreover, fire and rescue officials receive real-time information on affected persons, such as the number and location of adults and children who are conscious or unconscious. Perhaps most significantly, the suggested framework performs exceptionally well, scoring 96% precision and 98% recall in the detection of fire and humans. These findings highlight the effectiveness of the model in locating people within fire-affected infrastructure. The framework considerably outperforms the most advanced algorithms in speed and efficiency of shortest-path detection, greatly improving the ability of fire rescue teams to quickly find and aid residents trapped in a fire. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
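The escape-route step names the A* algorithm explicitly. A minimal grid-based sketch (the floor plan below is a made-up example, with '#' marking walls or fire, 'S' the start, and 'E' the exit) is:

```python
import heapq

# A* over a grid floor plan with unit-cost moves and an admissible
# Manhattan-distance heuristic, so the returned route is shortest.
def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, [start])]
    seen = set()
    while open_heap:
        f, g, pos, path = heapq.heappop(open_heap)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#' \
                    and (nr, nc) not in seen:
                heapq.heappush(open_heap, (g + 1 + h((nr, nc)), g + 1,
                                           (nr, nc), path + [(nr, nc)]))
    return None                            # no escape route exists

floor = ["S..#.",
         ".#.#.",
         ".#...",
         "...#E"]
path = a_star([list(row) for row in floor], start=(0, 0), goal=(3, 4))
print(len(path) - 1)                       # number of moves to the exit
```

Blocking additional cells as the fire spreads and re-running the search yields updated evacuation routes, which is the kind of real-time guidance the framework provides.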