42,413 results
Search Results
102. Multi-edge collaborative offloading and energy threshold-based task migration in mobile edge computing environment.
- Author
- Li, Chunlin, Cai, Qianqian, and Luo, Youlong
- Subjects
- MOBILE computing, EDGE computing, ENERGY consumption, ALGORITHMS, QUEUING theory, THRESHOLD energy, GENETIC algorithms
- Abstract
Computation offloading and service migration are two major research hotspots in the mobile edge computing (MEC) environment. However, in the existing MEC architecture, the idle computing resources of offsite edge servers are not fully utilized, which leads to high overall system time and energy costs. In this paper, we propose a multi-edge collaborative computation offloading strategy for this problem. The strategy analyzes and calculates the energy consumption and latency cost of task execution for local terminals, edge servers, and the central cloud; constructs a computation offloading model with the weighted sum of latency and energy consumption as the optimization objective; and then solves the model using an improved genetic algorithm to obtain the best computation offloading decision. On the other hand, the mobility of users in the MEC environment leads to service migration, which in turn causes unbalanced load on the edge servers and network congestion. This paper therefore proposes an energy threshold-based task migration strategy. The strategy analyzes the time and energy consumption of service execution and data transmission, designs an edge server selection algorithm based on the energy consumption threshold, constructs a service migration model, and finally obtains the optimal service migration strategy using the improved genetic algorithm. Experimental results show that the proposed multi-edge collaborative computation offloading strategy significantly improves data transfer cost, energy consumption, and task completion time. The proposed energy consumption threshold-based migration strategy significantly improves mobile server energy consumption, service completion time, and data transfer energy consumption. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
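The weighted-sum objective described in entry 102 can be sketched with an illustrative genetic search. The per-tier latency and energy figures, the weight w, and all GA settings below are invented placeholders, not values from the paper:

```python
import random

# decision[i] in {0, 1, 2}: 0 = local terminal, 1 = edge server, 2 = central cloud.
# These per-task costs are illustrative; the paper derives them from its models.
LATENCY = {0: 0.9, 1: 0.4, 2: 0.7}   # seconds per task at each tier
ENERGY  = {0: 0.5, 1: 0.3, 2: 0.6}   # joules per task at each tier

def weighted_cost(decision, w=0.5):
    """Weighted sum of latency and energy: the optimization objective."""
    latency = sum(LATENCY[d] for d in decision)
    energy = sum(ENERGY[d] for d in decision)
    return w * latency + (1 - w) * energy

def genetic_search(n_tasks=8, pop=30, gens=50, w=0.5):
    """Minimal GA: tournament selection, one-point crossover, mutation."""
    population = [[random.choice((0, 1, 2)) for _ in range(n_tasks)]
                  for _ in range(pop)]
    fitness = lambda d: weighted_cost(d, w)
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            parent = min(random.sample(population, 2), key=fitness)
            mate = min(random.sample(population, 2), key=fitness)
            cut = random.randrange(1, n_tasks)
            child = parent[:cut] + mate[cut:]            # one-point crossover
            if random.random() < 0.1:                    # mutate one gene
                child[random.randrange(n_tasks)] = random.choice((0, 1, 2))
            nxt.append(child)
        population = nxt
    return min(population, key=fitness)
```

With these placeholder costs the edge tier dominates, so the search drifts toward offloading most tasks to the edge; the paper's improved GA additionally tunes its operators, which this sketch omits.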
103. Preface: Special issue on "Understanding of evolutionary optimization behavior", Part 1.
- Author
- Blum, Christian, Eftimov, Tome, and Korošec, Peter
- Subjects
- BEES algorithm, SUBMODULAR functions, ARTIFICIAL intelligence, EVOLUTIONARY computation, ALGORITHMS, PROBLEM solving
- Abstract
Understanding the behavior of optimization algorithms is vital for quality progress in the field of stochastic optimization algorithms. To overcome this deficiency, we need to establish new standards for understanding optimization algorithm behavior, which will provide insight into the working principles behind stochastic optimization algorithms. In their paper "Evolutionary algorithms and submodular functions: benefits of heavy-tailed mutations", Quinzan et al. develop suitable Evolutionary Algorithms (EAs) to tackle submodular optimization problems. The paper "Improving convergence in swarm algorithms by controlling range of random movement" by Chaudhary and Banati studies the applicability of the IS technique over different swarm algorithms employing different random distributions. [Extracted from the article]
- Published
- 2021
- Full Text
- View/download PDF
104. Algorithms of motion planning for a six-legged walking machine.
- Author
- Budanov, V.
- Subjects
- PAPER, EXPERIMENTS, ALGORITHMS, MOTION, MACHINERY, LEG, LABORATORIES
- Abstract
This paper investigates algorithms of motion planning for a six-legged walking machine over complex terrain. Application of the Rodrigues-Hamilton parameters for describing the orientation of the body enables one to develop algorithms of motion planning for the body and legs in an absolute coordinate frame with automatic adaptation to the surface. Experiments with a laboratory-scale walking machine have demonstrated the efficiency of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
105. An improved artificial bee colony algorithm based on whale optimization algorithm for data clustering.
- Author
- Rahnema, Nouria and Gharehchopogh, Farhad Soleimanian
- Subjects
- BEES algorithm, MATHEMATICAL optimization, ALGORITHMS, K-means clustering, WHALES, STATISTICS
- Abstract
Data clustering is one of the branches of unsupervised learning; it is a process whereby samples are divided into categories whose members are similar to each other. The K-means algorithm is a simple and fast clustering technique, but it has several initialization problems; for example, it depends heavily on the initial values for good clustering. Moreover, it is susceptible to outliers and unbalanced clusters. The artificial bee colony (ABC) algorithm is one of the meta-heuristic algorithms used nowadays to solve many optimization problems, including clustering, and its fundamental problems are weak exploration and late convergence. In this paper, to address exploration and late convergence in ABC, Random Memory (RM) and Elite Memory (EM) are introduced, yielding the ABCWOA algorithm. RM in the ABCWOA algorithm adopts the prey-search stage of the whale optimization algorithm (WOA), and EM is used to accelerate convergence. In addition, the use of EM is controlled dynamically. Finally, the proposed method was evaluated on ten standard datasets from the UCI Machine Learning Repository. It was compared, in terms of statistical criteria and the analysis of variance (ANOVA) test, with basic ABC and WOA, the vortex search (VS) algorithm, the butterfly optimization algorithm (BOA), the crow search (CS) algorithm, and the cuckoo search algorithm (CSA). The simulation results showed that the proposed method maintained its degree of convergence as the number of iterations increased, whereas the ABC algorithm performed poorly as iterations increased. ANOVA results also confirmed that the ABCWOA algorithm has a positive effect on the population and contains less noise than the other comparative algorithms. Overall, the results show that ABCWOA performs better than the other meta-heuristic algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
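Entry 105's criticism of K-means, namely its heavy dependence on initial values, is easy to reproduce. The 1-D toy below is our own illustration and is unrelated to the paper's datasets: two different initializations of plain K-means settle on different centers for the same points.

```python
def kmeans(points, centers, iters=20):
    """Plain 1-D K-means (Lloyd's algorithm): alternate assignment of points
    to their nearest center with recomputation of each center as the mean
    of its cluster."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # empty clusters keep their old center
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

Starting from centers (5, 20) the algorithm separates the three pairs sensibly, while starting from (0, 1) it converges to a worse local optimum, which is exactly the weakness ABCWOA is built to mitigate.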
106. Fault diagnosis model of rolling bearing based on parameter adaptive AVMD algorithm.
- Author
- Li, Meixuan, Yan, Chun, Liu, Wei, Liu, Xinhong, Zhang, Mengchao, and Xue, Jiankai
- Subjects
- ROLLER bearings, FAULT diagnosis, ALGORITHMS, SEARCH algorithms, FEATURE extraction, HILBERT-Huang transform, INDEX numbers (Economics)
- Abstract
Because the weak features of early rolling bearing faults are not easy to extract, a parameter-adaptive Variational Modal Decomposition (AVMD) algorithm is proposed for bearing fault signal feature extraction. Since the number of Variational Modal Decomposition (VMD) modes and the penalty factor play an important role in the VMD decomposition effect, the irregularities in the selection of these two parameters are analyzed. We exploit the stronger global search capability of an improved sparrow search algorithm (LSSA) for adaptive parameter selection in the VMD algorithm. In this paper, Levy flight is introduced and chaos is added to initialize the sparrow population positions, to prevent the sparrows from becoming trapped in local optima during the search. In addition, this paper combines the maximum kurtosis index, the minimum envelope entropy index, and the number of VMD iterations to form the objective function of the LSSA. The VMD algorithm with optimized parameters decomposes the signal to be measured, the decomposed IMFs are reconstructed, and the validity of the model is verified by calculating 20 time-domain and frequency-domain features of the reconstructed signal as the input vector of an SVM classifier. Finally, the feasibility of this model for fault diagnosis of rolling bearings is verified using simulations and example experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
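The LSSA objective in entry 106 combines maximum kurtosis with minimum envelope entropy. A rough sketch of such a fitness is given below; as an assumption on our part, the entropy of the normalized magnitude sequence stands in for the Hilbert-envelope entropy, and the ratio form of the composite objective is our own simplification:

```python
import math

def kurtosis(x):
    """Sample kurtosis: large values indicate impulsive, fault-like content."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    return sum((v - mu) ** 4 for v in x) / (n * var ** 2)

def envelope_entropy(x):
    """Shannon entropy of the normalized magnitude sequence: a simplified
    stand-in for the Hilbert-envelope entropy used in VMD work. Low entropy
    means a sparse, concentrated envelope."""
    mags = [abs(v) for v in x]
    total = sum(mags) or 1.0
    p = [m / total for m in mags]
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def lssa_objective(imf):
    """Hypothetical composite fitness to be minimized by the sparrow search:
    reward high kurtosis and low envelope entropy."""
    return envelope_entropy(imf) / max(kurtosis(imf), 1e-12)
```

An impulsive component (a few sharp spikes, typical of a bearing defect) scores a lower objective than a flat oscillation, which is the behavior the parameter search exploits when ranking candidate VMD settings.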
107. Carrier phase recovery of LDPC-coded systems based on the likelihood difference algorithm.
- Author
- Imad, Rodrigue and Houcke, Sebastien
- Subjects
- PARITY-check matrix, COST functions, TWO-dimensional bar codes, ALGORITHMS, EXPECTATION-maximization algorithms, SIGNAL-to-noise ratio
- Abstract
The problem of blind phase offset recovery in low-density parity-check (LDPC)-coded systems is considered in this paper. We propose a new phase offset estimation algorithm that involves the computation and maximization of a likelihood difference (LD)-based cost function calculated from the parity-check matrix of the code. We show that the new cost function has a simplified form compared to another algorithm proposed in the literature while offering similar estimation performance. Mean squared error (MSE) curves show very good performance of the proposed phase offset estimation algorithm, even at low signal-to-noise ratios. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
108. Risk decision analysis of commercial insurance based on neural network algorithm.
- Author
- Wang, Shanshan and Zhao, Zhenwang
- Subjects
- BUSINESS insurance, DECISION making, RISK assessment, ACTUARIAL risk, ALGORITHMS
- Abstract
To improve commercial insurance risk decision-making, this paper applies neural network algorithms to commercial insurance risk decisions under the guidance of machine learning ideas, selecting the neural network algorithm according to the actual situation. Moreover, this paper analyzes the nature of commercial insurance risks, examines the types of risks and their relevance, constructs a commercial insurance risk decision model based on neural network algorithms, and specifies the system process. In addition, this paper uses a combination of qualitative and quantitative methods to identify the influencing factors of risk estimation, and verifies the proposed model through experimental research. The experimental results show that the commercial insurance risk decision system based on the neural network algorithm performs very well in terms of decision-making effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
109. Algorithm for Transmission Parameters Selection for Sporadic URLLC Traffic in Uplink.
- Author
- Shashin, A. E., Belogaev, A. A., Krasilov, A. N., and Khorov, E. M.
- Subjects
- MODULATION coding, ALGORITHMS, 5G networks
- Abstract
Ultra-Reliable Low-Latency Communications (URLLC) is a key service for fifth generation (5G) cellular systems. Typical requirements for this service are transmission reliability above 99.999% and latency below 1 ms. The paper considers a scenario with sporadic URLLC traffic in the uplink. To satisfy the strict latency requirements, user equipments (UEs) use the grant-free channel access method. According to this method, the base station allocates time–frequency resources and selects transmission parameters (i.e., the modulation and coding scheme, number of transmission attempts) in advance for each UE. To provide high resource utilization in the case of sporadic traffic, the base station allocates shared channel resources to several UEs, which can lead to interference between transmissions of different UEs. The paper proposes an algorithm for selection of transmission parameters for each UE that takes into account the channel conditions of each considered UE and the interference caused by transmissions of other UEs. Numerical results obtained with NS-3 show that the proposed algorithm increases the network capacity up to six times with respect to the algorithms presented in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
110. 2015 JETTA-TTTC Best Paper Award.
- Subjects
- PERIODICAL articles, ALGORITHMS, AWARDS
- Abstract
The article announces that the paper "Low Cost Sparse Multiband Signal Characterization Using Asynchronous Multi-Rate Sampling: Algorithms and Hardware," which appeared in the periodical "Journal of Electronic Testing: Theory and Applications," has received the Best Paper Award, and provides an abstract of the paper.
- Published
- 2016
- Full Text
- View/download PDF
111. A detection method for the ridge beast based on improved YOLOv3 algorithm.
- Author
- Hou, Miaole, Hao, Wuchen, Dong, Youqiang, and Ji, Yuhang
- Subjects
- BUILDING repair, DATA mining, ALGORITHMS, FEATURE extraction, HERITAGE tourism, DECEPTION
- Abstract
The ridge beast is a figure placed on the roof ridges of ancient Chinese buildings; it not only has a decorative function but also carries a strict hierarchical meaning, and the number and form of ridge beasts placed on buildings of different ranks are strictly limited. Detection technology for ridge beast decorative parts has important application value in the fine 3D reconstruction of ancient buildings, historical dating, and cultural and tourism services. Aiming at the poor detection performance of traditional detection algorithms caused by the high texture similarity and poor discriminability of ridge beasts, this paper proposes an improved YOLOv3-based detection algorithm for ridge beast decorative pieces. In the basic network, local features are aggregated through a summation layer embedded inside the depthwise separable convolution, and pointwise convolution connects the channel information of the original and aggregated features, expanding the receptive field and learning more diverse features. The residual structure of the feature extraction network is built from these convolutions, improving the extraction of fine-grained ridge beast features and thereby the detection accuracy. In the prediction head, the original linear structure is reconstructed and squeeze-and-excitation modules are introduced to model the channel relationships of the multi-scale feature maps, suppressing interference responses and making the features more directional. Parallel 1 × 1 and 3 × 3 convolutions form a multi-size convolution structure, which enhances the semantic information extraction ability of the model and further improves the detection effect.
Experiments were conducted on the constructed ridge beast dataset, and the results showed that the mAP of the improved algorithm reaches 86.48%, which is 3.05% higher than YOLOv3, while the model parameters are reduced by 70%; the model thus has better detection performance and can serve as a reference for the automated detection of ancient building components. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
112. The potential of point-of-care diagnostics to optimise prehospital trauma triage: a systematic review of literature.
- Author
- Stojek, Leonard, Bieler, Dan, Neubert, Anne, Ahnert, Tobias, and Imach, Sebastian
- Subjects
- ONLINE information services, MEDICAL triage, MEDICAL information storage & retrieval systems, POINT-of-care testing, RESEARCH funding, DESCRIPTIVE statistics, WOUNDS & injuries, DATA analysis software, MEDLINE, ADVANCED trauma life support, EMERGENCY medicine, ALGORITHMS
- Abstract
Purpose: In the prehospital care of potentially seriously injured patients, resource allocation adapted to injury severity (triage) is challenging. Insufficiently specified triage algorithms lead to the unnecessary activation of a trauma team (over-triage), resulting in ineffective consumption of economic and human resources. A prehospital trauma triage algorithm must reliably identify a patient who is bleeding or suffering from significant brain injuries. By supplementing the prehospital triage algorithm with point-of-care (POC) tools established in hospital, the sensitivity of prehospital triage is potentially increased. Possible POC tools are lactate measurement; sonography of the thorax, the abdomen, and the vena cava; sonographic intracranial pressure measurement; and capnometry in the spontaneously breathing patient. The aim of this review was to assess the potential of selected instrument-based POC tools, to determine their diagnostic cut-off values, and to integrate these findings into a modified ABCDE-based triage algorithm. Methods: A systematic search on MEDLINE via PubMed, LIVIVO, and Embase was performed for patients in an acute setting on the topic of prehospital use of the selected POC tools to identify critical cranial and peripheral bleeding and to recognize cerebral trauma sequelae. For the determination of the final cut-off values, the selected papers were assessed with the Newcastle-Ottawa scale to determine the risk of bias and against various quality criteria, and subsequently classified as suitable or unsuitable. PROSPERO Registration: CRD 42022339193. Results: 267 papers were identified as potentially relevant and processed in full-text form. 61 papers were selected for the final evaluation, of which 13 papers were decisive for determining the cut-off values. The findings illustrate that prehospital use of point-of-care diagnostics is possible.
These adjuncts can provide additional information about the expected long-term clinical course of patients. Clinical outcomes such as mortality, need for emergency surgery, and intensive care unit stay were taken into account, and a hypothetical cut-off value for trauma team activation could be determined for each adjunct. The cut-off values are as follows: end-expiratory CO2: < 30 mmHg; sonography of thorax and abdomen: abnormality detected; lactate measurement: > 2 mmol/L; optic nerve diameter in sonography: > 4.7 mm. Discussion: A preliminary version of a modified triage algorithm with hypothetical cut-off values for trauma team activation was created. However, further studies should be conducted to optimize the final cut-off values. Furthermore, studies need to evaluate the practical application of the modified algorithm in terms of feasibility (e.g. duration of application, technique) and the effects of the new algorithm on over-triage. Limiting factors are the restrictions of the search and the heterogeneity between the studies (e.g. varying measurement devices and techniques). [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
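The hypothetical cut-off values from entry 112 map naturally onto a simple decision rule. The sketch below assumes, as our own reading of the review, that any single positive adjunct should trigger trauma team activation; the parameter names are ours, and a real algorithm would embed these checks in the ABCDE sequence:

```python
def trauma_team_activation(etco2_mmHg=None, sono_abnormal=None,
                           lactate_mmol_l=None, optic_nerve_mm=None):
    """Apply the review's hypothetical POC cut-offs. None means the
    measurement was unavailable; returns the list of positive triggers."""
    triggers = []
    if etco2_mmHg is not None and etco2_mmHg < 30:
        triggers.append("end-expiratory CO2 < 30 mmHg")
    if sono_abnormal:
        triggers.append("abnormal thoracic/abdominal sonography")
    if lactate_mmol_l is not None and lactate_mmol_l > 2:
        triggers.append("lactate > 2 mmol/L")
    if optic_nerve_mm is not None and optic_nerve_mm > 4.7:
        triggers.append("optic nerve sheath diameter > 4.7 mm")
    return triggers  # non-empty list -> activate the trauma team
```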
113. The object migration automata: its field, scope, applications, and future research challenges.
- Author
- Oommen, B. John, Omslandseter, Rebekka Olsson, and Jiao, Lei
- Subjects
- ARTIFICIAL intelligence, NP-hard problems, ROBOTS, ALGORITHMS, MACHINE theory, PARTITIONS (Mathematics)
- Abstract
Partitioning, in and of itself, is an NP-hard problem. Prior to Artificial Intelligence (AI)-based solutions, it was solved in the 1970s by optimization-based strategies. AI-based solutions appeared in the 1980s in a pioneering way, using a Learning Automaton (LA)-motivated strategy known as the Object Migrating Automaton (OMA). Although the OMA and its derivatives have been used in numerous applications since then, the basic kernel has remained the same. Because the number of possible partitions in a partitioning problem can be combinatorially exponential and the underlying tasks are NP-hard, the most advanced OMA algorithms could, until recently, only solve problems involving equally sized groups. Due to our recent innovations cited in the body of this paper, the enhanced OMA now also handles non-equally sized groups. Earlier, we had presented in Omslandseter (Pattern Anal Appl, 2023) a comprehensive survey of the state-of-the-art enhancements of the best-known OMA. We believe that these results will be the benchmark for a few decades and that it will be very hard to beat them. This is a companion paper, intended to augment the contents of Omslandseter (Pattern Anal Appl, 2023). In this paper, we first discuss the OMA's prior applications, its historical and current innovations, and the relevance of OMA-based algorithms to societal needs. We also provide well-specified guidelines that future researchers can use for unresolved tasks and further advancements. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
114. Feature selection algorithm for usability engineering: a nature inspired approach.
- Author
- Jain, Rajat, Joseph, Tania, Saxena, Anvita, Gupta, Deepak, Khanna, Ashish, Sagar, Kalpna, and Ahlawat, Anil K.
- Subjects
- FEATURE selection, OPTIMIZATION algorithms, METAHEURISTIC algorithms, COMPUTER software quality control, ALGORITHMS, COMPUTER software development
- Abstract
Software usability is usually discussed with reference to hierarchical software usability models and is an important aspect of user experience and software quality. Thus, the evaluation of software usability is an essential parameter for managing and regulating software. However, it has been difficult to establish a precise evaluation method for this problem. A large number of usability factors have been suggested by many researchers, each covering a different set of factors to increase the degree of user-friendliness of software. Therefore, the selection of the correct determining features is of paramount importance. This paper proposes an innovative metaheuristic algorithm for the selection of the most important features in a hierarchical software model. A hierarchy-based usability model is an exhaustive interpretation of the factors, attributes, and characteristics of a software system at different levels. This paper proposes a modified version of the grey wolf optimization algorithm (GWO), termed the modified grey wolf optimization (MGWO) algorithm. The mechanism of this algorithm is based on the hunting behavior of wolves in nature. The algorithm chooses a number of features, which are then applied to software development life cycle models to find the best among them. The outcome is also compared with the conventional grey wolf optimization algorithm (GWO), the modified binary bat algorithm (MBBAT), the modified whale optimization algorithm (MWOA), and the modified moth flame optimization (MMFO). The results show that MGWO surpasses all the other relevant optimizers in terms of accuracy and selects fewer attributes: 8, compared with 9 for MMFO, 12 for MBBAT, and 19 for MWOA. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
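Entry 114 does not spell out MGWO's modification, but the baseline grey wolf optimizer it builds on is standard: each candidate moves toward the three best wolves (alpha, beta, delta) with a step size that shrinks linearly over the run. A minimal continuous-domain sketch follows; the simple elitism line is our own addition, and the binarization needed for feature selection is omitted:

```python
import random

def gwo_minimize(f, dim, bounds, wolves=20, iters=100):
    """Canonical grey wolf optimizer (continuous form)."""
    lo, hi = bounds
    pack = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=f)
        alpha, beta, delta = pack[0], pack[1], pack[2]   # three leaders
        a = 2 - 2 * t / iters                            # decreases 2 -> 0
        for i in range(wolves):
            new = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = 2 * a * random.random() - a      # exploration coefficient
                    C = 2 * random.random()
                    D = abs(C * leader[d] - pack[i][d])  # distance to leader
                    x += leader[d] - A * D               # step toward leader
                new.append(min(hi, max(lo, x / 3)))      # average, clamp to bounds
            pack[i] = new
        pack[0] = min(pack + [alpha], key=f)             # keep best-so-far (elitism)
    return min(pack, key=f)
```

On a smooth test function such as the sphere function the pack contracts onto the minimum; MGWO would alter the update rule and add a binary mapping over the usability attributes, neither of which the abstract specifies.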
115. Fast calibration stitching algorithm for underwater camera.
- Author
- Wang, Zhanhua, Tang, Zhijie, Huang, Jingke, and Li, Jianda
- Subjects
- UNDERWATER cameras, ALGORITHMS, IMAGE registration, COORDINATES
- Abstract
The underwater environment is complex and changeable, and the larger the field of view of underwater images collected by an ROV, the more information they contain. Effective methods to obtain a large field of view include fish-eye lenses and image stitching. To obtain even larger field information, we combine the two and propose a stitching algorithm that can be applied to fish-eye lenses. The algorithm has two parts. The first part corrects the fish-eye images: improving on the traditional chessboard correction method, this paper puts forward a new adaptive gray-level method that retains more corner features and extracts checkerboard corners more accurately, which further improves the correction result. The second part stitches the corrected images in real time: this paper proposes a fast stitching algorithm (FASTITCH) that, during stitching, preserves the image feature points and the transposed matrix of the image matching, so that the new coordinates of the original feature points in the stitched image can be calculated. Using these coordinates to match the feature points of another image saves the time of finding feature points in the stitched image, speeding up stitching and enabling real-time operation. The experiments show that the error obtained by the new correction method is smaller and that, compared with traditional feature point stitching algorithms, the proposed fast stitching algorithm (FASTITCH) shortens the stitching time by about 20%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
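The FASTITCH speed-up in entry 115 rests on reusing the transform computed during matching to relocate known feature points instead of re-detecting them. A sketch of that coordinate transfer is below, with an illustrative 3x3 homography rather than one from the paper:

```python
def apply_homography(H, points):
    """Map feature points through a cached 3x3 homography so they need not be
    re-detected in the stitched image. H is a row-major 3x3 matrix; points are
    (x, y) tuples in the source image."""
    out = []
    for x, y in points:
        xh = H[0][0] * x + H[0][1] * y + H[0][2]
        yh = H[1][0] * x + H[1][1] * y + H[1][2]
        w  = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append((xh / w, yh / w))   # homogeneous -> Cartesian
    return out
```

Caching H alongside the matched feature points amortizes the expensive detection step across frames, which is plausibly where the reported ~20% time saving comes from.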
116. SEFSD: an effective deployment algorithm for fog computing systems.
- Author
- Chen, Huan, Chang, Wei-Yan, Chiu, Tai-Lin, Chiang, Ming-Chao, and Tsai, Chun-Wei
- Subjects
- METAHEURISTIC algorithms, COMPUTER systems, ALGORITHMS, DATA transmission systems, QUALITY of service, PROBLEM solving
- Abstract
Fog computing aims to mitigate data communication delay by deploying fog nodes that provide servers in the proximity of users and offload resource-hungry tasks that would otherwise be sent to distant cloud servers. In this paper, we propose an effective fog device deployment algorithm based on a new metaheuristic algorithm, search economics, to solve the optimization problem for the deployment of fog computing systems. The term "effective" in this paper means that the developed algorithm achieves better performance in terms of metrics such as lower latency and less resource usage. Compared with conventional metaheuristic algorithms, the proposed algorithm is unique in that it first divides the solution space into a set of regions to increase the diversity of the search and then allocates different computational resources to each region according to its potential. To verify the effectiveness of the proposed algorithm, we compare it with several classical fog computing deployment algorithms. The simulation results indicate that the proposed algorithm provides lower network latency and higher quality of service than the other deployment algorithms evaluated in this study. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
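Entry 116's key idea, partitioning the solution space into regions and budgeting evaluations by each region's potential, can be caricatured in one dimension. The "potential" formula and all constants below are our own simplifications, not the paper's definitions:

```python
import random

def search_economics(f, lo, hi, regions=4, budget=400):
    """Toy region-based search: split [lo, hi] into regions, probe each one,
    then spend the evaluation budget in proportion to how promising each
    region's best probe looks (lower f -> larger share)."""
    width = (hi - lo) / regions
    bounds = [(lo + i * width, lo + (i + 1) * width) for i in range(regions)]
    best = None
    potential = []
    for a, b in bounds:
        probes = [random.uniform(a, b) for _ in range(5)]
        local = min(probes, key=f)
        potential.append(1.0 / (1.0 + f(local)))   # assumes f(x) >= 0
        if best is None or f(local) < f(best):
            best = local
    total = sum(potential)
    for (a, b), p in zip(bounds, potential):
        for _ in range(int(budget * p / total)):   # budget share for this region
            x = random.uniform(a, b)
            if f(x) < f(best):
                best = x
    return best
```

Regions whose probes look poor still receive a small share, which preserves the search diversity the abstract emphasizes while concentrating effort where the objective looks best.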
117. Research on Cascading Fault Location of Chemical Material Networks Based on BFS-Time-Reversal Backpropagation Algorithm.
- Author
- Wang, Zheng, Li, Huapeng, Liu, Ruijie, Hou, Jingmin, Dong, Ran, Hu, Yiyi, Jia, Xiaoping, and Wang, Fang
- Subjects
- FAULT location (Engineering), ELECTRIC fault location, MAXIMUM likelihood statistics, ALGORITHMS, CHEMICAL models
- Abstract
It is of great theoretical value and practical significance to study the location of cascading failures in chemical material networks, which are large and complex. Previous studies mainly located faults by obtaining the state of all nodes, by maximum likelihood estimation, or by distance centrality. These methods need to determine the status of all nodes in the network, or perform operations on every node related to the observation node, so the localization process is complicated and costly, and the results are not accurate enough. This paper proposes a positioning method based on a breadth-first search (BFS) and time-reversal backpropagation algorithm to locate cascading faults in chemical material networks. Firstly, the cascading fault propagation model of the chemical material network is constructed according to complex network theory, and observation nodes are chosen by an observation node selection strategy. Secondly, BFS is used to search out the fault area; assumptions are then made about the time delay of the propagation process, and the time-reversal backpropagation algorithm filters the nodes in the fault area to localize the cascading fault. Finally, the observation node selection strategy is varied, the positioning accuracy under each strategy is analyzed, and the optimal selection strategy is adopted for positioning with the proposed method. Case analysis shows that this method can effectively locate cascading faults in chemical material networks. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
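Entry 117's first localization step, using BFS to delimit a candidate fault area around an observation node, is straightforward to sketch. The hop-limit criterion below is an illustrative stand-in for the paper's actual stopping rule:

```python
from collections import deque

def fault_region(graph, observer, max_hops):
    """Breadth-first search outward from an observation node, collecting all
    nodes within max_hops edges as the candidate fault area. graph maps each
    node to a list of its neighbors."""
    seen = {observer: 0}          # node -> hop distance from the observer
    queue = deque([observer])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue              # frontier reached; do not expand further
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                queue.append(nbr)
    return set(seen)
```

The time-reversal backpropagation stage would then score only the nodes in this set, which is what keeps the method cheaper than whole-network approaches.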
118. An image encryption algorithm based on pixel bit operation and nonlinear chaotic system.
- Author
- Wang, Xingyuan and Chen, Shengnan
- Subjects
- IMAGE encryption, NONLINEAR systems, ALGORITHMS, PIXELS, CHAOS theory
- Abstract
This paper proposes a new one-dimensional chaotic map, the nonlinear coupled Sine-Tent-Logistic chaotic map (1DNCSTL). A series of tests shows that the map is random, sensitive to initial values, and suitable for image encryption. Based on this map, the article further proposes a pixel bit position scrambling and reorganization operation and a dynamic non-unique diffusion operation. Unlike the traditional scrambling operation, which only changes the positions of pixel values, the scrambling and reorganization operation changes the pixel position and the pixel value at the same time. The dynamic non-unique diffusion operation differs from the traditional single-formula diffusion operation in that the diffusion formula is not unique, which strengthens the security of the algorithm. Simulation results and various security performance analyses show that the proposed algorithm performs well. Compared with other encryption schemes, this algorithm is more suitable for image encryption. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
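Entry 118's abstract does not give the 1DNCSTL formula, so the sketch below substitutes a well-known logistic-sine composite to illustrate the two properties such papers test for: determinism under a fixed key and extreme sensitivity to the initial value.

```python
import math

def logistic_sine(x, r=3.99):
    """One step of a logistic-sine composite map (a stand-in for 1DNCSTL,
    whose formula the abstract does not give). Output stays in [0, 1)."""
    return (r * x * (1 - x) + (4 - r) * math.sin(math.pi * x) / 4) % 1.0

def keystream(seed, n, burn_in=100):
    """Iterate the map, discard transients, quantize each state to a byte."""
    x = seed
    for _ in range(burn_in):
        x = logistic_sine(x)
    out = []
    for _ in range(n):
        x = logistic_sine(x)
        out.append(int(x * 256) % 256)
    return out
```

A perturbation of 1e-7 in the seed yields a completely different byte stream after the burn-in, which is the key-sensitivity property an image cipher relies on.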
119. A separable and dual data hiding algorithm based on adaptive ternary segmentation coding and ZN-shape space-filling curve.
- Author
- Hui, Shi, Sai, Ma, Jian, Zhao, Zhiyu, Zhang, and Dan, Huang
- Subjects
- REVERSIBLE data hiding (Computer science), MULTIMEDIA systems, SINGULAR value decomposition, ALGORITHMS, COPYRIGHT, CURVES, WAVELET transforms, PUBLIC key cryptography
- Abstract
Considering applications requiring high fidelity of multimedia content, and for the purpose of copyright protection, a separable and dual data hiding algorithm is proposed. In this paper, the definitions of Ternary Segmentation (TS), Hidden Pixel Pair (HPP), the ZN-shape space-filling curve, and Lucas-Arnold Scrambling (LAS) are presented for the first time. Without degrading the quality of the host image, this paper combines reversible data hiding and zero data hiding to resolve the contradiction between robustness and imperceptibility. Firstly, 1-level reversible data hiding is performed using TS, HPP, and Huffman compression. Then, 2-level zero data hiding is applied based on LAS, IWT (Integer Wavelet Transform), BN-SVD (Boost Normed Singular Value Decomposition), and the ZN-shape space-filling curve. Finally, separable decryption and extraction are conducted. The experimental results prove that the proposed scheme achieves better image perceptual quality, with an average PSNR of about 49 dB, and stronger robustness, with an average NCC of about 0.99 under various attacks. Moreover, it achieves higher security: the average values of information entropy, correlation coefficient, NPCR, and UACI are 7.9, 0.01, 99.6854%, and 33.4378%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
120. The unified image encryption algorithm based on composite chaotic system.
- Author
- Zheng, Jiming and Zeng, Qingxia
- Subjects
- IMAGE encryption, ALGORITHMS, IMAGING systems
- Abstract
This paper proposes a fast, unified encryption and decryption algorithm based on a composite chaotic system. By combining the Logistic map and the Sine map, the New-Logistic-Sine map (NLS map) is obtained. The NLS map generates the diffusion key matrix needed in the algorithm, which enhances the anti-attack ability of the encryption algorithm. Unlike most image cryptography systems, the algorithm adopted in this paper has identical encryption and decryption processes, which can save half of the resources in real applications. Firstly, the Secure Hash Algorithm 256 (SHA256) value of the original image is obtained, and the initial values and control parameters of the NLS map and Logistic map are calculated. Secondly, the diffusion key matrix is obtained by iterating the NLS map and is used to perform the first diffusion of the original image. Thirdly, the permutation key sequence is obtained by iterating the Logistic map, and the sequence is used to permute the image after the first diffusion. Finally, the same diffusion key matrix as in the first diffusion is used to carry out the second diffusion on the permuted image to obtain the final encrypted image. The simulation experiments and security analysis show that the proposed image cryptosystem possesses identical encryption and decryption processes and that the algorithm's speed is improved while its security is ensured. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
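The abstract's scheme hinges on two ingredients that are easy to sketch: seeding a chaotic map from the SHA-256 of the plaintext image, and iterating a combined Logistic–Sine map to build key material. The paper's exact NLS formula is not given in the abstract, so the closed form below (a common Logistic–Sine combination) and the seeding rule are assumptions for illustration only.

```python
import hashlib

import numpy as np

def nls_map(x, r):
    # Hypothetical NLS iteration: a common way to combine the Logistic map
    # r*x*(1-x) with the Sine map sin(pi*x); the paper's exact formula is
    # not given in the abstract.
    return (r * x * (1.0 - x) + (4.0 - r) / 4.0 * np.sin(np.pi * x)) % 1.0

def initial_value_from_image(img_bytes):
    # Derive a chaotic initial value in [0, 1) from the SHA-256 of the
    # image, giving the plaintext sensitivity the abstract describes.
    digest = hashlib.sha256(img_bytes).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

x = initial_value_from_image(b"example image bytes")
keystream = []
for _ in range(256):          # iterate the map to build key material
    x = nls_map(x, r=3.99)
    keystream.append(x)
```

One way the encryption/decryption unification the abstract claims is typically achieved is through involutive operations such as XOR-based diffusion, though the abstract does not spell out the mechanism.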
121. A clustering-optimized segmentation algorithm and application on food quality detection.
- Author
-
Wu, QingE, Li, Penglei, Chen, Zhiwu, and Zong, Tao
- Subjects
FOOD quality ,IMAGE segmentation ,STUFFED foods ,CONVEYOR belts ,ALGORITHMS ,BELT conveyors ,DUMPLINGS ,FROZEN foods - Abstract
To solve the problem of quality detection in the production and processing of stuffed food, this paper proposes a small neighborhood clustering algorithm to segment images of frozen dumplings on a conveyor belt, which can effectively improve the qualified rate of food quality. The method builds feature vectors from the image's attribute parameters. The image is segmented using a small neighborhood clustering algorithm that computes cluster centers from the sample feature vectors and a between-category distance function. Moreover, this paper addresses the selection of optimal segmentation points and sampling rate: it calculates the optimal sampling rate, proposes a search method for it, and defines a validity judgment function for the segmentation. The optimized small neighborhood clustering (OSNC) algorithm uses fast-frozen dumpling images as samples for continuous image target segmentation experiments. The experimental results show that the defect detection accuracy of the OSNC algorithm is 95.9%. Compared with other existing segmentation algorithms, the OSNC algorithm has stronger anti-interference ability, faster segmentation speed, and preserves key information more effectively, mitigating several disadvantages of other segmentation algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
122. Enhancing constrained application protocol using message options for internet of things.
- Author
-
Bansal, Sharu and Kumar, Dilip
- Subjects
INTERNET of things ,TELECOMMUNICATION systems ,NETWORK performance ,TIMESTAMPS ,ALGORITHMS - Abstract
The request-response model for constrained devices and networks is realized via the RESTful architecture of the Constrained Application Protocol (CoAP) in the Internet of Things (IoT). Message latency is significant in constrained networks. These latencies can be managed by introducing a mechanism for updating the client/server and network status in CoAP, which would help optimize network communication. This paper proposes a mechanism to report latency from client/server nodes by amending the existing CoAP messaging model. Several options are proposed for the CoAP message, namely a latency-state indicator, IN/OUT timestamps, and a priority, which together implement the proposed model. These options additionally help improve network and client/server performance, since the number of void messages in the network is reduced. Simulation of the implemented algorithm has shown significant improvement in terms of the network's latency, message priority, and node status. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
123. Automatic detection method for tobacco beetles combining multi-scale global residual feature pyramid network and dual-path deformable attention.
- Author
-
Chen, Yuling, Li, Xiaoxia, Lv, Nianzu, He, Zhenxiang, and Wu, Bin
- Subjects
BEETLES ,LOCALIZATION (Mathematics) ,TOBACCO ,PIXELS ,PESTS ,NOISE ,ALGORITHMS - Abstract
Aiming at the problem of identifying the storage pest tobacco beetle in images that have few object pixels and considerable noise, and therefore lack information and identifiable features, this paper proposes an automatic tobacco beetle monitoring method based on a Multi-scale Global residual Feature Pyramid Network and Dual-path Deformable Attention (MGrFPN-DDrGAM). Firstly, a Multi-scale Global residual Feature Pyramid Network (MGrFPN) is constructed to obtain rich high-level semantic features and more complete low-level feature information, reducing missed detections. Then, a Dual-path Deformable receptive field Guided Attention Module (DDrGAM) is designed to establish long-range channel dependence, guide the effective fusion of features, and improve the localization accuracy of tobacco beetles by fitting their spatial geometric deformation features and capturing the spatial information of feature maps at different scales, enriching the feature information in the channel and spatial dimensions. Finally, to simulate real scenes, a multi-scene tobacco beetle dataset is created, comprising 28,080 images with manually labeled tobacco beetle objects. The experimental results show that, under the Faster R-CNN framework, the detection precision and recall of this method reach 91.4% and 98.4% at an intersection-over-union (IoU) of 0.5. At an IoU of 0.7, the detection precision is improved by 32.9% and 6.9% over Faster R-CNN and FPN, respectively. The proposed method is superior to current mainstream methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
124. Application of the QDST algorithm for the Schrödinger particle simulation in the infinite potential well.
- Author
-
Ostrowski, Marcin
- Subjects
POTENTIAL well ,ALGORITHMS ,QUANTUM computers ,SCHRODINGER equation - Abstract
This paper examines whether a quantum computer can efficiently simulate the time evolution of the Schrödinger particle in a one-dimensional infinite potential well. In order to solve the Schrödinger equation in the quantum register, an algorithm based on the Quantum Discrete Sine Transform (QDST) is applied. The paper compares the results obtained in this way with the results given by the previous method (based on the QFT algorithm). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
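The evolution the abstract describes can be reproduced classically: expand the wave function in the well's sine eigenbasis (a discrete sine transform), advance each mode by its exact phase, and transform back. The sketch below is this classical analogue, not the quantum QDST circuit; hbar = m = L = 1 is an assumed normalization.

```python
import numpy as np

N = 64                                  # interior grid points
x = np.arange(1, N + 1) / (N + 1)       # grid on (0, 1), walls excluded
n = np.arange(1, N + 1)                 # mode numbers

# Orthonormal DST-I matrix: rows are the well's sine eigenfunctions.
S = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * np.outer(n, x))
E = (n * np.pi) ** 2 / 2.0              # eigenenergies n^2 * pi^2 / 2

def evolve(psi0, t):
    # Sine-transform, apply each mode's exact phase, transform back.
    c = S @ psi0
    return S.T @ (c * np.exp(-1j * E * t))

psi0 = np.sqrt(2.0) * np.sin(np.pi * x)          # ground state
psi_t = evolve(psi0.astype(complex), t=0.5)      # norm-preserving evolution
```

Because the DST-I matrix is orthogonal and the phase factors are unimodular, the evolution is exactly unitary, which is the property a quantum register implementation exploits.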
125. Offline and online task allocation algorithms for multiple UAVs in wireless sensor networks.
- Author
-
Ye, Liang, Yang, Yu, Meng, Weixiao, Wu, Xuanli, Li, Xiaoshuai, and Zhu, Rangang
- Subjects
WIRELESS sensor networks ,ONLINE algorithms ,ALGORITHMS ,DISASTER relief - Abstract
In recent years, UAV techniques have been developing very fast, and UAVs are becoming more and more popular in both civilian and military fields. An important application of UAVs is rescue and disaster relief. In post-earthquake evaluation scenes that are difficult or dangerous for humans to reach, UAVs and sensors can form a wireless sensor network and collect environmental information. In such application scenarios, task allocation algorithms are important for UAVs to collect data efficiently. This paper first proposes an improved immune multi-agent algorithm for the offline task allocation stage. The proposed algorithm provides higher accuracy and better convergence by improving the optimization operation. The paper then proposes an improved adaptive discrete cuckoo algorithm for the online task reallocation stage. By introducing adaptive step-size transformation and an appropriate local optimization operator, convergence is accelerated, making the algorithm suitable for real-time online task reallocation. Simulation results demonstrate the effectiveness of the proposed task allocation algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
126. Adaptive guaranteed lower eigenvalue bounds with optimal convergence rates.
- Author
-
Carstensen, Carsten and Puttkammer, Sophie
- Subjects
GENERALIZATION ,AXIOMS ,A priori ,ARGUMENT ,ALGORITHMS - Abstract
Guaranteed lower Dirichlet eigenvalue bounds (GLB) can be computed for the m-th Laplace operator with a recently introduced extra-stabilized nonconforming Crouzeix–Raviart (m = 1) or Morley (m = 2) finite element eigensolver. Striking numerical evidence for the superiority of a new adaptive eigensolver motivates the convergence analysis in this paper, with a proof of optimal convergence rates of the GLB towards a simple eigenvalue. The proof is based on (a generalization of) known abstract arguments referred to as the axioms of adaptivity. Beyond the known a priori convergence rates, a medius analysis is unfolded in this paper for the proof of best-approximation results. This, together with subordinated L2 error estimates for locally refined triangulations, appears of independent interest. The analysis of optimal convergence rates of an adaptive mesh-refining algorithm is performed in 3D and highlights a new version of discrete reliability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
127. Single-tag and multi-tag RFID data cleaning approach in edge computing.
- Author
-
Li, Chunlin, Jiang, Kun, Li, Xinyong, Zhang, Libin, and Luo, Youlong
- Subjects
EDGE computing ,RADIO frequency identification systems ,STATISTICAL smoothing ,MARKOV processes ,SMOOTHING (Numerical analysis) ,DATA scrubbing ,JUDGMENT (Psychology) ,TAGS (Metadata) ,ALGORITHMS - Abstract
Because the performance of Radio Frequency Identification (RFID) devices is susceptible to the influence of the surrounding environment, the original data collected by RFID devices are uncertain; missed readings and redundant readings are the main sources of this uncertainty, which seriously affects the quality of upper-layer RFID applications. Therefore, to resolve this uncertainty, the original data must be cleaned. To address the shortcomings of tag-dynamics detection in the traditional data cleaning algorithm Statistical Smoothing for Unreliable RFID data (SMURF), an adaptive sliding-window data cleaning algorithm for RFID single tags is proposed. The method takes into account the influence of tag speed on tag integrity judgment, and divides the window into sliding sub-windows to accurately detect tag changes and then reasonably adjust the sliding window size. In addition, considering that the average read rate of tags is affected by collisions between multiple tags, an RFID multi-tag cleaning method based on twice-tag-number estimation is proposed. The method accurately estimates the number of tags by the twice-tag estimation method, controls the read period with a Markov chain, and reduces multi-tag collisions using an unequal-time-slot optimization method. Experimental results show that the proposed methods form a complete set of RFID data stream cleaning algorithms that effectively reduce uncertain data and improve data accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
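The adaptive sliding-window idea behind SMURF-style cleaning can be illustrated for a single tag: smooth the per-epoch read/miss stream over a window, and resize the window based on the observed read rate. The paper's sub-window change detection and tag-speed handling are omitted, and the thresholds below are arbitrary illustrative choices.

```python
def clean_tag_stream(readings, init_w=3, min_w=1, max_w=8):
    # Much-simplified SMURF-style smoother for one tag. `readings` is a
    # per-epoch 0/1 sequence (tag seen or missed); returns the cleaned
    # presence stream.
    w = init_w
    history = []
    cleaned = []
    for r in readings:
        history.append(r)
        recent = history[-w:]
        rate = sum(recent) / len(recent)     # observed read rate in window
        cleaned.append(1 if rate > 0 else 0) # present if seen in the window
        if rate > 0.5:
            w = min(max_w, w + 1)  # reliable reads: widen to bridge dropouts
        elif rate == 0:
            w = max(min_w, w - 1)  # long silence: shrink to confirm departure
    return cleaned

print(clean_tag_stream([1, 1, 0, 1, 1] + [0] * 12))
```

Brief dropouts inside the window are smoothed over, while a sustained run of misses shrinks the window and is eventually reported as absence.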
128. Improved adaptive-phase fuzzy high utility pattern mining algorithm based on tree-list structure for intelligent decision systems.
- Author
-
Chen, Jing, Liu, Aijun, Zhang, Hongjun, Yang, Shengyi, Zheng, Hui, Zhou, Ning, and Li, Peng
- Subjects
ARTIFICIAL intelligence ,SMART structures ,ALGORITHMS ,DATA mining ,BIG data - Abstract
With the rapid development of AI and big data mining technologies, computerized medical decision-making has become increasingly prominent. The aim of high-utility pattern mining (HUPM) is to discover meaningful patterns in medical databases that maximize utility from the perspective of diagnosis. However, HUPM pays little attention to the interpretability and explainability of these patterns in medical decision-making scenarios. This paper proposes a novel algorithm, Improved fuzzy high-utility pattern mining (IF-HUPM), to address this problem. First, a fuzzy preprocessing method is applied to divide a medical quantitative dataset into fuzzy intervals, which enhances the fuzziness and interpretability of the data. Next, in the IF-HUPM process, both fuzzy tree and list structures are employed to calculate fuzzy high-utility values. By combining the characteristics of one-stage and two-stage HUPM algorithms, an adaptive-phase fuzzy HUPM hybrid framework is proposed. The experimental results demonstrate that the proposed IF-HUPM algorithm enhances both accuracy and efficiency, and the mining process requires less time and space on average. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
129. A complex structure-preserving algorithm for computing the singular value decomposition of a quaternion matrix and its applications.
- Author
-
Zhang, Dong, Jiang, Tongsong, Jiang, Chuan, and Wang, Gang
- Subjects
MATRIX decomposition ,APPLIED sciences ,SINGULAR value decomposition ,ALGORITHMS ,QUATERNION functions ,NUMERICAL calculations ,QUATERNIONS - Abstract
Singular value decomposition plays a prominent role in the theoretical study and numerical calculation of a quaternion matrix in applied sciences. This paper, by means of a complex representation of a quaternion matrix, studies the algorithm for the singular value decomposition of a quaternion matrix, and derives a complex structure-preserving algorithm for the singular value decomposition of a quaternion matrix. This paper also gives two examples to demonstrate the effectiveness of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
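The complex structure-preserving approach rests on a standard fact: a quaternion matrix A = A1 + A2·j maps to a complex block matrix whose singular values are those of A, each appearing twice. A minimal NumPy sketch of that representation (not the paper's full algorithm, which also recovers the singular vectors):

```python
import numpy as np

def complex_rep(A1, A2):
    # chi(A) for the quaternion matrix A = A1 + A2*j (A1, A2 complex):
    # a 2m x 2n complex block matrix whose singular values are those of A,
    # each repeated twice.
    return np.block([[A1, A2],
                     [-A2.conj(), A1.conj()]])

rng = np.random.default_rng(0)
m, n = 4, 3
A1 = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
A2 = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Singular values come out sorted, in exactly matching pairs.
s = np.linalg.svd(complex_rep(A1, A2), compute_uv=False)
quat_singular_values = s[::2]   # one representative from each pair
```

Working on chi(A) keeps all arithmetic complex, which is what makes the algorithm "structure-preserving": no quaternion arithmetic library is needed.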
130. Task recommendation based on user preferences and user-task matching in mobile crowdsensing.
- Author
-
Li, Xiaolin, Zhang, Lichen, Zhou, Meng, and Bian, Kexin
- Subjects
CROWDSENSING ,ALGORITHMS - Abstract
In mobile crowdsensing, the sensing platform recruits users to complete large-scale sensing tasks cooperatively. To guarantee the quality of sensing tasks, the platform needs to recommend suitable tasks to users. Existing task recommendation methods typically focus on unilateral factors, such as user preferences or task quality, leading to low platform utility or a low task acceptance rate, respectively. To solve this issue, this paper proposes a task recommendation method that takes both user preferences and user-task matching into consideration. Firstly, we apply the Deep Interest Network (DIN) in the context of mobile crowdsensing to recommend tasks according to user preferences. Secondly, the concept of user-task matching is introduced, in which both task difficulty and user reliability are taken into account. Finally, we propose task recommendation algorithms and conduct extensive experiments on a real dataset. The experimental results show that the proposed method not only improves the utility of the platform significantly, but also slightly improves the recommendation accuracy for longer recommendation lists. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
131. Discovering frequent parallel episodes in complex event sequences by counting distinct occurrences.
- Author
-
Ouarem, Oualid, Nouioua, Farid, and Fournier-Viger, Philippe
- Subjects
COUNTING ,ALGORITHMS ,DECISION making ,DEFINITIONS - Abstract
Event sequences are a common type of data. Several episode mining algorithms have been developed to find episodes (subsequences of events) that appear frequently in an event sequence, with the aim of discovering useful knowledge for decision-making and predictions. However, most of these algorithms can only process simple event sequences (where at most one event occurs at each timestamp). In contrast, in many real-life applications, multiple events may occur at the same timestamp, resulting in complex event sequences. Moreover, numerous episode mining algorithms overestimate the frequency of episodes by counting the same events multiple times. As a solution, some algorithms have been designed to count only non-overlapping occurrences; yet it can be argued that this definition is too strict and discards many important occurrences. To address these limitations, this paper presents an algorithm named EMDO (Episode Mining under Distinct Occurrences) to find frequent episodes in a complex sequence by counting distinct occurrences. The proposed concept of distinct occurrences ensures that no event is counted more than once while still allowing distinct occurrences to overlap. A second algorithm, called EMDO-P, is also presented to derive strong episode rules in event sequences from the episodes found by EMDO. To the best of our knowledge, this is the first study on mining frequent episodes using a frequency definition based on distinct occurrences. The experimental results confirm that the proposed algorithms are efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
132. Quantum Algorithm for Searching of Two Sets Intersection.
- Author
-
Khadiev, K. and Krendeleva, E.
- Subjects
QUANTUM computing ,ALGORITHMS ,COMPUTATIONAL complexity - Abstract
In this paper, we investigate the Two Sets Intersection problem. Assume we have two sets that are subsets of n objects. The sets are represented by two predicates that indicate which of the n objects belong to each set. We present a quantum algorithm that finds an element of the intersection of the two sets. It is a modification of the well-known Grover's search algorithm that uses two Oracles with access to the predicates. The algorithm is faster than a naive application of Grover's search. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
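The idea of searching an intersection with Grover iterations can be seen in a small state-vector simulation, where the oracle phase-flips exactly the elements satisfying both predicates. The toy predicates below are assumptions for illustration; the paper's two-Oracle construction and its speedup over naive Grover are not reproduced here.

```python
import numpy as np

N = 64
p1 = lambda x: x % 3 == 0        # toy predicates standing in for the two sets
p2 = lambda x: x % 5 == 0
marked = np.array([p1(x) and p2(x) for x in range(N)])
M = int(marked.sum())            # size of the intersection

amp = np.full(N, 1.0 / np.sqrt(N))           # uniform superposition
iters = int(np.pi / 4 * np.sqrt(N / M))      # standard Grover iteration count
for _ in range(iters):
    amp[marked] *= -1.0           # oracle: phase-flip intersection elements
    amp = 2.0 * amp.mean() - amp  # diffusion: inversion about the mean

prob_hit = float((amp[marked] ** 2).sum())   # chance of measuring a hit
```

After the standard number of iterations, almost all probability mass sits on the intersection elements, so a single measurement returns one of them with high probability.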
133. MapReduce-based distributed tensor clustering algorithm.
- Author
-
Zhang, Hongjun, Li, Peng, Meng, Fanshuo, Fan, Weibei, and Xue, Zhuangzhuang
- Subjects
K-means clustering ,CLUSTER analysis (Statistics) ,COMPUTER science ,ALGORITHMS ,PARTITION functions ,PARALLEL processing ,PARALLEL programming - Abstract
Cluster analysis is one of the most fundamental methods in data mining, and it has been widely used in economics, the social sciences, and computer science. However, with the rapid development of Internet technology, the volume of data handled by web applications has grown rapidly, posing technical challenges to traditional clustering methods. How to obtain useful information from large amounts of data quickly and efficiently is an urgent problem in many industrial fields. With the continuous development of cloud computing technology, large amounts of data can be processed quickly and efficiently. Hadoop is an open-source distributed cloud computing platform with HDFS (Hadoop Distributed File System) and MapReduce at its core. HDFS provides massive data storage, while MapReduce achieves parallel processing through the MapReduce programming model. Compared with traditional parallel programming models, it provides basic functions such as data partitioning, task scheduling, and parallel processing, making it possible for users to develop distributed applications without understanding the fundamentals of distributed systems, thus facilitating the design of parallel programs. The K-means algorithm is a typical clustering method widely used in industry, but its number of iterations increases significantly as the data volume grows, reducing computational efficiency. To better support cluster analysis of large-scale data, this paper implements a MapReduce-based parallelization of K-means on the Hadoop platform and improves the algorithm to address the blindness of random initial cluster-center selection and the tendency to fall into local optima. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
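The MapReduce formulation of K-means assigns each point to its nearest center in the map phase and reduces per-cluster sums into new centers. A single-machine sketch of that map/reduce split (Hadoop job wiring, combiners, and the paper's improved initialization are omitted; the data below are synthetic):

```python
from collections import defaultdict

import numpy as np

def map_phase(points, centers):
    # Mapper: emit (nearest-center-index, (point, 1)) for each point.
    for p in points:
        idx = int(np.argmin(np.linalg.norm(centers - p, axis=1)))
        yield idx, (p, 1)

def reduce_phase(pairs, k, dim):
    # Reducer: sum points and counts per cluster, output new centers.
    sums = defaultdict(lambda: (np.zeros(dim), 0))
    for idx, (p, c) in pairs:
        s, cnt = sums[idx]
        sums[idx] = (s + p, cnt + c)
    return np.array([sums[i][0] / max(sums[i][1], 1) for i in range(k)])

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.5, (50, 2)),   # cluster around (0, 0)
                 rng.normal(5, 0.5, (50, 2))])  # cluster around (5, 5)
centers = np.array([[1.0, 1.0], [4.0, 4.0]])
for _ in range(5):                  # driver loop: one MapReduce job per pass
    centers = reduce_phase(map_phase(pts, centers), k=2, dim=2)
```

The key scalability point is that each mapper only needs the current centers, so the point set can be partitioned freely across nodes, and the reducer's per-cluster sums are associative.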
134. Product backlog optimization technique in agile software development using clustering algorithm.
- Author
-
Sharma, Sarika and Kumar, Deepak
- Subjects
AGILE software development ,MATHEMATICAL optimization ,REQUIREMENTS engineering ,ALGORITHMS - Abstract
Context: Recent research has highlighted that multiple stakeholders are involved in requirement gathering in agile software development, leading to an increased number of duplicate user stories in the agile product backlog. Objective: The objective of this paper is to evaluate existing techniques for identifying and eliminating duplicate user stories from the agile product backlog and to close the remaining gaps with a newly proposed clustering algorithm. Method: An agile user story is expressed as a function of input and output parameters; multiple user stories with a similar set of input parameters are likely duplicates causing redundancy. The proposed algorithm clusters user stories having similar sets of input parameters over several iterations and then removes the identified duplicates from the agile product backlog. This paper also introduces the concept of mass clustering, i.e., clustering many user stories in a single run. Results: Experimental results show that the proposed model handles small and large releases of 100 to 1000 user stories with similar efficiency. The proposed clustering algorithm outperformed existing clustering algorithms and reduced the agile product backlog by 37% by eliminating duplicate user stories. The experimental results were obtained from the logs of the MATLAB tool; however, the algorithm is generic and can be implemented using R, Python, or SAS, and it employs proven matrix operations. Conclusion: The proposed clustering algorithm overcomes the limitations of existing user story management methods and clearly outperforms other clustering algorithms. Finally, this paper gives recommendations on using the proposed clustering algorithm during agile release planning to eliminate duplicate user stories from the agile product backlog. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
135. Upward Book Embeddability of st-Graphs: Complexity and Algorithms.
- Author
-
Binucci, Carla, Da Lozzo, Giordano, Di Giacomo, Emilio, Didimo, Walter, Mchedlidze, Tamara, and Patrignani, Maurizio
- Subjects
DIRECTED acyclic graphs ,GRAPH theory ,DIRECTED graphs ,ALGORITHMS ,NP-complete problems - Abstract
A k-page upward book embedding (kUBE) of a directed acyclic graph G is a book embedding of G on k pages with the additional requirement that the vertices appear in a topological ordering along the spine of the book. The kUBE Testing problem, which asks whether a graph admits a kUBE, was introduced in 1999 by Heath, Pemmaraju, and Trenk (SIAM J Comput 28(4), 1999). In a companion paper, Heath and Pemmaraju (SIAM J Comput 28(5), 1999) proved that the problem is linear-time solvable for k = 1 and NP-complete for k = 6. Closing this gap has been a central question in algorithmic graph theory since then. In this paper, we make a major contribution towards a definitive answer to the above question by showing that kUBE Testing is NP-complete for k ≥ 3, even for st-graphs, i.e., acyclic directed graphs with a single source and a single sink. Indeed, our result, together with a recent work of Bekos et al. (Theor Comput Sci 946, 2023) that proves the NP-completeness of 2UBE for planar st-graphs, closes the question about the complexity of the kUBE problem for any k. Motivated by this hardness result, we then focus on 2UBE Testing for planar st-graphs. On the algorithmic side, we present an O(f(β) · n + n³)-time algorithm for 2UBE Testing, where β is the branchwidth of the input graph and f is a singly-exponential function of β. Since the treewidth and the branchwidth of a graph are within a constant factor of each other, this result immediately yields an FPT algorithm for st-graphs of bounded treewidth. Furthermore, we describe an O(n)-time algorithm to test whether a plane st-graph whose faces have a special structure admits a 2UBE that additionally preserves the plane embedding of the input st-graph. On the combinatorial side, we present two notable families of plane st-graphs that always admit an embedding-preserving 2UBE. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
136. Multifactorial evolutionary algorithm with adaptive transfer strategy based on decision tree.
- Author
-
Li, Wei, Gao, Xinyu, and Wang, Lei
- Subjects
OPTIMIZATION algorithms ,EVOLUTIONARY algorithms ,BENCHMARK problems (Computer science) ,DECISION trees ,KNOWLEDGE transfer ,ALGORITHMS - Abstract
Multifactorial optimization (MFO) is a kind of optimization problem that has attracted considerable attention in recent years. The multifactorial evolutionary algorithm utilizes an implicit genetic transfer mechanism, characterized by knowledge transfer, to conduct evolutionary multitasking simultaneously. Therefore, the effectiveness of knowledge transfer significantly affects the performance of the algorithm. To achieve positive knowledge transfer, this paper proposes an evolutionary multitasking optimization algorithm with an adaptive transfer strategy based on a decision tree (EMT-ADT). To evaluate the useful knowledge contained in transferred individuals, this paper defines an evaluation indicator to quantify the transfer ability of each individual. Furthermore, a decision tree is constructed to predict the transfer ability of transferred individuals. Based on the prediction results, promising positive-transfer individuals are selected to transfer knowledge, which can effectively improve the performance of the algorithm. Finally, the CEC2017 MFO benchmark problems and the WCCI20-MTSO and WCCI20-MaTSO benchmark problems are used to verify the performance of the proposed EMT-ADT algorithm. Experimental results demonstrate the competitiveness of EMT-ADT compared with some state-of-the-art algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
137. A novel memristor-based chaotic image encryption algorithm with Hash process and S-box.
- Author
-
Shi, Hang, Yan, Dengwei, Wang, Lidan, and Duan, Shukai
- Subjects
IMAGE encryption ,ALGORITHMS ,ENTROPY (Information theory) - Abstract
Disseminating information by means of images has become more and more accepted by the public in recent years, while existing encryption algorithms for digital images have defects such as outdated operation and insufficient security. Therefore, a novel chaos-based encryption algorithm is proposed in this paper. Firstly, a 4-D chaotic system based on a flux-controlled memristor model is proposed, and an integrated analysis illustrates its chaotic characteristics. Then, a hash process is used to perturb the initial values of the chaotic system, which improves the plaintext sensitivity of the algorithm. After that, S-box substitution and bit-XOR operations are introduced to change the pixel values and further scramble their distribution. Finally, this paper demonstrates the effectiveness of the proposed encryption algorithm through information entropy, correlation, and other metrics, and comparative experiments show the algorithm's advantages over other algorithms in randomness and security. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
138. Autofocus algorithm using optimized Laplace evaluation function and enhanced mountain climbing search algorithm.
- Author
-
Jia, Dongyao, Zhang, Chuanwang, Wu, Nengkai, Zhou, Jialin, and Guo, Zhigang
- Subjects
SEARCH algorithms ,MOUNTAINEERING ,LAPLACIAN operator ,ALGORITHMS ,IMAGING systems ,IMAGE recognition (Computer vision) - Abstract
In the field of digital imaging systems, autofocus plays an increasingly vital role as a key technology, yet it remains challenging due to noisy backgrounds and slow focusing speed. This paper presents a new focusing algorithm based on an improved Laplacian operator and a mountain-climb (hill-climbing) search algorithm. Since a focused image shows greater gray-scale differences than an unfocused one, an image definition evaluation function combining local variance and the Laplacian operator is proposed. Borrowing from the advantages of two-stage recognition in deep-learning image recognition, a two-stage search algorithm based on mountain-climb search is designed to better fit the focusing curve near the extremum of the focus evaluation function: the improved mountain-climb search is divided into rough focusing and fine focusing. Rough focusing determines a small focus area, and fine focusing based on function approximation then greatly improves the efficiency of locating the focus position. The experimental results indicate that the algorithm in this paper is superior to the traditional algorithm in time and accuracy, and the autofocus time is reduced by 76%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
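The two ingredients the abstract names, a Laplacian-based sharpness function and a coarse-then-fine search over focus positions, can be sketched as follows. Both are simplifications: the paper combines local variance with the Laplacian, and fits the focus curve near the extremum instead of scanning the fine bracket exhaustively.

```python
import numpy as np

def sharpness(img):
    # Variance-of-Laplacian focus measure (a plain stand-in for the
    # paper's combined local-variance/Laplacian evaluation function).
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def two_stage_search(positions, score, coarse_step=8):
    # Rough focusing: evaluate every coarse_step-th focus position.
    best = max(positions[::coarse_step], key=score)
    # Fine focusing: scan the bracket around the coarse winner.
    lo = max(positions[0], best - coarse_step)
    hi = min(positions[-1], best + coarse_step)
    return max(range(lo, hi + 1), key=score)

# Stand-in unimodal focus curve peaking at position 37.
pos = two_stage_search(list(range(100)), lambda p: -((p - 37) ** 2))
```

The coarse pass calls the score function only N/coarse_step times, and the fine pass only 2·coarse_step + 1 times, which is where the speedup over a full scan comes from.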
139. Editorial Foreword To Three Ukrainian/ Russian Papers in Decision Science.
- Author
-
Munier, Bertrand
- Subjects
SCIENCE ,DECISION making ,ALGORITHMS ,TECHNOLOGY - Abstract
Talks about two Ukrainian papers and a Russian paper on decision science. Argument of L. I. Krechetov about using the algorithms which compare the constraint relaxation with the change in the welfare function; Technology reliability issues discussed by A. N. Golodnikov, P. S. Knopov and V. A. Pepelyaev.
- Published
- 2004
- Full Text
- View/download PDF
140. A Note on the Paper 'Regularization Proximal Point Algorithm for Common Fixed Points of Nonexpansive Mappings in Banach Spaces'.
- Author
-
Hang, Nguyen and Tuyen, Truong
- Subjects
- *FIXED point theory , *NONEXPANSIVE mappings , *BANACH spaces , *MATHEMATICAL regularization , *STOCHASTIC convergence , *ALGORITHMS - Abstract
In this note, a small gap is corrected in the assumptions of the main theorem of T.M. Tuyen (Theorem 3.1, Regularization proximal point algorithm for common fixed points of nonexpansive mappings in Banach spaces, J. Optim. Theory Appl., 152:351-365). We give another assumption, which allows us to obtain the strong convergence of the regularization algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
141. Paper-based dosing algorithms for maintenance of warfarin anticoagulation.
- Author
-
Wilson, Sarah, Costantini, Lorrie, and Crowther, Mark
- Abstract
We examined the quality of anticoagulation produced by two paper-based warfarin dosing algorithms in a randomized clinical trial of warfarin therapy. Fifty-eight patients were randomized to receive warfarin at a target international normalized ratio (INR) range of 2.1–3.0 and were followed for an average of 2.7 years. As a proportion of total patient-time, the percentage of time spent above, within, and below the therapeutic range was 11%, 71%, and 19% respectively. Fifty-six patients were randomized to receive warfarin at a higher target INR range (3.1–4.0) and had INRs within the therapeutic range for 40% of total patient time. We conclude that the performance, minimal cost, and ease-of-use of these algorithms make them well-suited for patient management within primary-care and research settings. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
142. Exploring heterogeneous information networks and random walk with restart for academic search.
- Author
-
Chiang, Meng-Fen, Liou, Jiun-Jiue, Wang, Jen-Liang, Peng, Wen-Chih, and Shan, Man-Kwan
- Subjects
INFORMATION networks ,RANDOM walks ,QUERYING (Computer science) ,INFORMATION storage & retrieval systems ,GRAPH theory ,ELECTRONIC information resource searching ,ALGORITHMS - Abstract
In this paper, we explore heterogeneous information networks in which each vertex represents one entity and the edges reflect linkage relationships. Heterogeneous information networks contain vertices of several entity types, such as papers, authors and terms, and hence can fully reflect multiple linkage relationships among different entities. Such a heterogeneous information network is similar to a mixed media graph (MMG). By representing a bibliographic dataset as an MMG, the performance obtained when searching relevant entities (e.g., papers) can be improved. Furthermore, our academic search enables multiple-entity search, where a variety of entity search results are provided, such as relevant papers, authors and conferences, via a one-time query. Explicitly, given a bibliographic dataset, we propose a Global-MMG, in which a global heterogeneous information network is built. When a user submits a query keyword, we perform a random walk with restart (RWR) to retrieve papers or other types of entity objects. To reduce the query response time, the algorithm Net-MMG (standing for NetClus-based MMG) is developed. Net-MMG first divides a heterogeneous information network into a collection of sub-networks and then performs a RWR on a set of selected relevant sub-networks. We implemented our academic search and conducted extensive experiments using the ACM Digital Library. The experimental results show that by exploring heterogeneous information networks and RWR, both Global-MMG and Net-MMG achieve better search quality compared with existing academic search services. In addition, Net-MMG has a shorter query response time while still guaranteeing good quality in search results. [ABSTRACT FROM AUTHOR]
- Published
- 2013
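The random walk with restart (RWR) at the core of the Global-MMG approach can be sketched on a toy heterogeneous graph. This is a generic RWR iteration, not the paper's implementation; the graph, node labels, and restart probability are illustrative assumptions.

```python
def rwr(adj, restart_node, alpha=0.15, iters=100):
    """Random walk with restart: iterate r = (1 - alpha) * P^T r + alpha * e."""
    nodes = sorted(adj)
    out_deg = {u: len(adj[u]) for u in nodes}
    r = {u: 1.0 / len(nodes) for u in nodes}  # uniform start
    for _ in range(iters):
        nxt = {u: 0.0 for u in nodes}
        for u in nodes:
            # spread u's probability mass evenly over its neighbors
            for v in adj[u]:
                nxt[v] += (1 - alpha) * r[u] / out_deg[u]
        nxt[restart_node] += alpha  # restart mass always returns to the query node
        r = nxt
    return r

# tiny heterogeneous toy graph: papers (p), an author (a), and terms (t)
graph = {
    "p1": ["a1", "t1"],
    "p2": ["a1", "t2"],
    "a1": ["p1", "p2"],
    "t1": ["p1"],
    "t2": ["p2"],
}
scores = rwr(graph, "t1")  # rank all entity types relative to query term t1
```

Paper p1, which links directly to the query term, ends up ranked above p2, which is the relevance behavior the abstract describes.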
143. A new implicit blending technique for volumetric modelling.
- Author
-
Bogdan Lipuš and Nikola Guid
- Subjects
PAPER ,ALGORITHMS ,ALGEBRA ,FOUNDATIONS of arithmetic - Abstract
Current implicit blending techniques are mostly designed for use in surface modelling, where only boundaries of the object defined by the implicit primitives are important. In contrast, in volumetric implicit modelling the interior of the object is also significant, which requires different and more suitable techniques for combining implicit primitives. In this paper, we first discuss irregularities that occur using the current techniques. Then, a new technique for blending implicit primitives, especially appropriate in volumetric modelling (e.g., cloud modelling), is introduced. It overcomes these irregularities and yields better results than current techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2005
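For context, the classical blending operators this paper critiques can be sketched as follows. The soft-object field function, the sum blend, and Ricci's super-elliptic blend shown here are standard textbook forms, not the paper's new technique; the primitives and exponents are illustrative.

```python
def sphere_field(cx, cy, cz, R):
    """Implicit primitive: field > 0 inside, 0 on and beyond the surface."""
    def f(x, y, z):
        d2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
        return max(0.0, 1.0 - d2 / (R * R))
    return f

def blend_sum(f1, f2):
    """Additive blend: fields simply accumulate."""
    return lambda x, y, z: f1(x, y, z) + f2(x, y, z)

def blend_ricci(f1, f2, n=2.0):
    """Ricci's super-elliptic blend: approaches max() as n grows."""
    return lambda x, y, z: (f1(x, y, z) ** n + f2(x, y, z) ** n) ** (1.0 / n)

a = sphere_field(0.0, 0.0, 0.0, 1.0)
b = sphere_field(1.0, 0.0, 0.0, 1.0)
```

In surface modelling only the zero level set matters, but in volumetric use the interior values produced by these operators differ markedly (the sum blend inflates interior density where primitives overlap), which is the kind of irregularity the abstract refers to.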
144. Trend analysis of global usage of digital soil mapping models in the prediction of potentially toxic elements in soil/sediments: a bibliometric review.
- Author
-
Agyeman, Prince Chapman, Ahado, Samuel Kudjo, Borůvka, Luboš, Biney, James Kobina Mensah, Sarkodie, Vincent Yaw Oppong, Kebonye, Ndiye M., and Kingsley, John
- Subjects
DIGITAL soil mapping ,TREND analysis ,PREDICTION models ,GLOBAL analysis (Mathematics) ,SOIL pollution - Abstract
The rising and continuous pollution of soil from anthropogenic activities is of great concern. Owing to this concern, digital soil mapping (DSM) has become a tool that soil scientists use to predict potentially toxic element (PTE) content in the soil. The purpose of this paper was to review articles, summarize and analyse the spatial prediction of potentially toxic elements, and determine and compare the models' usage as well as their performance over time. Through Scopus, the Web of Science and Google Scholar, we collected papers published between 2001 and the first quarter of 2019 on spatial PTE prediction using DSM approaches. The results indicated that soil pollution emanates from diverse sources. The review also examines why authors investigate a particular piece of land or area, highlighting the uncertainties in mapping, the number of publications per journal, and continental research efforts on trending issues regarding DSM. This paper reveals the complementary role machine learning algorithms and geostatistical models play in DSM. Nevertheless, geostatistical approaches remain the preferred models compared to machine learning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
145. An automatic algorithm for software vulnerability classification based on CNN and GRU.
- Author
-
Wang, Qian, Li, Yazhou, Wang, Yan, and Ren, Jiadong
- Subjects
RECURRENT neural networks ,CONVOLUTIONAL neural networks ,SEMANTICS ,AUTOMATIC classification ,ALGORITHMS - Abstract
In order to improve the management efficiency of software vulnerability classification, reduce the risk of systems being attacked and destroyed, and save the cost of vulnerability repair, this paper proposes an automatic algorithm for software vulnerability classification based on a convolutional neural network (CNN) and a gated recurrent unit network (GRU), called SVC-CG. It fuses the CNN and GRU models according to their complementary strengths: the CNN is good at extracting local vector features of vulnerability text, while the GRU is good at extracting global features related to the context of vulnerability text. Merging the features extracted by the complementary models represents semantic and grammatical information more accurately. First, the Skip-gram language model based on Word2Vec is used to train and generate word vectors, mapping the words in each vulnerability text into a space of limited dimensions to represent semantic information. Then the CNN extracts the local features of the text vector, the GRU extracts the global features related to the text context, and the two are combined into the SVC-CG neural network algorithm to realize automatic classification of vulnerabilities. The experiments use vulnerability data from the National Vulnerability Database (NVD) to train and evaluate the SVC-CG algorithm. Experimental comparison and analysis show that the SVC-CG algorithm proposed in this paper performs well on macro recall, macro precision and macro F1-score. [ABSTRACT FROM AUTHOR]
- Published
- 2022
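The gating that lets a GRU carry global context across a sequence can be illustrated with a minimal single-feature GRU step. This is a generic GRU cell, not the SVC-CG model; the scalar weights and input sequence are arbitrary illustrative values.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, W):
    """One GRU step on scalar input x and hidden state h."""
    z = sigmoid(W["wz"] * x + W["uz"] * h + W["bz"])          # update gate
    r = sigmoid(W["wr"] * x + W["ur"] * h + W["br"])          # reset gate
    n = math.tanh(W["wn"] * x + W["un"] * (r * h) + W["bn"])  # candidate state
    return (1 - z) * n + z * h  # blend previous state with candidate

# arbitrary illustrative weights; a real model learns these from data
W = {"wz": 0.5, "uz": 0.3, "bz": 0.0,
     "wr": 0.4, "ur": 0.2, "br": 0.0,
     "wn": 0.9, "un": 0.6, "bn": 0.0}

h = 0.0
for x in [1.0, -0.5, 2.0, 0.0]:
    h = gru_cell(x, h, W)  # hidden state accumulates context over the sequence
```

The update gate `z` decides how much of the old state to keep, which is what allows the GRU branch of a CNN+GRU model to capture long-range context that the CNN's local filters miss.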
146. A novel multi-objective optimization of 3D printing adaptive layering algorithm based on improved NSGA-II and fuzzy set theory.
- Author
-
Wang, Xiaoqi and Cao, Jianfu
- Subjects
THREE-dimensional printing ,SET theory ,PARETO optimum ,ALGORITHMS ,ERROR rates ,FUZZY sets ,EVOLUTIONARY algorithms - Abstract
Uniform equal-thickness layering is widely used in 3D printing, but it cannot balance printing quality against printing efficiency. In this paper, a new adaptive layering algorithm based on multi-objective optimization is proposed for this problem. The algorithm comprehensively considers the surface features of the model and the slope and curvature of the contour, and establishes a multi-objective optimization model with print quality, print time, and feature constraints. The Pareto optimal solution set is computed by the improved non-dominated sorting genetic algorithm II (NSGA-II), and the Pareto optimal solution that meets different printing requirements is selected by a fuzzy weighted-membership ranking method. In comparative experiments, the method proposed in this paper reduces the volume error rate by 40.9% and the printing time by 33.3% compared with uniform layering, effectively improving printing quality and efficiency. Compared with existing adaptive layering algorithms, it also offers good comprehensive performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
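The fast non-dominated sorting step at the heart of NSGA-II can be sketched in a few lines. This is the standard sorting procedure for minimization problems, not the paper's improved variant; the sample objective vectors (e.g., print time vs. volume error) are illustrative.

```python
def dominates(p, q):
    """p dominates q if p is no worse in every objective and better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def fast_nondominated_sort(points):
    """Partition solutions into Pareto fronts (front 0 = non-dominated set)."""
    fronts = [[]]
    S = [[] for _ in points]  # indices each solution dominates
    n = [0] * len(points)     # how many solutions dominate this one
    for i, p in enumerate(points):
        for j, q in enumerate(points):
            if dominates(p, q):
                S[i].append(j)
            elif dominates(q, p):
                n[i] += 1
        if n[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:  # all its dominators already placed in earlier fronts
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# objectives to minimize, e.g. (print time, volume error)
fronts = fast_nondominated_sort([(1, 2), (2, 1), (2, 2), (3, 3)])
```

NSGA-II then ranks candidate layerings by front membership (plus crowding distance), and a fuzzy membership scheme like the paper's picks one solution from the first front.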
147. Learning by Doing: Integrating Shape Grammar as a Visual Coding Tool in Architectural Curricula.
- Author
-
El-Mahdy, Deena
- Subjects
GRAMMAR ,ARCHITECTURE students ,CURRICULUM ,TEACHING methods ,ARCHITECTURAL studios - Abstract
Computational design and shape grammars hold growing appeal in architectural curricula. This paper aims to assess shape grammars as a visual teaching method that integrates manual exploration based on "learning by doing" in early education curricula without digital software. As a primary outcome of a visual design course at the British University in Egypt, four self-structured pavilions were fabricated by first-year architecture students. Experimentation occurs through a process of visual computing that develops a deeper understanding of material qualities. A comparison of design parameters is conducted through hands-on experiments covering design principles, unit transformations, connection types, the assembly process, and functional aspects. The paper examines the skills that students acquired during the course. This study concludes that by applying shape grammars in design studios, students are adequately prepared at the foundational level for the transition to learning computational design. [ABSTRACT FROM AUTHOR]
- Published
- 2022
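A shape grammar is a set of rewrite rules applied repeatedly to a design. A heavily simplified symbolic sketch of that derivation process might look like the following; real shape grammars match and replace geometry spatially rather than labels, and the rules and shape names here are invented for illustration.

```python
# Each rule maps a labeled shape to its replacement(s); repeatedly applying
# the rules "derives" a design, mirroring the manual transformations
# (duplicate, rotate, simplify) that students perform by hand.
rules = {
    "square": ["square", "rotated_square"],  # rule 1: duplicate and rotate
    "rotated_square": ["triangle"],          # rule 2: simplify
}

def derive(shapes, steps):
    """Apply every matching rule to every shape, `steps` times."""
    for _ in range(steps):
        out = []
        for s in shapes:
            out.extend(rules.get(s, [s]))  # shapes with no rule are kept as-is
        shapes = out
    return shapes
```

Tracing `derive(["square"], 2)` by hand reproduces the kind of stepwise visual computation the course asks students to do on paper before they ever touch digital tools.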
148. Journal of Global Optimization Best Paper Award for a paper published in 2013.
- Author
-
Butenko, Sergiy
- Subjects
PERIODICAL articles ,ALGORITHMS ,AWARDS - Abstract
The article announces the Journal of Global Optimization (JOGO) Best Paper Award to the paper "GloMIQO: Global mixed-integer quadratic optimizer" published in 2013.
- Published
- 2014
149. A message passing strategy for array redistributions in a torus network.
- Author
-
Souravlas, Stavros and Roumeliotis, Manos
- Subjects
BROADCASTING industry ,PARALLEL computers ,SCALABILITY ,SYSTEMS design ,MATHEMATICAL series ,ALGORITHMS ,CORPORATE reorganizations ,MEMORY ,PAPER - Abstract
The array redistribution problem occurs in many important applications in parallel computing. In this paper, we consider this problem in a torus network. Tori are preferred to other multidimensional networks (like hypercubes) due to their better scalability (IEEE Trans. Parallel Distrib. Syst. 50(10), 1201–1218, []). We present a message-combining approach that splits any array redistribution problem into a series of broadcasts in which all sources send messages of the same size, so a balanced traffic load is achieved. Unlike existing array redistribution algorithms, the scheme introduced in this work eliminates the need for data reorganization in the memory of the source and target processors. Moreover, the processing of the scheduled broadcasts is pipelined, so the total cost of redistribution is reduced. [ABSTRACT FROM AUTHOR]
- Published
- 2008
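The balanced-message-size property the abstract describes can be illustrated with a plain BLOCK-to-CYCLIC redistribution, where every source owner sends an equally sized message to every target owner. This sketch ignores the paper's torus topology and broadcast scheduling; the array size and processor count are illustrative, and they are chosen so the block size is a multiple of the processor count.

```python
from collections import defaultdict

def block_to_cyclic_messages(n_elems, n_procs):
    """Group array elements by (block owner -> cyclic owner) to form messages."""
    block = n_elems // n_procs
    msgs = defaultdict(list)
    for i in range(n_elems):
        src = i // block     # owner under the BLOCK distribution
        dst = i % n_procs    # owner under the CYCLIC distribution
        msgs[(src, dst)].append(i)
    return msgs

msgs = block_to_cyclic_messages(64, 4)
sizes = {len(v) for v in msgs.values()}  # every (src, dst) message is equal-sized
```

Because each source contributes the same number of elements to each destination, the traffic load is balanced, which is the precondition that lets a scheme like the paper's pipeline the resulting broadcasts.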
150. A Motion-Adaptive Deinterlacer via Hybrid Motion Detection and Edge-Pattern Recognition.
- Author
-
Gwo Giun Lee, Ming-Jiun Wang, Hsin-Te Li, and He-Yuan Lin
- Subjects
ALGORITHMS ,MOTION ,TEXTURES ,PAPER ,INTERPOLATION ,VIDEO recording ,EXPERIMENTS - Abstract
A novel motion-adaptive deinterlacing algorithm with edge-pattern recognition and hybrid motion detection is introduced. The great variety of video content makes processing assorted motion, edges, textures, and combinations of them very difficult with a single algorithm. The edge-pattern recognition algorithm introduced in this paper is flexible enough to process both textures and edges, which previously had to be handled separately by line average and edge-based line average. Moreover, predicting the neighboring pixels for pattern analysis and interpolation further enhances the adaptability of the edge-pattern recognition unit when motion detection is incorporated. Our hybrid motion detection accurately detects fast and slow motion in interlaced video, as well as motion along edges. Using only three fields for detection also yields higher temporal correlation for interpolation. Subjective and objective experimental results on CIF and PAL video sequences show that our deinterlacing algorithm achieves higher content adaptability and lower memory cost than state-of-the-art 4-field motion detection algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2008
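A minimal per-pixel version of motion-adaptive deinterlacing (weave when static, line-average when moving) might look like the following. This is a generic three-field rule, not the paper's hybrid detector or edge-pattern recognition; the motion threshold and sample fields are illustrative.

```python
def deinterlace_pixel(prev_field, cur_field, next_field, x, y, thresh=10):
    """Reconstruct the missing pixel at (x, y) of the current field.

    prev_field and next_field have the same parity and contain line y;
    cur_field contains the adjacent lines y - 1 and y + 1.
    """
    motion = abs(prev_field[y][x] - next_field[y][x])
    if motion <= thresh:
        # static: temporal interpolation (weave) preserves full detail
        return (prev_field[y][x] + next_field[y][x]) // 2
    # moving: intra-field line average of the lines above and below
    return (cur_field[y - 1][x] + cur_field[y + 1][x]) // 2

# tiny 3x3 luma fields for illustration
field_a = [[100, 100, 100], [100, 100, 100], [100, 100, 100]]
field_b = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
field_c = [[200, 200, 200], [200, 200, 200], [200, 200, 200]]
cur = [[50, 50, 50], [0, 0, 0], [70, 70, 70]]

static_px = deinterlace_pixel(field_a, cur, field_a, 1, 1)  # no motion: weave
moving_px = deinterlace_pixel(field_b, cur, field_c, 1, 1)  # motion: line average
```

A practical deinterlacer would replace the plain line average with edge-directed interpolation on moving pixels, which is the role the paper's edge-pattern recognition unit plays.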