423 results
Search Results
2. Efficient Parameter Server Placement for Distributed Deep Learning in Edge Computing.
- Author
-
Wu, Yalan, Yan, Jiaquan, Chen, Long, Wu, Jigang, and Li, Yidong
- Subjects
APPROXIMATION algorithms, DEEP learning, EDGE computing, NP-hard problems, PROBLEM solving, ALGORITHMS - Abstract
Parameter server (PS) placement is one of the most important factors in global model training for distributed deep learning. This paper formulates a novel problem for the placement strategy of PSs under dynamically available storage capacity, with the objective of minimizing the training time of distributed deep learning under constraints on storage capacity and the number of local PSs. We then prove that the proposed problem is NP-hard. The training epochs are divided into two parts, i.e. the first epoch and the remaining epochs. For the first epoch, an approximation algorithm and a rounding algorithm are proposed to solve the problem. For the remaining epochs, an adjustment algorithm is proposed that continuously adjusts the PS placement decisions to decrease the training time of the global model. Simulation results show that the proposed approximation and rounding algorithms outperform existing works in all cases in terms of global model training time. Meanwhile, the training time achieved by the proposed approximation algorithm is very close to that of the optimal solution generated by a brute-force approach in all cases. Besides, the integrated algorithm outperforms existing works when the available storage capacity varies during training. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
3. EmailDetective: An Email Authorship Identification And Verification Model.
- Author
-
Fang, Yong, Yang, Yue, and Huang, Cheng
- Subjects
ATTRIBUTION of authorship, AUTHORSHIP, ANONYMOUS authors, EMAIL, ALGORITHMS, IDENTIFICATION - Abstract
Emails are often used for illegal cybercrime today, so it is important to verify the identity of an email's author. This paper proposes a general model for anonymous email author attribution, which can be used for both email authorship identification and email authorship verification. The first task is to find the author of an anonymous email among many suspected targets. The second is to verify whether an email was written by its claimed sender. This paper extracts features from the email header and body and analyzes the writing style and other behaviors of email authors. Author behaviors are extracted from email headers through a statistical algorithm, while the author's writing style in the email body is extracted by a sequence-to-sequence bidirectional long short-term memory (BiLSTM) algorithm. The model combines multiple factors to solve the anonymous email author attribution problem. Experiments show that the accuracy and other indicators of the proposed model are better than those of other methods. In the email authorship verification experiment, the average accuracy, average recall and average F1-score all reached 89.9%. In the email authorship identification experiment, the model's accuracy is 98.9% for 10 authors, 92.9% for 25 authors and 89.5% for 50 authors. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
4. Efficient Algorithms For Storage Load Balancing Of Outsourced Data In Blockchain Network.
- Author
-
Liu, Tonglai, Wu, Jigang, Li, Jiaxing, Li, Jingyi, and Zhang, Zikai
- Subjects
BLOCKCHAINS, TABU search algorithm, CRYPTOCURRENCIES, HEURISTIC algorithms, ALGORITHMS, GENETIC algorithms, STORAGE - Abstract
Decentralized data storage is one of the typical applications of blockchain networks. However, most existing works neglect the storage balancing problem in the blockchain network, which has an immediate impact on the availability and stability of the network. Therefore, this paper proposes a storage balancing problem for non-local data storage in the blockchain network and proves that the problem is non-deterministic polynomial-time (NP)-hard. The criterion of storage balance is established by a balanced coefficient in the proposed scheme. A heuristic matching algorithm (HMA), a genetic algorithm (GA) and a tabu search algorithm (TSA) are customized to solve the imbalanced storage problem formalized in this paper. Compared with our previous fast matching algorithm (FMA), experimental results demonstrate that HMA achieves better performance in terms of accuracy, computation overhead and storage overhead. Specifically, the computation overhead of HMA is lower than that of FMA by 84.45% on average, whereas the storage overhead of HMA is lower than that of FMA by 32.26% on average. Using the initial solution of HMA, TSA achieves the highest accuracy among GA, TSA and moth-flame optimization (MFO); the same holds when TSA uses the initial solution of FMA. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
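The abstract above quantifies balance via a "balanced coefficient" without stating its form. As a rough illustration only (this is not the paper's HMA, and the coefficient-of-variation metric below is an assumption), a greedy sketch that assigns each outsourced block to the least-loaded node looks like:

```python
# Hypothetical greedy sketch of storage load balancing: assign each
# outsourced data block to the currently least-loaded node, then measure
# balance as the coefficient of variation (std / mean) of node loads.
# The "balanced coefficient" used here is an assumption, not the paper's metric.
import statistics

def greedy_balance(block_sizes, num_nodes):
    loads = [0.0] * num_nodes
    assignment = []
    # Placing large blocks first tends to balance better.
    for size in sorted(block_sizes, reverse=True):
        node = min(range(num_nodes), key=lambda i: loads[i])
        loads[node] += size
        assignment.append((size, node))
    return loads, assignment

def balanced_coefficient(loads):
    mean = statistics.mean(loads)
    return statistics.pstdev(loads) / mean if mean else 0.0

loads, _ = greedy_balance([9, 7, 6, 5, 4, 3, 2, 1], num_nodes=3)
coeff = balanced_coefficient(loads)
```

A coefficient near zero indicates nearly equal per-node loads; the paper's heuristics additionally account for accuracy and computation overhead, which this sketch ignores.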
5. A Self-Stabilizing Distributed Algorithm for the Generalized Dominating Set Problem With Safe Convergence.
- Author
-
Kobayashi, Hisaki, Sudo, Yuichi, Kakugawa, Hirotsugu, and Masuzawa, Toshimitsu
- Subjects
DOMINATING set, DISTRIBUTED algorithms, ALGORITHMS - Abstract
A self-stabilizing distributed algorithm is guaranteed to eventually reach and stay at a legitimate configuration regardless of the initial configuration of the distributed system. In this paper, we propose the generalized dominating set problem, which is a generalization of the dominating set and |$k$|-redundant dominating set problems. In the generalized dominating set problem, each node |$P_{i}$| is given its set of domination wish sets, and a generalized dominating set is a set of nodes such that each node is contained in the set or has a wish set all of whose members are in the set. We propose a self-stabilizing distributed algorithm for finding a minimal generalized dominating set in an arbitrary network under the unfair distributed daemon. The proposed algorithm converges in |$O(n^{3}m)$| steps and |$O(n)$| rounds, where |$n$| (resp. |$m$|) is the number of nodes (resp. edges). Furthermore, it has the safe convergence property, with a safe convergence time of |$O(1)$| rounds. The space complexity of the proposed algorithm is |$O(\Delta \log n)$| bits per node, where |$\Delta $| is the maximum degree of the nodes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
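The definition in the abstract can be turned directly into a verification predicate. This sketch checks validity and minimality on a tiny made-up instance (the instance and helper names are illustrative, not from the paper; the self-stabilizing algorithm itself is not reproduced):

```python
# Sketch of the verification predicate from the abstract: a set S is a
# generalized dominating set if every node is in S or has at least one
# "domination wish set" entirely contained in S.
def is_generalized_dominating(nodes, wish_sets, S):
    S = set(S)
    return all(
        v in S or any(w <= S for w in wish_sets[v])
        for v in nodes
    )

def is_minimal(nodes, wish_sets, S):
    # Minimal: valid, and removing any single member breaks validity.
    S = set(S)
    if not is_generalized_dominating(nodes, wish_sets, S):
        return False
    return all(
        not is_generalized_dominating(nodes, wish_sets, S - {v}) for v in S
    )

nodes = [0, 1, 2, 3]
wish_sets = {
    0: [{1, 2}],        # node 0 is satisfied only if both 1 and 2 are chosen
    1: [{0}, {2}],
    2: [{1}],
    3: [{1, 2}, {0}],
}
ok = is_generalized_dominating(nodes, wish_sets, {1, 2})
```

Setting each node's wish sets to the singletons of its neighbors recovers the classic dominating set, which is the sense in which the problem generalizes it.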
6. Smart Multimedia Compressor—Intelligent Algorithms for Text and Image Compression.
- Author
-
Oswald, C and Sivaselvan, B
- Subjects
IMAGE compression, DATA compression, ALGORITHMS, DATA mining, GRAPHICAL user interfaces, COMPRESSORS - Abstract
A number of text compression algorithms have been proposed in the past decades; they have been very effective and usually operate on conventional character/word-based approaches. This research explores a novel data compression perspective of data mining, focusing on a frequent sequence/pattern mining approach to text compression. The work attempts to exploit longer-range correlations between words in languages to achieve better text compression. We propose a novel and efficient method that compresses any word-level text in a universal manner for corpora across domains, referred to as Universal Huffman Tree-based encoding. The major contribution of this work is the avoidance of code table communication to the decoder. Simulation results over benchmark datasets indicate that Universal Huffman encoding employing frequent sequence mining achieves improvements in compression in the range [20%, 89%] in reduced time. The paper also contributes a usable interface for data compression that employs the proposed frequent sequence mining-based data compression algorithm. The interface supports features such as feedback, consistency, usability, navigation, visual appeal, performance and accessibility on par with existing compression software. The work results in an intelligent data compression tool employing a knowledge engineering perspective. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
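For context, here is the baseline the abstract builds on: ordinary word-level Huffman coding, which builds a per-text tree from word frequencies (and therefore, unlike the paper's "Universal" variant, would need to ship its code table to the decoder). A minimal sketch:

```python
# Baseline word-level Huffman coding sketch, for illustration only.
import heapq
from collections import Counter

def huffman_codes(words):
    freq = Counter(words)
    # Heap entries: (frequency, tie_breaker, {word: code_so_far}).
    heap = [(f, i, {w: ""}) for i, (w, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-word text
        return {next(iter(freq)): "0"}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {w: "0" + c for w, c in c1.items()}
        merged.update({w: "1" + c for w, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

text = "the cat sat on the mat the cat".split()
codes = huffman_codes(text)
encoded = "".join(codes[w] for w in text)
```

The resulting code is prefix-free, and the most frequent word always receives a shortest code, which is what makes word-level (rather than character-level) coding attractive for text.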
7. Algorithm to Construct Completely Independent Spanning Trees in Line Graphs.
- Author
-
Wang, Yifeng, Cheng, Baolei, Fan, Jianxi, Qian, Yu, and Jiang, Ruofan
- Subjects
SPANNING trees, TREE graphs, TIMBERLINE, HYPERCUBES, MOLECULAR connectivity index, ALGORITHMS - Abstract
In the past few years, much importance and attention have been attached to completely independent spanning trees (CISTs). Many results, such as edge-disjoint Hamilton cycles, traceability, number of spanning trees, structural properties, topological indices, etc., have been obtained on line graphs, and researchers have applied the line graphs of some interconnection networks, such as generalized hypercubes, augmented cubes and crossed cubes, to data center networks such as SWCube, AQLCube and BCDC. Meanwhile, few results on CISTs have been reported for line graphs. In this paper, we establish the relation of edge-disjoint spanning trees in an interconnection network |$G'$| with its line graph |$G$| by proposing a general algorithm for the first time. By this method, more CISTs can be obtained compared with results in the literature. Then, the decrease of diameter is discussed, and simulation experiments are shown on the line graphs of hypercubes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
8. Improved Collision Detection Of MD5 Using Sufficient Condition Combination.
- Author
-
Shen, Yanzhao, Wu, Ting, Wang, Gaoli, Dong, Xinfeng, and Qian, Haifeng
- Subjects
ALGORITHMS - Abstract
Counter-cryptanalysis uses cryptanalytic techniques to detect cryptanalytic attacks. It was introduced by Stevens with a collision detection algorithm that detects whether a message is one of a colliding message pair constructed using a collision attack. Later, Stevens and Shumow improved collision detection against SHA-1 by using unavoidable conditions. However, there have been no results improving collision detection against MD5, due to its weak diffusion properties. In this paper, an improved collision detection algorithm against MD5 is proposed using 14-bit sufficient condition combinations. This leads to dividing the 223 classes into four sets. Each element belonging to the first two sets holds the same sufficient condition combination. Our new algorithm can classify 126 classes efficiently. The runtime is 28.6% of that of the previous collision detection method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
9. Human Activity Recognition Based On Video Summarization And Deep Convolutional Neural Network.
- Author
-
Kushwaha, Arati, Khare, Manish, Bommisetty, Reddy Mounika, and Khare, Ashish
- Subjects
HUMAN activity recognition, DEEP learning, ARTIFICIAL neural networks, COMPUTER software reusability, ALGORITHMS - Abstract
In this technological era, human activity recognition (HAR) plays a significant role in several applications like surveillance, health services, the Internet of Things, etc. Recent advancements in deep learning and video summarization have motivated us to integrate these techniques for HAR. This paper introduces a computationally efficient HAR technique based on a deep learning framework, which works well in realistic and multi-view environments. Deep convolutional neural networks (DCNNs) normally suffer from different constraints, including data size dependencies, computational complexity, overfitting, training challenges and vanishing gradients. Additionally, with the use of advanced mobile vision devices, the demand for computationally efficient HAR algorithms that require limited computational resources is high. To address these issues, we integrated DCNNs with video summarization using keyframes. The proposed technique enhances performance with efficient resource utilization. First, we designed a lightweight and computationally efficient deep learning architecture based on the concept of identity skip connections (feature reusability), which mitigates gradient attenuation and can handle the enormous complexity of activity classes. Subsequently, we employed an efficient keyframe extraction technique to minimize redundancy and succinctly encapsulate the entire video content in fewer frames. To evaluate the efficacy of the proposed method, we performed experiments on several publicly available datasets. The performance of the proposed method is measured in terms of Precision, Recall, F-Measure and Classification Accuracy. The experimental results demonstrate the superiority of the presented algorithm over existing state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Privacy-Preserving Breast Cancer Prediction Based on Logistic Regression.
- Author
-
Chen, Shuangquan, Li, Jinguo, Zhang, Kai, Di, Aoran, and Lu, Mengli
- Subjects
MACHINE learning, ALGORITHMS, BREAST cancer, LOGISTIC regression analysis - Abstract
With the increasing strain on today's healthcare resources, there is a growing demand for pre-diagnosis testing. In response, researchers have suggested diverse machine learning models for disease prediction, among which logistic regression stands out as one of the most effective; the objective is to enhance the accuracy and efficiency of pre-diagnosis testing, thereby alleviating the burden on healthcare resources. However, when multiple medical institutions collaborate to train models, an untrusted cloud server may pose a risk of private data leakage, enabling participants to steal data from one another. Existing privacy-preserving methods often suffer from drawbacks such as high communication costs, long training times and a lack of security proofs. It is therefore imperative to train an excellent model collaboratively while upholding data privacy. In this paper, we develop a highly optimized two-party logistic regression algorithm based on the CKKS scheme. The algorithm optimizes ciphertext operations by employing ciphertext segmentation and minimizing the multiplication depth, resulting in time savings. Furthermore, it uses least squares to approximate the sigmoid function within specific intervals, since the function itself cannot be handled by homomorphic encryption. Finally, the proposed algorithm is evaluated on a breast cancer dataset, and simulation experiments demonstrate that the model's prediction accuracy, after machine learning training, exceeds 96% on two-sided encrypted data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
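The least-squares idea mentioned in the abstract can be shown in a few lines: approximate the sigmoid on a fixed interval by a low-degree polynomial, since CKKS supports only additions and multiplications. The degree and interval below are illustrative choices, not the paper's:

```python
# Least-squares polynomial approximation of the sigmoid on [-5, 5],
# the kind of replacement needed before evaluating it under CKKS-style
# homomorphic encryption (which supports only + and *).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

xs = np.linspace(-5, 5, 1001)
coeffs = np.polyfit(xs, sigmoid(xs), deg=3)  # degree-3 least-squares fit
poly = np.poly1d(coeffs)

max_err = float(np.max(np.abs(poly(xs) - sigmoid(xs))))
```

A higher degree shrinks the approximation error but deepens the multiplication circuit, which is exactly the trade-off the paper's depth-minimization targets.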
11. Hardware Addition Over Finite Fields Based On Booth–Karatsuba Algorithm.
- Author
-
Perez, J Ayuso
- Subjects
ALGORITHMS, INTEGERS, FINITE fields - Abstract
Two algorithms, both based around multiplication, one defined by Andrew Donald Booth in 1950 and the other by Anatoly Alexeevitch Karatsuba in 1960, can be applied to other types of operations. Recent results show that, for some algebraic transformations, it is more efficient to calculate an inverse effect of smaller magnitude together with an original action of higher rank than to perform the initial operation, reaching the final element with fewer transitions. In this paper, we present an addition algorithm on |$\mathbb {Z}/m\mathbb {Z}$| for big-integer numbers based on these concepts, using an alternative to the traditional hardware implementation of binary addition based on FULL-ADDER cells. This allows a reduction in space complexity compared with other techniques, such as carry-lookahead, letting us calculate a modular addition in an optimal order complexity of |$\mathcal {O}(n)$| without adding more complexity due to reduction operations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Service Function Placement Optimization For Cloud Service With End-to-End Delay Constraints.
- Author
-
Yan, Guofeng, Su, Zhengwen, Tan, Hengliang, and Du, Jiao
- Subjects
CLOUD computing, MARKOV processes, STOCHASTIC analysis, ALGORITHMS, LOAD balancing (Computer networks) - Abstract
Network function virtualization (NFV) has been proposed to enable flexible management and deployment of network services in the cloud. In an NFV architecture, a network service needs to invoke several service functions (SFs) in a particular order following the service function chain. The placement of SFs has a significant impact on the performance of network services. However, the stochastic nature of network service arrivals and departures, as well as meeting the end-to-end Quality of Service (QoS), makes the SF placement problem even more challenging. In this paper, we first provide a system architecture for the SF placement of cloud services with end-to-end QoS deadlines. We then formulate end-to-end service placement as a Markov decision process (MDP) that aims to minimize the placement cost and the end-to-end delay. In our MDP, the end-to-end delay of active services in the network is the state of the system, and the placement decision (placement or non-placement) of an SF is the action. We also discuss the rationality of our analytical model by analyzing the Markov stochastic property of end-to-end service placement. To obtain the optimal placement policy, we propose an algorithm (Algorithm 1) for dynamic SF placement based on our model and use successive approximations, i.e. the |$\epsilon $|-iteration algorithm (Algorithm 2), to obtain the action distribution. Finally, we evaluate the proposed MDP by comparing our optimal method with DDQP, DRL-QOR, MinPath and MinDelay for QoS optimization, including acceptance probability, average delay, resource utilization, load balancing and reliability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
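A generic successive-approximation scheme of the kind the abstract invokes is value iteration. The toy MDP below (states = delay level, actions = place or not place an SF, all probabilities and costs made up, not the paper's model) shows the mechanics:

```python
# Generic value-iteration sketch for a cost-minimizing MDP. States,
# actions, transitions and costs are toy values for illustration only.
import itertools

# states: 0 = low end-to-end delay, 1 = high delay
# actions: 0 = do not place the SF, 1 = place the SF (incurs placement cost)
P = {  # P[(s, a)] = [(prob, next_state), ...]
    (0, 0): [(0.7, 0), (0.3, 1)],
    (0, 1): [(0.9, 0), (0.1, 1)],
    (1, 0): [(0.4, 0), (0.6, 1)],
    (1, 1): [(0.8, 0), (0.2, 1)],
}
cost = {(0, 0): 0.0, (0, 1): 2.0, (1, 0): 5.0, (1, 1): 6.0}
gamma = 0.9  # discount factor

def value_iteration(eps=1e-8):
    V = {0: 0.0, 1: 0.0}
    for _ in itertools.count():
        Q = {
            (s, a): cost[(s, a)] + gamma * sum(p * V[t] for p, t in P[(s, a)])
            for (s, a) in P
        }
        newV = {s: min(Q[(s, 0)], Q[(s, 1)]) for s in V}
        if max(abs(newV[s] - V[s]) for s in V) < eps:
            policy = {s: min((0, 1), key=lambda a: Q[(s, a)]) for s in V}
            return newV, policy
        V = newV

V, policy = value_iteration()
```

With these numbers the optimal policy places the SF only when delay is already high, paying the placement cost to steer the chain back to the low-delay state.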
13. An Improved Density Peaks Clustering Algorithm Based On Density Ratio.
- Author
-
Zou, Yujuan, Wang, Zhijian, Xu, Pengfei, and Lv, Taizhi
- Subjects
ALGORITHMS, K-nearest neighbor classification, CLUSTERING of particles, COLLOIDAL stability, DECISION trees - Abstract
Density peaks clustering (DPC) is a relatively new density-based clustering algorithm. It is based on the idea that cluster centers have relatively high local densities and are relatively far from points with higher densities. With this idea, a decision graph can be drawn, and cluster centers can be chosen easily with its aid. However, the algorithm has its own weaknesses: because it calculates local density and allocates points based on the distances between certain points, it has difficulty classifying points into proper groups when clusters have varying densities or nested structures. This paper proposes an improved density peaks clustering algorithm called Dratio-DPC to overcome this weakness. First, Dratio-DPC adjusts the original local density with a coefficient calculated from the density ratio. Second, Dratio-DPC takes density similarity into consideration when calculating the distances between a point and other points with higher local densities. We design and perform experiments on different benchmark datasets and compare the clustering results of Dratio-DPC, traditional clustering algorithms and three improved DPC algorithms. Comparison results show that Dratio-DPC is effective and applicable to a wider range of scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
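The decision quantities of classic DPC, which Dratio-DPC refines, are easy to sketch: each point's local density rho and delta, its distance to the nearest point of higher density. Centers are points where both are large. This is a minimal illustration of the baseline only (Gaussian density kernel assumed; the density-ratio adjustment is not reproduced):

```python
# Classic DPC decision quantities on a toy dataset: two tight groups
# plus one outlier. Cluster centers maximize gamma = rho * delta.
import math

def dpc_decision(points, d_c):
    """Gaussian local density rho and distance-to-higher-density delta."""
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    rho = [sum(math.exp(-(dist[i][j] / d_c) ** 2) for j in range(n) if j != i)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        # The densest point gets the largest distance by convention.
        delta.append(min(higher) if higher else max(dist[i]))
    return rho, delta

pts = [(0, 0), (0, 1), (1, 0),                       # group A
       (10, 10), (10, 11), (11, 10), (10.5, 10.5),   # group B
       (5, 30)]                                      # outlier
rho, delta = dpc_decision(pts, d_c=2.0)
gamma = [r * d for r, d in zip(rho, delta)]
centers = sorted(range(len(pts)), key=lambda i: gamma[i], reverse=True)[:2]
```

Note that the outlier gets a huge delta but near-zero rho, so it never wins on gamma; that interplay is what the decision graph visualizes.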
14. MBSO Algorithm For Handling Energy-Throughput Trade-Off In Cognitive Radio Networks.
- Author
-
Ramchandran, M and Ganesh, E N
- Subjects
COGNITIVE radio, RADIO networks, ALGORITHMS, ERROR rates, BRAINSTORMING, EVOLUTIONARY algorithms - Abstract
Cognitive radio networks (CRNs) depend on opportunistic spectrum access and spectrum sensing to improve the spectrum efficiency of wireless networks. Since throughput maximization can result in high energy consumption, the spectrum sensing technique should address the energy-throughput trade-off. The spectrum sensing time has to be determined by considering the residual battery energy of each secondary user (SU). Primary user (PU) interference degrades the throughput of the entire network; hence, the transmit power level should be determined by considering the PU interference and SU battery energy. This paper proposes the multi-objective brain storm optimization (MBSO) algorithm for handling the energy-throughput trade-off in CRNs. In this work, the sensing time is adaptively determined based on the residual battery energy of the SUs, and the transmit power is determined based on the energy level of the PU signal and the residual battery energy of the SUs. A multi-objective optimization problem is formulated to maximize the throughput and minimize the packet error rate (PER), and it is solved by applying the MBSO algorithm. Experimental results show that MBSO attains improved throughput and higher residual energy with a lower PER. The spectrum sensing performance is enhanced, with a higher probability of detection. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
15. Hidden Markov Model-based Load Balancing in Data Center Networks.
- Author
-
He, Binjie, Zhang, Dong, and Zhao, Chang
- Subjects
SERVER farms (Computer network management), HIDDEN Markov models, ALGORITHMS, ACCURACY of information, LOAD balancing (Computer networks) - Abstract
Modern data centers provide multiple parallel paths for end-to-end communications. Recent studies have examined how to allocate rational paths to data flows to increase the throughput of data center networks. A centralized load balancing algorithm can improve the rationality of path selection by using path bandwidth information. However, to ensure the accuracy of this information, current centralized load balancing algorithms monitor all the link bandwidth information along a path to determine the path bandwidth. Because the controller must monitor an excessive amount of link bandwidth information, much time is consumed, which is unacceptable for modern data centers. This paper proposes an algorithm called hidden Markov model-based load balancing (HMMLB). HMMLB utilizes the hidden Markov model (HMM) to select paths for data flows with fewer monitored links and less time cost while achieving approximately the same network throughput rate as a traditional centralized load balancing algorithm. To generate HMMLB, this research first turns the path selection problem into an HMM problem, then deploys a traditional centralized load balancing algorithm in the data center topology to collect training data, and finally trains the HMM with the collected data. Through simulation experiments, this paper verifies HMMLB's effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
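The core inference step behind any such HMM formulation is decoding hidden states from partial observations. As a toy illustration only (treating a path's hidden congestion level as the state and coarse probe results as observations; every probability below is made up, not from the paper), Viterbi decoding looks like:

```python
# Toy Viterbi sketch: infer the most likely hidden congestion sequence
# from observed probe results. All probabilities are illustrative.
def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for o in obs[1:]:
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda r: V[-2][r] * trans_p[r][s])
            V[-1][s] = V[-2][prev] * trans_p[prev][s] * emit_p[s][o]
            back[-1][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for b in reversed(back[1:]):  # backtrack through stored predecessors
        path.append(b[path[-1]])
    return list(reversed(path))

states = ["idle", "congested"]
start_p = {"idle": 0.6, "congested": 0.4}
trans_p = {"idle": {"idle": 0.8, "congested": 0.2},
           "congested": {"idle": 0.3, "congested": 0.7}}
emit_p = {"idle": {"fast": 0.7, "slow": 0.3},
          "congested": {"fast": 0.2, "slow": 0.8}}
path = viterbi(["fast", "slow", "slow"], states, start_p, trans_p, emit_p)
```

The point of the HMM framing is exactly this: a few cheap observations stand in for exhaustive per-link monitoring.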
16. Quantum Attacks on 1K-AES and PRINCE.
- Author
-
Cai, Bin-Bin, Wu, Yusen, Dong, Jing, Qin, Su-Juan, Gao, Fei, and Wen, Qiao-Yan
- Subjects
PRINCES, CIPHERS, ALGORITHMS, PROBABILITY theory, MEMORY - Abstract
By introducing the BHT algorithm into the slide attack on 1K-AES and the related-key attack on PRINCE, we present the corresponding quantum attacks in this paper. In the proposed quantum attacks, we generalize the BHT algorithm to the situation where the number of marked items is unknown ahead of time. Moreover, we give an implementation scheme for the classifier oracle based on the Quantum Phase Estimation algorithm in the presented quantum attacks. The complexity analysis shows that the query complexity, time complexity and memory complexity of the presented quantum attacks are all |$\mathcal{O}(2^{n/3})$| when the success probability is about |$63\%$|, where |$n$| is the block size. Compared with the corresponding classical attacks, the proposed quantum attacks achieve a subquadratic speed-up under the same success probability in terms of query complexity, time complexity and memory complexity. Furthermore, the query complexity of the proposed quantum slide attack on 1K-AES is less than that of Grover search on 1K-AES by a factor of |$2^{n/6}.$| Compared with Grover search on PRINCE, the query complexity of the presented quantum attack on PRINCE is reduced from |$\mathcal{O}(2^{n})$| to |$\mathcal{O}(2^{n/2}).$| Compared with the combination of Grover's and Simon's algorithms on PRINCE, the query complexity of our quantum attack on PRINCE is reduced from |$\mathcal{O}(n\cdot 2^{n/2})$| to |$\mathcal{O}(2^{n/2}).$| Besides, the proposed quantum slide attack on 1K-AES indicates that quantum slide attacks can also be applied to the Substitution-Permutation Network construction, apart from the iterated Even-Mansour cipher and Feistel constructions. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
17. sasa: a SimulAtor of Self-stabilizing Algorithms.
- Author
-
Altisen, Karine, Devismes, Stéphane, and Jahier, Erwan
- Subjects
ALGORITHMS, STOCHASTIC models - Abstract
In this paper, we present sasa, an open-source SimulAtor of Self-stabilizing Algorithms. Self-stabilization is the ability of a distributed algorithm to recover after transient failures. sasa is implemented as a faithful representation of the atomic-state model (also called the locally shared memory model with composite atomicity). This model is the one most commonly used in the self-stabilizing area to prove both the correct operation of self-stabilizing algorithms and complexity bounds on them. sasa encompasses all the features necessary to debug, test and analyze self-stabilizing algorithms. All these facilities are programmable so that users can accommodate their particular needs. For example, asynchrony is modeled by programmable stochastic daemons playing the role of input sequence generators. Properties of algorithms can be checked using formal test oracles. The sasa distribution also provides several facilities for easily carrying out (batch-mode) simulation campaigns. We show that the lightweight design of sasa allows such huge campaigns to be performed efficiently. Following a modular approach, we have aimed to base the design of sasa as much as possible on existing tools, including ocaml, dot and several tools developed in the Synchrone Group of the VERIMAG laboratory. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. A New Approach for Resource Recommendation in the Fog-Based IoT Using a Hybrid Algorithm.
- Author
-
Xu, Zhiwang, Qin, Huibin, Yang, Shengying, and Arefzadeh, Seyedeh Maryam
- Subjects
SERVICE-oriented architecture (Computer science), INTERNET of things, ALGORITHMS, RECOMMENDER systems, BEES algorithm, FOG, PARTICLE swarm optimization - Abstract
The Internet of things (IoT) is an architecture of connected physical objects; these objects can communicate with each other and transmit and receive data. Fog-based IoT is a distributed platform that provides reliable access to virtualized resources based on various technologies such as high-performance computing and service-oriented design. A fog recommender system is an intelligent engine that suggests suitable services for fog users with less answer time and more accuracy. With the rapid growth of file and information sharing, the importance of fog recommender systems has also increased. Besides, the resource management problem is challenging in fog-based IoT because of the fog's unpredictable and highly variable environment, and many current methods suffer from low accuracy of fog recommendations. Due to the problem's non-deterministic polynomial-time (NP)-hard nature, a new approach is presented for resource recommendation in the fog-based IoT using a hybrid optimization algorithm. The suggested method is simulated in the CloudSim environment. The experimental results show that the accuracy is improved by about 1–8% compared with the Cooperative Filtering method utilizing Smoothing and Fusing and the Artificial Bee Colony algorithm. The outcomes of the present paper are notable for scholars, and they supply insights into subsequent study domains in this field. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. A Hybrid Strategy Improved Whale Optimization Algorithm for Web Service Composition.
- Author
-
Ju, Chuanxiang, Ding, Hangqi, and Hu, Benjia
- Subjects
WEB services, MATHEMATICAL optimization, QUALITY of service, DIFFERENTIAL evolution, PROBLEM solving, METAHEURISTIC algorithms, ALGORITHMS - Abstract
With the rapid growth of the number of web services on the Internet, various service providers offer many similar services with the same function but different quality of service (QoS) attributes. Quickly selecting a service composition that meets users' QoS requirements from many candidate services is a key problem that urgently needs to be solved. Optimization of web service composition is an NP-hard issue, and intelligent optimization algorithms have become the mainstream method for solving this complex problem. This paper proposes a hybrid strategy improved whale optimization algorithm based on the concepts of chaos initialization, a nonlinear convergence factor and mutation. By maintaining a balance between exploration and exploitation, the problem of slow or premature convergence is overcome to a certain extent. To evaluate its performance more accurately, the proposed algorithm was first tested on a set of standard benchmarks. Afterwards, simulations were performed using a real web service quality dataset. Experimental results show that the proposed algorithm is better than the original version and other meta-heuristic algorithms on average, and they verify the feasibility and stability of web service composition optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. Trust-aware Task Allocation in Collaborative Crowdsourcing Model.
- Author
-
Donglai, Fu and Yanhua, Liu
- Subjects
CROWDSOURCING, QUALITY of service, TASKS, WORKFLOW, ALGORITHMS, WORKFLOW management, WORKFLOW management systems - Abstract
Task allocation plays a vital role in crowd computing by determining its performance. The power of crowd computing stems from the large number of workers potentially available to provide a high quality of service and reduce costs. An important challenge in today's crowdsourcing market is the task allocation of crowdsourcing workflows. Task allocation aims to maximize the completion quality of the entire workflow and minimize its total cost. Trust can affect the quality of the produced results and the costs: selecting workers with high levels of trust could provide better solutions to the workflow but increase the budget. Crowdsourcing workflows need to balance these two conflicting objectives. In this paper, we propose an alternative greedy approach with four heuristic strategies to address the issue. In particular, the proposed approach monitors the current status of workflow execution and uses heuristic strategies to adjust the parameters of task allocation. We design a two-phase allocation model to accurately match tasks with workers. T-Aware allocates each task to the worker that maximizes the trust level while minimizing the cost. We conduct extensive experiments to quantitatively evaluate the proposed algorithms in terms of running time, task failure ratio, trust and cost, using a custom objective function on WorkflowSim, a well-known cloud simulation tool. Experimental results based on real-world workflows show that T-Aware outperforms other optimal solutions in finding the trade-off between trust and cost, performing 3 to 6% better than the best competitor algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
21. Optimal Text Document Clustering Enabled by Weighed Similarity Oriented Jaya With Grey Wolf Optimization Algorithm.
- Author
-
Venkanna, Gugulothu and Bharati, Dr K F
- Subjects
DOCUMENT clustering ,MATHEMATICAL optimization ,WOLVES ,ALGORITHMS ,INFORMATION retrieval ,CENTROID ,GENETIC algorithms ,PARTICLE swarm optimization - Abstract
Owing to scientific development, a variety of challenges are present in the field of information retrieval, caused by the increased usage of large volumes of data presented by large-scale distributed networks. Centralizing these data to carry out analysis is difficult, so there is a need for novel text document clustering algorithms that overcome the two most important challenges in clustering: accuracy and quality. To this end, this paper presents a clustering model for text documents using term frequency–inverse document frequency features. Particular attention is paid to initial centroid selection, so that the proposed process can automatically cluster text using a weighted similarity measure. The weighted similarity function involves the inter-cluster and intra-cluster similarity of both ordered and unordered documents, and is used to minimize weighted similarity among the documents. The clustering model is driven by a hybrid optimization algorithm combining the Jaya Algorithm (JA) and the Grey Wolf Optimizer (GWO), termed JA-based GWO. Finally, the performance of the proposed model is verified through a comparative analysis with state-of-the-art models. The performance analysis shows that the proposed model is 96.56% better than the genetic algorithm, 99.46% better than particle swarm optimization, 97.09% superior to the Dragonfly algorithm, and 96.21% better than JA for the similarity index, confirming its efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
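As a concrete reference for the feature sets the abstract mentions, here is a minimal sketch of computing term frequency–inverse document frequency vectors and a cosine similarity between them. These are the standard ingredients only, not the paper's weighted similarity function or the JA-based GWO optimizer; the smoothed IDF form and the toy corpus are illustrative assumptions.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF weight dictionaries for a small corpus (smoothed IDF)."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(term for toks in tokenized for term in set(toks))  # document frequency
    idf = {t: math.log((1 + n) / (1 + d)) + 1 for t, d in df.items()}
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: (c / len(toks)) * idf[t] for t, c in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse TF-IDF dictionaries."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["the cat sat on the mat", "the dog sat on the log", "stock markets fell today"]
vecs = tfidf_vectors(docs)
```

Documents sharing vocabulary score higher than unrelated ones, which is the signal any weighted similarity measure for clustering builds on.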
22. Specifying and Model Checking Distributed Control Algorithms at Meta-level.
- Author
-
Doan, Ha Thi Thu and Ogata, Kazuhiro
- Subjects
ALGORITHMS ,DISTRIBUTED algorithms ,LOGIC - Abstract
This paper proposes an approach to the specification and model checking of a large, important class of distributed algorithms called control algorithms (CAs), which are superimposed on underlying distributed systems (UDSs). The approach is based on rewriting logic, moving from its object level to the meta-level. We introduce the idea of specifying CAs as meta-programs that take the specifications of UDSs and automatically generate the specifications of the UDSs on which the CAs are superimposed (UDS-CAs). Due to the many options, such as network topologies, there are many instances of a UDS even when the number of each kind of entity, such as mobile support stations (MSSs) and mobile hosts (MHs) in a mobile checkpointing algorithm, is fixed. To address this problem, we generate all possible initial states of a UDS for a fixed number of each kind of entity, such that constraints such as MSSs being strongly connected by a wired network are fulfilled, and conduct model checking for each initial state. We demonstrate the usefulness of the approach with a case study in which a counterexample is found for some specific initial states but not for others, detecting a subtle flaw lurking in a mobile checkpointing algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. Watson–Crick Context-Free Grammars: Grammar Simplifications and a Parsing Algorithm.
- Author
-
Zulkufli, Nurul Liyana Mohamad, Turaev, Sherzod, Tamrin, Mohd Izzuddin Mohd, and Messikh, Azeddine
- Subjects
GRAMMAR ,FORMAL languages ,ALGORITHMS ,MOLECULAR computers ,NORMAL forms (Mathematics) - Abstract
A Watson–Crick (WK) context-free grammar, a context-free grammar with productions whose right-hand sides contain nonterminals and double-stranded terminal strings, generates complete double-stranded strings under Watson–Crick complementarity. In this paper, we investigate simplification processes for Watson–Crick context-free grammars, which lead to the definition of a Chomsky-like normal form for these grammars. The main result of the paper is a modified CYK (Cocke–Younger–Kasami) algorithm for Watson–Crick context-free grammars in WK-Chomsky normal form, which parses double-stranded strings in O(n^6) time. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
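The classical CYK algorithm that the paper's WK variant generalizes works on a context-free grammar in Chomsky normal form and fills a triangular table of nonterminals over all substrings in O(n^3) time (the O(n^6) bound quoted above is for the double-stranded generalization). A minimal sketch of the single-stranded recognizer, using a toy grammar for the language {aⁿbⁿ : n ≥ 1}:

```python
def cyk_parse(tokens, grammar, start="S"):
    """Classical CYK recognizer for a grammar in Chomsky normal form.

    grammar: dict mapping LHS -> list of RHS tuples, each (terminal,) or (B, C).
    """
    n = len(tokens)
    # table[i][j] = set of nonterminals deriving tokens[i : i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):                  # length-1 spans from unit rules
        for lhs, rules in grammar.items():
            if (tok,) in rules:
                table[i][0].add(lhs)
    for span in range(2, n + 1):                      # span length
        for i in range(n - span + 1):                 # span start
            for k in range(1, span):                  # split point
                for lhs, rules in grammar.items():
                    for rhs in rules:
                        if (len(rhs) == 2 and rhs[0] in table[i][k - 1]
                                and rhs[1] in table[i + k][span - k - 1]):
                            table[i][span - 1].add(lhs)
    return start in table[0][n - 1]

# toy CNF grammar for {a^n b^n : n >= 1}:  S -> AT | AB,  T -> SB,  A -> a,  B -> b
g = {"S": [("A", "T"), ("A", "B")], "T": [("S", "B")], "A": [("a",)], "B": [("b",)]}
```

The WK version carries pairs of strands through the same table structure, which is where the extra polynomial factors in the running time come from.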
24. Difference-Comparison-based Malicious Meter Inspection in Neighborhood Area Networks in Smart Grid.
- Author
-
XIAOFANG XIA, WEI LIANG, YANG XIAO, and MENG ZHENG
- Subjects
CYBERTERRORISM ,SMART power grids ,ELECTRICITY ,ELECTRIC power distribution grids ,ALGORITHMS - Abstract
As smart meters are vulnerable to physical attacks as well as cyber attacks, electricity theft in smart grids is much easier to commit and more difficult to detect than in traditional power grids. In this paper, to facilitate the inspection of malicious meters, a full and complete binary inspection tree whose leaves stand for smart meters is employed as a logical structure, and an inspector (a meter for detection) can be logically configured at any node of the tree. By calculating the difference between the inspector's reading and the sum of the readings reported by the smart meters in one node's subtree, as well as the difference between the total amount of stolen electricity in the subtrees of an internal node and its left child, we propose a difference-comparison-based inspection algorithm that allows the inspector to skip a large number of nodes on the tree and hence remarkably accelerates the detection of malicious meters. Furthermore, to quickly identify a complete set of malicious meters, we propose an adaptive reporting mechanism that adopts much shorter reporting periods during the inspection process. Analysis with proofs of the performance bounds of the proposed algorithm, in terms of the number of inspection steps, is provided. Simulations not only validate the theoretical analysis, but also show the superiority of the proposed algorithm over existing works in terms of inspection steps, regardless of the ratio and the permutation of malicious meters. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
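The tree-based idea in the abstract, comparing an inspector's measured total against the sum the meters report and descending only into disagreeing subtrees, can be illustrated with a toy binary recursion. This is a simplified sketch of the principle, not the paper's algorithm: it assumes thefts only lower reported readings (so an honest-looking segment total never hides a thief through cancellation), and the data below are invented.

```python
def find_malicious(reported, actual):
    """Identify meters whose reported readings differ from the true consumption.

    Recursively compare the reported sum of a segment of meters with the
    inspector's measured sum, descending only into segments whose totals
    disagree; honest segments are skipped in one comparison.
    """
    def inspect(lo, hi):
        if sum(reported[lo:hi]) == sum(actual[lo:hi]):   # honest segment: skip it
            return []
        if hi - lo == 1:                                  # single mismatching meter
            return [lo]
        mid = (lo + hi) // 2
        return inspect(lo, mid) + inspect(mid, hi)        # recurse into both halves
    return inspect(0, len(reported))

actual   = [5, 3, 8, 2, 7, 4, 6, 1]     # true consumption seen by the inspector
reported = [5, 3, 2, 2, 7, 4, 6, 0]     # meters 2 and 7 under-report
```

With few malicious meters, whole honest subtrees are eliminated in a single comparison, which is the source of the speed-up the paper quantifies.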
25. Improved Meet-in-the-Middle Attacks on Reduced-Round Tweakable Block Cipher Deoxys-BC.
- Author
-
Li, Manman and Chen, Shaozhen
- Subjects
BLOCK ciphers ,ALGORITHMS - Abstract
Deoxys-BC is the internal tweakable block cipher of the authenticated encryption algorithm Deoxys, a third-round finalist in the CAESAR competition. In this paper, we study properties of Deoxys-BC such as subtweakey difference cancellation and the freedom of the tweak. Combining the differential enumeration technique with these properties, we achieve key-recovery attacks on Deoxys-BC under the meet-in-the-middle attack. As a result, we obtain an attack on 9-round Deoxys-BC-128-128 by constructing a 6-round meet-in-the-middle distinguisher with 2^113 plaintext–tweak combinations, 2^97 Deoxys-BC blocks and 2^121.6 9-round Deoxys-BC-128-128 encryptions. We also present the first attack on 11-round Deoxys-BC-256-128, by constructing a 7-round meet-in-the-middle distinguisher with 2^113 plaintext–tweak combinations, 2^226 Deoxys-BC blocks and 2^251 11-round Deoxys-BC-256-128 encryptions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. New Code-Based Blind Signature Scheme.
- Author
-
Chen, Siyuan, Zeng, Peng, and Choo, Kim-Kwang Raymond
- Subjects
QUANTUM computers ,QUANTUM computing ,DIGITAL signatures ,CODING theory ,ALGORITHMS - Abstract
Blind signature is an important cryptographic primitive with widespread applications in secure e-commerce, for example to guarantee participants' anonymity. Existing blind signature schemes are mostly based on number-theoretic hard problems, which have been shown to be solvable with quantum computers. In 2017, the National Institute of Standards and Technology (NIST) began to specify a new standard for digital signatures by selecting one or more additional signature algorithms designed to be secure against attacks carried out using quantum computers. However, none of the third-round candidate algorithms is code-based, despite the potential of code-based signature algorithms to resist quantum computing attacks. In this paper, we construct a new code-based blind signature (CBBS) scheme as an alternative to traditional number-theoretic schemes. Specifically, we first extend Santoso and Yamaguchi's three-pass identification scheme to a concatenated version (abbreviated as the CSY scheme), and then construct our CBBS scheme from the CSY scheme. The security of our CBBS scheme relies on the hardness of the syndrome decoding problem in coding theory, which has been shown to be NP-complete and secure against quantum attacks. Unlike Blazy et al.'s CBBS scheme, which is based on a zero-knowledge protocol with cheating probability 2/3, our CBBS scheme is based on a zero-knowledge protocol with cheating probability 1/2. The lower cheating probability reduces the number of interaction rounds at the same security level and thus leads to higher efficiency. For example, to achieve security level 2^-82, the signature size in our CBBS scheme is 1.63 MB, compared to 3.1 MB in Blazy et al.'s scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
27. Byte2vec: Malware Representation and Feature Selection for Android.
- Author
-
Yousefi-Azar, Mahmood, Hamey, Len, Varadharajan, Vijay, and Chen, Shiping
- Subjects
FEATURE selection ,MALWARE ,MALWARE prevention ,INTEREST rates ,ALGORITHMS - Abstract
Malware detection based on static features and without code disassembling is a challenging path of research. Obfuscation makes the static analysis of malware even more challenging. This paper extends static malware detection beyond byte-level n-grams and detecting important strings. We propose a model (Byte2vec) with the capabilities of both binary file feature representation and feature selection for malware detection. Byte2vec embeds the semantic similarity of byte-level codes into a feature vector (byte vector) and also into a context vector. The learned feature vectors of Byte2vec, using skip-gram with negative-sampling topology, are combined with byte-level term frequency (tf) for malware detection. We also show that the distance between a feature vector and its corresponding context vector provides a useful measure to rank features. The top-ranked features are successfully used for malware detection. We show that this feature selection algorithm is an unsupervised version of mutual information (MI). We test the proposed scheme on four freely available Android malware datasets, including one obfuscated malware dataset. The model is trained only on clean APKs. The results show that the model outperforms MI in a low-dimensional feature space and is competitive with MI and other state-of-the-art models in higher dimensions. In particular, our tests show very promising results on a wide range of obfuscated malware, with a false negative rate of only 0.3% and a false positive rate of 2.0%. The detection results on obfuscated malware show the advantage of the unsupervised feature selection algorithm compared with the MI-based method. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
28. Algorithms Based on Path Contraction Carrying Weights for Enumerating Subtrees of Tricyclic Graphs.
- Author
-
Yang, Yu, Chen, Beifang, Zhang, Guoping, Li, Yongming, Sun, Daoqiang, and Liu, Hongbo
- Subjects
LIFTING & carrying (Human mechanics) ,INDEX numbers (Economics) ,GENERATING functions ,ALGORITHMS ,TREE graphs - Abstract
The subtree number index of a graph, defined as the number of its subtrees, has attracted much attention recently. Finding a proper algorithm to compute this index is an important but difficult problem for a general graph. Even for unicyclic and bicyclic graphs it is not completely trivial, though it can be worked out by trial and error; for tricyclic graphs it is complicated. This paper proposes path contraction carrying weights (PCCWs) algorithms to compute the subtree number index for the nontrivial case of bicyclic graphs and all 15 cases of tricyclic graphs, based on three techniques: PCCWs, generating functions and structural decomposition. Our approach provides a foundation and useful methods for computing the subtree number index of graphs with more complicated cycle structures, and can be applied to investigate novel structural properties of important nanomaterials such as the pentagonal carbon nanocone. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
29. Energy-Efficient Step-Counting Algorithm for Smartphones.
- Author
-
Yang, Runze, Song, Jian, Huang, Baoqi, Li, Wuyungerile, and Qi, Guodong
- Subjects
ALGORITHMS ,ENERGY consumption ,INDOOR positioning systems - Abstract
Step counting is not only the key component of pedometers (a fundamental service on smartphones), but is also closely related to a range of applications, including motion monitoring, behavior recognition, indoor positioning and navigation. Due to the limited battery capacity of current smartphones, it is of great value to reduce the energy consumption of such a popular service. This paper therefore focuses on the energy efficiency of step-counting algorithms. First, we formulate a theoretical error model based on the well-known auto-correlation coefficient step-counting (ACSC) algorithm, so as to analyze the factors affecting step-counting accuracy. Then, in light of this model and an adaptive sampling strategy, we propose a novel energy-efficient step-counting algorithm that adaptively substitutes simple mean absolute deviation for the computationally intensive auto-correlation. On these grounds, an Android pedometer is implemented. Two experiments are carried out, verifying both the theoretical error model and the proposed algorithm. Our algorithm is shown to outperform two well-known counterparts, the original ACSC algorithm and the peak detection step-counting algorithm, in terms of both accuracy and energy efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
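To make the substitution concrete: mean absolute deviation (MAD) is far cheaper to compute than an auto-correlation and can both gate out idle periods and set an adaptive peak threshold. Below is a toy sketch of this idea on a synthetic acceleration-magnitude signal; the thresholds, the gating constant and the signal itself are illustrative assumptions, not the paper's algorithm.

```python
import math

def count_steps(acc, threshold_scale=1.0):
    """Sketch of lightweight step counting using mean absolute deviation (MAD).

    MAD gates out idle periods (too little variation to be walking) and sets
    an adaptive threshold; steps are counted as upward threshold crossings.
    """
    mean = sum(acc) / len(acc)
    mad = sum(abs(a - mean) for a in acc) / len(acc)
    if mad < 0.1:                    # device idle: assumed gating constant
        return 0
    threshold = mean + threshold_scale * mad
    steps, above = 0, False
    for a in acc:
        if a > threshold and not above:
            steps += 1               # rising edge of a peak = one step
        above = a > threshold
    return steps

# synthetic signal: 8 walking cycles around gravity (~9.8 m/s^2), 25 samples/cycle
signal = [9.8 + 2.0 * math.sin(2 * math.pi * i / 25) for i in range(200)]
```

Each walking cycle crosses the adaptive threshold exactly once on the way up, so the rising-edge count equals the number of steps in the synthetic signal.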
30. Genetically Based Combination of Visual Saliency and Roughness for FR 3D Mesh Quality Assessment: A Statistical Study.
- Author
-
Nouri, Anass, Charrier, Christophe, and Lézoray, Olivier
- Subjects
STATISTICAL correlation ,TORSION theory (Algebra) ,ALGORITHMS - Abstract
In this paper, we present a full-reference quality assessment metric based on visual saliency information. The saliency information is provided as degrees associated with each vertex of the surface mesh. From these degrees, statistical attributes reflecting the structures of the reference and distorted meshes are computed. These are used by four genetically optimized comparison functions that quantify the structural differences between a reference and a distorted mesh. We also present a statistical comparison of six full-reference quality assessment metrics for 3D meshes. We compare the objective metrics' results with human subjective quality scores, considering the 3D meshes on one hand and the distortion types on the other, and show which metrics are statistically superior to their counterparts. For these comparisons we use the Spearman rank-order correlation coefficient and Student's t-test. To attest to the pertinence of the proposed approach, a comparison with a ground-truth saliency and an application to assessing the visual rendering of smoothing algorithms are presented. Experimental results show that the proposed metric is very competitive with the state-of-the-art. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
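The Spearman rank-order correlation coefficient used in the statistical study measures how well an objective metric's ranking of meshes agrees with human rankings. A minimal sketch, assuming no tied scores; the metric outputs and mean opinion scores below are invented for illustration.

```python
def spearman_rho(x, y):
    """Spearman rank-order correlation via the rank-difference formula.

    Assumes no ties: rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference between the ranks of x_i and y_i.
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1          # ranks 1..n, smallest value gets rank 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

metric_scores  = [0.91, 0.45, 0.78, 0.12, 0.66]   # hypothetical metric outputs
subjective_mos = [4.5, 2.1, 3.9, 1.0, 3.2]        # hypothetical human scores
```

A value of 1 means the objective metric orders the meshes exactly as humans do, and -1 means the ordering is exactly reversed.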
31. SIFT-Based Visual Tracking using Optical Flow and Belief Propagation Algorithm.
- Author
-
Biswas, Biswajit, Ghosh, Swarup Kr, Hore, Moumita, and Ghosh, Anupam
- Subjects
OPTICAL flow ,TRACKING algorithms ,ALGORITHMS ,AUTONOMOUS robots - Abstract
Perceptible visual tracking acts as an important module for distinct perception tasks of autonomous robots, and better features make the decision-making process easier. Evaluating tracked objects, their dynamic positions and their visual information is a difficult task, and most real-time visual tracking algorithms to date suffer from poor robustness and low accuracy when dealing with complex real-world data. In this paper, we propose a more robust and faster visual tracking framework using the scale-invariant feature transform (SIFT) and optical flow within a belief propagation (BP) algorithm for efficient processing in real scenarios. A new feature-based optical flow, together with the BP algorithm, is used to compute the affine matrix of a regional center on SIFT keypoints across frames. Experimental results show that the proposed approach is more efficient and more robust than state-of-the-art tracking algorithms in complex scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
32. Energy-Aware Routing in Software-Defined Network using Compression.
- Author
-
Giroire, Frédéric, Huin, Nicolas, Moulierac, Joanna, and Phan, Truong Khoa
- Subjects
ENERGY consumption ,CLOUD computing ,WIRELESS communications ,INTERNET of things ,COMPUTER software - Abstract
Software-Defined Networking (SDN) is a new networking paradigm enabling innovation through network programmability. Over the past few years, many applications have been built using SDN, such as server load balancing, virtual-machine migration, traffic engineering and access control. In this paper, we focus on using SDN for energy-aware routing (EAR). Since traffic load has a small influence on the power consumption of routers, EAR allows putting unused links into sleep mode to save energy. SDN can collect the traffic matrix and then compute routing solutions that satisfy QoS while minimizing energy consumption. However, prior works on EAR have assumed that an SDN switch's forwarding table can hold an infinite number of rules. In practice, this assumption does not hold, since such flow tables are implemented in Ternary Content Addressable Memory (TCAM), which is expensive and power hungry. We consider the use of wildcard rules to compress the forwarding tables. In this paper, we propose optimization methods to minimize energy consumption for a backbone network while respecting capacity constraints on links and rule-space constraints on routers. In detail, we present two exact formulations using Integer Linear Programming (ILP) and introduce efficient heuristic algorithms. Based on simulations on realistic network topologies, we show that with this smart rule-space allocation, it is possible to save almost as much power as the classical EAR approach. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
33. From Consensus to Atomic Broadcast: Time-Free Byzantine-Resistant Protocols without Signatures.
- Author
-
Correia, Miguel, Neves, Nuno Ferreira, and Veríssimo, Paulo
- Subjects
COMPUTER network protocols ,DATA encryption ,COMPUTER network security ,CATEGORIES (Mathematics) ,ALGORITHMS ,MATHEMATICAL variables - Abstract
This paper proposes a stack of three Byzantine-resistant protocols aimed at practical distributed systems: multi-valued consensus, vector consensus and atomic broadcast. These protocols are designed as successive transformations from one to another. The first protocol, multi-valued consensus, is implemented on top of a randomized binary consensus and a reliable broadcast protocol. The protocols share a set of important structural properties. First, they do not use digital signatures constructed with public-key cryptography, a well-known performance bottleneck in this kind of protocol. Second, they are time-free, i.e. they make no synchrony assumptions, since these assumptions are often vulnerable to subtle but effective attacks. Third, they are completely decentralized, thus avoiding the cost of detecting corrupt leaders. Fourth, they have optimal resilience, i.e. they tolerate the failure of f = ⌊(n-1)/3⌋ out of a total of n processes. In terms of time complexity, the multi-valued consensus protocol terminates in a constant expected number of rounds, while the vector consensus and atomic broadcast protocols have O(f) complexity. The paper also proves the equivalence between multi-valued consensus and atomic broadcast in the Byzantine failure model without signatures. A similar proof is given for the equivalence between multi-valued consensus and vector consensus. These two results have theoretical relevance since they show once more that consensus is a fundamental problem in distributed systems. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
34. Improving Human Action Recognition Using Hierarchical Features And Multiple Classifier Ensembles.
- Author
-
Bulbul, Mohammad Farhad, Islam, Saiful, Zhou, Yatong, and Ali, Hazrat
- Subjects
HUMAN behavior ,SUPPORT vector machines ,ALGORITHMS ,MACHINE learning ,STATISTICAL measurement - Abstract
This paper presents a simple, fast and efficacious system for classifying human actions from depth action sequences. First, motion history images (MHIs) and static history images (SHIs) are created from the front (XOY), side (YOZ) and top (XOZ) projections of each depth sequence in 3D Euclidean space using the 3D Motion Trail Model (3DMTM). The Local Binary Patterns (LBP) algorithm is then applied to the MHIs and SHIs to learn motion and static hierarchical features representing the action sequence. The motion and static hierarchical feature vectors are fed into a classifier ensemble comprising two classifiers, each a Kernel-based Extreme Learning Machine (KELM), an l2-regularized Collaborative Representation Classifier (l2-CRC) or a multi-class Support Vector Machine. To assess the framework extensively, we perform experiments on standard datasets including MSR-Action3D, UTD-MHAD and DHA. Experimental results demonstrate that the proposed approach achieves state-of-the-art recognition performance in comparison with other available approaches. Several statistical measurements on the recognition results also indicate that the method achieves superiority when the hierarchical features are adopted with the KELM ensemble. In addition, to ensure the real-time processing capability of the algorithm, the running time of its major components is investigated. Based on the machine dependency of the running time, the computational complexity of the system is also shown and compared with other methods. The experimental results and the evaluation of computational time and complexity reflect the real-time compatibility and feasibility of the proposed system. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
35. Optimal Feature Selection and Hybrid Classification for Autism Detection in Young Children.
- Author
-
Guruvammal, S, Chellatamilan, T, and Deborah, L Jegatha
- Subjects
FEATURE selection ,AUTISM spectrum disorders ,AUTISM ,ALGORITHMS ,CLASSIFICATION - Abstract
Interest in the early detection of autism spectrum disorder in infants and toddlers has grown with increasing awareness of early intervention. This paper therefore introduces a new autism detection technique for young children comprising three major phases: weighted logarithmic transformation, optimal feature selection and classification. Initially, a weighted transformation of the input data is carried out that correctly distinguishes the inter-class labels, yielding the specified features. Because of the large number of features, prediction becomes a serious issue, and therefore optimal feature selection is important. Here, for the optimal feature selection process, a new Lévy Flight Cub Update-based Lion Algorithm (LFCU-LA), a modification of the lion algorithm, is introduced. Once the optimal features are selected, they are passed to a classification process that exploits a hybrid classifier combining a deep belief network (DBN) and a neural network (NN). Additionally, the hidden neurons of the DBN and NN are optimally selected, paving the way for exact detection. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
36. LUISA: Decoupling the Frequency Model From the Context Model in Prediction-Based Compression.
- Author
-
Fulber-Garcia, Vinicius and Mergen, Sérgio Luis Sardi
- Subjects
DATA compression ,NATURAL languages ,METADATA ,ALGORITHMS ,ENTROPY (Information theory) - Abstract
Prediction-based compression methods, like prediction by partial matching, achieve a remarkable compression ratio, especially for texts written in natural language. However, they are not efficient in terms of speed. Part of the problem concerns the use of dynamic entropy encoding, which is considerably slower than the static alternatives. In this paper, we propose a prediction-based compression method that decouples the context model from the frequency model. The separation allows static entropy encoding to be used without significant overhead in the metadata embedded in the compressed data. The result is a reasonably efficient algorithm that is particularly suited to small textual files, as the experiments show. We also show it is relatively easy to build strategies designed to handle specific cases, like the compression of files whose symbols are only locally frequent. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
37. A New Intrusion Detection System Using the Improved Dendritic Cell Algorithm.
- Author
-
Farzadnia, Ehsan, Shirazi, Hossein, and Nowroozi, Alireza
- Subjects
EVOLUTIONARY algorithms ,ALGORITHMS ,DENDRITIC cells - Abstract
The dendritic cell algorithm (DCA), one of the emerging evolutionary algorithms, is based on the behavior of specific immune agents known as dendritic cells (DCs), and has several potentially beneficial features for binary classification problems. In this paper, we provide a new version of this immune-inspired mechanism that acts as a semi-supervised classifier and can serve as a defensive shield in network intrusion detection. Until now, no strategy has been adopted for the Get_Antigen() function in the detection phase; random sampling causes the DCA to produce undesirable results over several cycles each time, leading to uncertainty, whereas sampling should follow the biological behavior of DCs in peripheral tissues. We propose a novel strategy that acts exactly on the immunological functionalities of dendritic cells, focusing on two items: first, obviating the need for a pre-ordered antigen set when computing the danger signal, and second, providing a novel immune-inspired idea for non-random data sampling. A variable functional migration threshold is also computed cycle by cycle, which shows the necessity of migration threshold flexibility. A criterion called capability of intrusion detection (CID) is used for the tests, all of which are performed on a new benchmark dataset named UNSW-NB15. Experimental results demonstrate that the present scheme, as the best version among improved DC algorithms, achieves 76.69% CID with 90% accuracy and outperforms counterpart methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
38. Precise Point Set Registration Based on Feature Fusion.
- Author
-
Liu, Yuying, Du, Shaoyi, Cui, Wenting, Wang, Xijing, Mou, Qingnan, Zhao, Jiamin, Guo, Yucheng, and Zhang, Yong
- Subjects
RECORDING & registration ,FEATURE extraction ,ALGORITHMS ,PRINCIPAL components analysis ,POINT set theory - Abstract
This paper proposes a novel precise point set registration method based on feature fusion for three-dimensional data. First, for prominent foregrounds with dense and continuous cluster structure, we propose an automatic extraction method combining principal component analysis projection and density-based clustering. Second, for point sets containing noise, we introduce a correntropy measure into registration to weaken its influence. Third, for precise registration of unevenly distributed points within the same point set, we propose a distribution-specific, feature-fusion-based algorithm that uses point-to-point measurement for the densely distributed foreground and point-to-plane measurement for the sparsely distributed background; if only one measurement is used for the whole point set, the registration gets trapped in a local extremum. Finally, we give the optimization algorithm of the proposed method. We conduct experiments on real orthodontic scenes to verify the effectiveness of the proposed feature extraction method and registration algorithm, and experimental results demonstrate that both proposed solutions are better suited to their respective tasks than other existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
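As background to the point-to-point measurement mentioned above: given known correspondences, the rigid transform minimizing the point-to-point error has a closed-form SVD (Kabsch) solution, which is the core step of classical ICP-style registration. This is a self-contained sketch of that standard step, not the paper's feature-fusion method; the synthetic points and ground-truth rotation are illustrative.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Point-to-point alignment step (Kabsch/SVD): find rotation R and
    translation t minimizing sum ||R @ p + t - q||^2 over corresponding points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # fix a reflection if one appears
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(42)
P = rng.standard_normal((30, 3))               # source point set
theta = 0.4                                    # known ground-truth rotation about z
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])  # target: rotate then translate
R, t = best_rigid_transform(P, Q)
```

With exact correspondences and no noise, the known transform is recovered to numerical precision; ICP iterates this step with re-estimated correspondences.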
39. One-to-Many Node-Disjoint Paths Routing in Dense Gaussian Networks.
- Author
-
Alsaleh, Omar, Bose, Bella, and Hamdaoui, Bechir
- Subjects
GAUSSIAN processes ,DISTRIBUTION (Probability theory) ,STOCHASTIC processes ,ALGORITHMS ,MATHEMATICAL programming - Abstract
In this paper, an efficient constant time complexity algorithm that constructs node disjoint paths (NDP) from a single source node to the maximum number of destination nodes in dense Gaussian networks is given. Then, it is proved that this algorithm always returns a solution. Also, the lower and upper bounds of the sum of the NDP lengths are derived. Finally, via execution of the algorithm, it is shown that on the average the sum of lengths of NDP given by the algorithm is only ∼10% more than the sum of the shortest paths lengths. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
40. An Efficient Skip-Search Approach to Swap Matching.
- Author
-
Faro, Simone and Pavone, Arianna
- Subjects
QUEUEING networks ,ELECTRONIC commerce ,INTERNET ,ALGORITHMS ,GENERALIZATION - Abstract
The swap matching problem consists in finding all occurrences of a pattern x of length m in a text y of length n, allowing for disjoint local swaps of characters in the pattern. In 2003, Amir et al. solved the problem in O(n log m log σ) worst-case time, where σ is the size of the alphabet. In recent years, much research has focused on practical solutions, and efficient algorithms have been devised by means of the bit-parallel simulation of non-deterministic automata. In this paper, we present a new efficient algorithm for the swap matching problem based on character comparison and structured as a generalization of the Skip-Search algorithm for exact string matching. Although our solution has a quadratic worst-case time complexity, it shows sub-linear behaviour on average. According to experimental results, our algorithm obtains the best running times in most practical cases when compared against the most effective solutions, with a gain in speed of up to 48%. This makes the new algorithm one of the most efficient solutions in practical cases. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
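A naive quadratic checker (not the Skip-Search variant the paper proposes) can illustrate what counts as an occurrence under disjoint adjacent swaps of pattern characters; function names are illustrative:

```python
def matches_with_swaps(x, y, i):
    """Check whether pattern x occurs at position i of text y,
    allowing disjoint swaps of adjacent pattern characters."""
    m, j = len(x), 0
    while j < m:
        if x[j] == y[i + j]:
            j += 1
        elif j + 1 < m and x[j + 1] == y[i + j] and x[j] == y[i + j + 1]:
            j += 2          # consume one disjoint adjacent swap
        else:
            return False
    return True

def swap_search(x, y):
    """All positions of y where x occurs up to disjoint adjacent swaps."""
    m, n = len(x), len(y)
    return [i for i in range(n - m + 1) if matches_with_swaps(x, y, i)]
```

Preferring the direct match is safe here: if both a direct match and a swap were possible at position j, the swapped pair would consist of equal characters, so the two choices coincide.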
41. Hierarchical Approach to Detect Fractures in CT DICOM Images.
- Author
-
LINDA, C. HARRIET and JIJI, G. WISELIN
- Subjects
DIAGNOSIS of bone fractures ,PHYSICIANS ,COMPUTED tomography ,RADIOLOGISTS ,ALGORITHMS - Abstract
This paper deals with the identification of fractures in CT scan images. A large amount of significant and critical information is normally stored in medical data. Highly efficient and automated computational methods are needed to process and analyze all available data in order to help the physician in diagnosis, decision making and treatment planning. Each CT scan includes a large number of slices. In this paper, a new hierarchical segmentation algorithm is applied to all slices; it automatically extracts the bone structures using the HMRF_EM segmentation method. The template-matching technique is employed to extract the affected portion. This new approach is evaluated on data from eight patients and validated by radiologists. The performance of the work is analyzed and compared with recent works using sensitivity, specificity and accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
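As a toy illustration of the template-matching step (the HMRF_EM segmentation stage is not reproduced here), a brute-force sum-of-squared-differences matcher over a small grayscale image can be sketched as follows; all names are hypothetical:

```python
def match_template(image, tmpl):
    """Slide tmpl over image and return the top-left position with the
    smallest sum of squared differences (SSD), plus that SSD value."""
    H, W = len(image), len(image[0])
    h, w = len(tmpl), len(tmpl[0])
    best, best_pos = float('inf'), None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ssd = sum((image[r + i][c + j] - tmpl[i][j]) ** 2
                      for i in range(h) for j in range(w))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos, best
```

An SSD of zero means an exact match; real CT pipelines would use normalized correlation and preprocessing rather than this raw form.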
42. A True-Concurrency Encoding for BMC of Compositional Systems.
- Author
-
Yin, Liangze, Dong, Wei, He, Fei, and Wang, Ji
- Subjects
INVARIANTS (Mathematics) ,ALGORITHMS ,DATA mining ,NUMERICAL analysis ,MATHEMATICAL analysis - Abstract
This paper studies Bounded Model Checking (BMC) of invariant properties on compositional systems. To alleviate the path explosion problem resulting from interleaving, an ideal approach is to let the system execute in true concurrency. However, since it is difficult for the true-concurrency execution manner to obtain all reachable global states, this technique has rarely been employed to verify properties that require checking all reachable global states, such as invariant properties. Verification of such properties still adheres to the interleaving semantics. Observing that even properties such as invariants can be verified by checking only a fraction of the global states, this paper presents a true-concurrency encoding for invariant property verification of compositional systems. The crucial innovation is a macro-step technique, which executes a sequence of consecutive transitions in true concurrency. With this technique, we are able to (1) significantly reduce the exponential number of paths due to interleaving and (2) greatly cut down the number of SAT calls required for BMC to verify the property. Experimental results on real problems show speed-ups from 4.8 to 2957 times over the standard verification method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
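The path-explosion arithmetic behind the macro-step idea can be made concrete: two independent processes with a and b atomic steps admit C(a+b, a) interleavings, and grouping k consecutive local steps into a single macro-step shrinks that count. The sketch below illustrates only this counting argument, not the paper's SAT encoding:

```python
from math import comb, ceil

def interleavings(a, b):
    """Number of interleavings of two independent processes
    with a and b atomic steps respectively."""
    return comb(a + b, a)

def macro_interleavings(a, b, k):
    """Same count after grouping k consecutive local steps
    into one macro-step per process."""
    return comb(ceil(a / k) + ceil(b / k), ceil(a / k))
```

For two processes of 8 steps each, plain interleaving yields 12870 orderings, while macro-steps of size 4 leave only 6.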
43. A proposed framework for cloud-aware multimodal multimedia big data analysis toward optimal resource allocation.
- Author
-
Sasikala, S, Gomathi, S, Geetha, V, and Murali, L
- Subjects
RESOURCE allocation ,BIG data ,DATA analysis ,ALGORITHMS ,STRUCTURAL design ,MULTIMODAL user interfaces - Abstract
The main goal of this paper is to present the structural design of Multimodal Multimedia Services in Cloud Platform (MMSCP). The proposed MMSCP architecture is built of three levels: a service platform, an execution platform and a structural platform. The service platform gathers the different forms of video files generated by media creators and stores these files on the local platform. The execution platform integrates both Hadoop and MapReduce functionality. Finally, QoS-based cloud computing functionality (i.e. load balancing, security, resource allocation and network traffic management) is employed at the structural platform. We also introduce the Crow Search Algorithm (CSA) in the structural platform for optimal allocation of resources. We adopt a Hadoop cluster to perform the experiment. To benchmark the resource allocation, we compare conventional optimization algorithms such as ABC, GA and PSO against our proposed CSA imposed on the structural platform. To evaluate the performance of the algorithms, we configured the CloudAnalyst tool. The simulation results illustrate that the proposed algorithm can allocate virtual machines (VMs) optimally to attain a minimal response time. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
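A minimal sketch of the Crow Search Algorithm as commonly described in the optimization literature, minimizing a toy objective; the sphere function, parameter defaults, and all names here are illustrative rather than the paper's VM-allocation setup:

```python
import random

def crow_search(f, dim, bounds, n_crows=20, iters=200, ap=0.1, fl=2.0, seed=0):
    """Crow Search Algorithm: each crow follows a random crow's memorized
    position, or relocates randomly with awareness probability ap."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_crows)]
    mem = [p[:] for p in pos]                   # memorized best positions
    fit = [f(p) for p in mem]
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.randrange(n_crows)          # crow i picks crow j to follow
            if rng.random() >= ap:
                cand = [pos[i][d] + rng.random() * fl * (mem[j][d] - pos[i][d])
                        for d in range(dim)]
            else:                               # crow j is "aware": random move
                cand = [rng.uniform(lo, hi) for _ in range(dim)]
            if all(lo <= c <= hi for c in cand):    # keep only feasible moves
                pos[i] = cand
                fc = f(cand)
                if fc < fit[i]:                 # update memory on improvement
                    mem[i], fit[i] = cand[:], fc
    best = min(range(n_crows), key=fit.__getitem__)
    return mem[best], fit[best]
```

In the paper the objective would encode VM response time rather than a mathematical test function.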
44. CMT: An Efficient Algorithm for Scalable Packet Classification.
- Author
-
Chen, Shuhui, Zhong, Jincheng, Huang, Teng, Wei, Ziling, and Zhao, Shuang
- Subjects
ALGORITHMS ,SOFTWARE-defined networking ,MAGNITUDE (Mathematics) ,CLASSIFICATION ,QUALITY of service - Abstract
Packet classification plays an essential role in diverse network functions such as quality of service, firewall filtering and load balancing. However, implementing an efficient packet classifier is a challenging problem. The problem gets even worse in the era of software-defined networking, in which frequent rule updates are performed and complex flow tables are used. This paper proposes CMT, a new software algorithm named after its novel data structure, the common mask tree, to implement an efficient multi-field packet classifier. The core idea of CMT is to combine the strengths of both decision-tree and tuple-space schemes by employing tree-like structures and hash tables simultaneously. The objective of CMT is to achieve both high classification performance and fast rule updates. In the evaluation section, CMT is compared with decision-tree and tuple-space schemes. Compared to the state-of-the-art decision-tree methods, CMT performs rule updates two orders of magnitude faster. CMT has stable performance on different rulesets and achieves a 40% improvement in memory accesses compared to the state-of-the-art tuple-space method. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
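For context, a minimal tuple-space classifier, one of the two scheme families CMT combines, can be sketched as follows; the two-field rule format and class name are hypothetical, and CMT's common mask tree is not reproduced here:

```python
class TupleSpace:
    """Minimal tuple-space packet classifier: rules are grouped by their
    (src-prefix-len, dst-prefix-len) tuple; a lookup probes every tuple's
    hash table with the packet's masked addresses."""
    def __init__(self):
        self.tables = {}   # (slen, dlen) -> {(sprefix, dprefix): (prio, action)}

    def add_rule(self, src, slen, dst, dlen, prio, action):
        key = (src >> (32 - slen) if slen else 0,
               dst >> (32 - dlen) if dlen else 0)
        self.tables.setdefault((slen, dlen), {})[key] = (prio, action)

    def classify(self, src, dst):
        best = None
        for (slen, dlen), table in self.tables.items():
            key = (src >> (32 - slen) if slen else 0,
                   dst >> (32 - dlen) if dlen else 0)
            hit = table.get(key)
            if hit and (best is None or hit[0] < best[0]):
                best = hit          # lower number = higher priority
        return best[1] if best else 'default'
```

Updates here are single hash-table operations, which is the property CMT aims to retain while improving lookup cost.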
45. Covering Array Constructors: An Experimental Analysis of Their Interaction Coverage and Fault Detection.
- Author
-
Huang, Rubing, Chen, Haibo, Zhou, Yunan, Chen, Tsong Yueh, Towey, Dave, Lau, Man Fai, Ng, Sebastian, Merkel, Robert, and Chen, Jinfu
- Subjects
ALGORITHMS - Abstract
Combinatorial interaction testing (CIT) aims at constructing a covering array (CA) of all value combinations at a specific interaction strength, to detect faults that are caused by the interaction of parameters. CIT has been widely used in different applications, with many algorithms and tools having been proposed to support CA construction. To date, however, there appear to have been no studies comparing different CA constructors when only some of the CA test cases are executed. In this paper, we present an investigation of five popular CA constructors: ACTS, Jenny, PICT, CASA and TCA. We conducted empirical studies examining the five programs, focusing on interaction coverage and fault detection. The experimental results show that when there is no preference or special justification for using other CA constructors, then Jenny is recommended, because it achieves better interaction coverage and fault detection than the other four constructors in many cases. Our results also show that when using ACTS or CASA, their CAs must be prioritized before testing. The main reason for this is that although these CAs can achieve considerable interaction coverage and fault detection when a large number of test cases are executed, they may also produce the lowest rates of fault detection and interaction coverage. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
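Interaction coverage at strength 2 (pairwise), the metric underlying such comparisons, can be measured directly. A small sketch, assuming tests are tuples of parameter values; the function names are illustrative:

```python
from itertools import combinations, product

def pairwise_coverage(tests, domains):
    """Fraction of all 2-way value combinations covered by a test suite.
    domains[i] lists the values parameter i may take."""
    k = len(domains)
    total = covered = 0
    for i, j in combinations(range(k), 2):
        all_pairs = set(product(domains[i], domains[j]))
        seen = {(t[i], t[j]) for t in tests}    # pairs the suite exercises
        total += len(all_pairs)
        covered += len(all_pairs & seen)
    return covered / total
```

Running this on growing prefixes of a CA gives the partial-execution coverage curves the study compares.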
46. Performance Analysis of Prioritization and Contention Control Algorithm in Wireless Body Area Networks.
- Author
-
B, Nithya, Ranjan, Naveen, and A, Justin Gopinath
- Subjects
BODY area networks ,ROUTING algorithms ,ALGORITHMS ,NETWORK performance ,MARKOV processes ,ENERGY consumption - Abstract
A Wireless Body Area Network (WBAN) is composed of a group of energy-efficient, miniature, invasive/non-invasive, lightweight sensors that monitor human body health conditions for early detection and treatment of life-threatening diseases. Due to the stringent demands of WBAN, such as energy efficiency, reliability and low delay, the development of an efficient contention control algorithm that maximizes throughput by reducing collisions is exceptionally crucial. In this context, this paper proposes an adaptive algorithm, namely the Prioritization and Contention Control (PCC) algorithm, to minimize collisions, latency and energy consumption. The first phase of the proposed algorithm prioritizes sensors using run-time metrics, granting channel access only to the potential nodes so they can send their data. This leads to fewer collisions among sensors, thereby reducing retransmission attempts. In the second phase, the Contention Window (CW) size is predicted using the queue length and collision rate, which accurately mimic the current channel status. The dynamic estimation of CW aids in minimizing channel access delay, collisions and energy consumption, thereby enhancing overall network performance. The performance of the proposed PCC algorithm is validated with a 2D Markov model and NS2 simulation in terms of throughput, packet delivery ratio, delay and remaining energy. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
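The abstract does not give the paper's CW prediction formula. A hedged sketch of the second phase's idea, scaling CW between its bounds from queue occupancy and collision rate with an assumed equal weighting (both the weighting and the function name are assumptions):

```python
def next_cw(cw_min, cw_max, queue_len, queue_cap, collision_rate):
    """Scale the contention window between cw_min and cw_max using
    queue occupancy and the observed collision rate (both in [0, 1]).
    The 50/50 weighting is an illustrative assumption."""
    pressure = 0.5 * (queue_len / queue_cap) + 0.5 * collision_rate
    cw = cw_min + pressure * (cw_max - cw_min)
    return max(cw_min, min(cw_max, int(round(cw))))
```

An idle node with no collisions keeps the minimum window; a saturated, collision-prone node is pushed to the maximum, deferring its channel access.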
47. A Guess-And-Determine Attack On SNOW-V Stream Cipher.
- Author
-
Jiao, Lin, Li, Yongqiang, and Hao, Yonglin
- Subjects
STREAM ciphers ,MOBILE communication systems ,FINITE state machines ,SHIFT registers ,ALGORITHMS ,DYNAMIC programming - Abstract
The 5G mobile communication system, also known as IMT-2020, comes with the main objective of increasing current data rates up to several gigabits per second. To meet the accompanying demand for super-high-speed encryption, EIA and EEA algorithms face some challenges. The 3GPP standardization organization expects to increase the security level to a 256-bit key length, and the international cryptographic community has responded actively with cipher designs and standard applications. SNOW-V is such a proposal offered by the SNOW family design team, with a revision of the SNOW 3G architecture in terms of the linear feedback shift register (LFSR) and finite state machine (FSM): the LFSR part is new and operates at eight times the speed of the FSM, consisting of two shift registers, each feeding into the other, while the FSM grows to three 128-bit registers and employs two instances of the full AES encryption round function for its update. It takes a 128-bit IV, employs an 896-bit internal state and produces 128-bit keystream blocks. The result is competitive in a pure software environment, making use of both AES-NI and AVX acceleration instructions. Thus, the security evaluation of SNOW-V is essential and urgent, since there is scarcely any definite security bound for it. In this paper, we propose a byte-based guess-and-determine attack on SNOW-V with complexity $2^{406}$ using only seven keystream blocks. We first improve the heuristic guessing-path auto-searching algorithm based on dynamic programming by adding an initial guessing set, which is iteratively modified by sieving out unnecessary guessing variables, in order to correct the guessing path according to the cipher structure and finally launch a smaller guessing basis. For the specific design, we split all the computing units into bytes and rewrite all the internal operations correspondingly. We establish a backward-clock linear equation system according to the circular construction of the LFSR part.
Then we further simplify the equations to adapt them to the input requirements of the heuristic guessing-path auto-searching algorithm. Finally, the derived guessing path is modified to account for the pre-simplification and post-reduction. This is the first complete guess-and-determine attack on SNOW-V as well as the first specific security evaluation of the full cipher. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
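The guessing-basis idea underlying guess-and-determine attacks can be illustrated on a toy equation system. This is a brute-force miniature under assumed variable and equation names, not the SNOW-V attack or the paper's heuristic auto-search:

```python
from itertools import combinations

def closure(known, equations):
    """Propagate: an equation over a set of variables determines its last
    remaining unknown once all the other variables are known."""
    known = set(known)
    changed = True
    while changed:
        changed = False
        for eq in equations:
            unknown = eq - known
            if len(unknown) == 1:
                known |= unknown
                changed = True
    return known

def minimal_guessing_basis(variables, equations):
    """Exhaustively find a smallest set of guessed variables from which
    propagation determines every variable (feasible only for toy sizes)."""
    for size in range(len(variables) + 1):
        for guess in combinations(sorted(variables), size):
            if closure(guess, equations) == variables:
                return set(guess)
```

The attack complexity is driven by the basis size (each guessed byte costs a factor of 2^8), which is why the paper works to shrink the guessing path.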
48. The Construction of a Majority-Voting Ensemble Based on the Interrelation and Amount of Information of Features.
- Author
-
Aydın, Fatih and Aslan, Zafer
- Subjects
ALGORITHMS ,MACHINE learning ,SEARCH algorithms ,CONSTRUCTION - Abstract
In this paper, we introduce a new ensemble learning algorithm called VIBES, which outperforms 85 machine learning algorithms in the WEKA tool. The new algorithm is based on three major processes: (i) making an assumption regarding whether features are dependent on or independent of each other, (ii) computing the amount of information of the features when they are assumed to be dependent on each other and then sorting them in descending order of that amount of information, and (iii) speeding up the algorithm by optimizing the forward search algorithm used in the construction of the final hypothesis from base-learner hypotheses. The experiments show that choosing the appropriate assumption can boost learning performance when features are independent of each other, and that considering features according to their amount of information provides high accuracy and diversity of base-learner models. According to the experimental results, the developed algorithm has the highest average classification accuracy rate across the 33 datasets. The highest and lowest average classification accuracy rates are 89.80 and 78.03%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
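The majority-voting and forward-search ingredients can be sketched in miniature. This is a generic greedy ensemble selector on toy predictions, not the VIBES feature-information ordering, which the abstract does not specify in detail:

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: list of per-learner label lists; returns combined labels."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def forward_select(learner_preds, truth):
    """Greedily add the base learner that most improves ensemble accuracy,
    stopping when no addition helps."""
    chosen, best_acc = [], 0.0
    remaining = list(range(len(learner_preds)))
    while remaining:
        scored = [(accuracy(majority_vote([learner_preds[i]
                                           for i in chosen + [r]]), truth), r)
                  for r in remaining]
        acc, pick = max(scored)
        if acc <= best_acc:
            break
        chosen.append(pick)
        remaining.remove(pick)
        best_acc = acc
    return chosen, best_acc
```

VIBES's contribution is in how candidates are ordered before this forward search; the greedy loop itself is the part optimized in process (iii).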
49. Detection of splicing forgery using differential evolution and wavelet decomposition.
- Author
-
Kashyap, Abhishek, Suresh, B, and Gupta, Hariom
- Subjects
FORGERY ,ALGORITHMS ,RANDOM noise theory ,DIFFERENTIAL evolution ,IMAGE processing ,IMAGE compression - Abstract
In this paper, we propose a computationally efficient algorithm to detect splicing (copy-create) forgery. The proposed method combines differential evolution and wavelet decomposition: the differential evolution algorithm automatically generates customized parameter values for tampered images, and wavelet decomposition is used to process large images under a block-based framework. Our method is resilient to distortions such as the addition of Gaussian noise, scaling and compression of the forged images. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
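A one-level Haar decomposition is the simplest instance of the wavelet step such methods rely on. The abstract does not name the wavelet family or detail the differential-evolution stage, so this is a generic sketch with illustrative names:

```python
def haar_1d(signal):
    """One level of the (unnormalized) Haar transform: pairwise
    averages (approximation) and differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_2d(block):
    """One-level 2D Haar decomposition: rows first, then columns,
    yielding the LL, LH, HL and HH sub-bands."""
    rows = [haar_1d(r) for r in block]
    lo = [a for a, _ in rows]
    hi = [d for _, d in rows]
    def cols(mat):
        a_rows, d_rows = [], []
        for col in zip(*mat):
            a, d = haar_1d(list(col))
            a_rows.append(a)
            d_rows.append(d)
        return ([list(r) for r in zip(*a_rows)],
                [list(r) for r in zip(*d_rows)])
    ll, lh = cols(lo)
    hl, hh = cols(hi)
    return ll, lh, hl, hh
```

Splicing detectors typically inspect the detail sub-bands block by block, where inserted regions leave noise or compression inconsistencies.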
50. Driving Route Recommendation With Profit Maximization in Ride Sharing.
- Author
-
Huang, Longji, Huang, Jianbin, Xu, Yueshen, Zhao, Zhiqiang, and Zhang, Zhenghao
- Subjects
PROFIT maximization ,RIDESHARING ,SEARCH algorithms ,CITY traffic ,SIMULATED annealing ,ALGORITHMS ,TABU search algorithm ,EXPECTATION-maximization algorithms - Abstract
Due to its positive impact on urban traffic and the environment, ride sharing has attracted a lot of research attention recently. However, most existing research has focused on profit maximization or itinerary minimization for drivers; little work has covered adjustable price functions and matching algorithms for batch requests. In this paper, we propose a request-matching algorithm and an adjustable price function that benefit drivers as well as passengers. Our request-matching algorithm consists of an exact search algorithm and a group search algorithm. The exact search algorithm consists of three steps. The first step prunes invalid groups according to the total number of passengers and the capacity of the vehicles. The second step filters out the candidate groups according to the compatibility of requests in the same group. The third step obtains the most profitable group via the adjustable price function and recommends it to drivers. To enhance the efficiency of the exact search algorithm, we further design an improved group search algorithm based on the idea of simulated annealing. Extensive experimental results show that our method can improve the income of drivers and reduce the expense of passengers. Meanwhile, ride sharing keeps the seat utilization rate at 80%, while driving distance is reduced by 30%. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
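The three-step exact search can be sketched directly from the abstract. The `price` and `compatible` functions below are placeholders, since the paper's adjustable price function and compatibility criterion are not given:

```python
from itertools import combinations

def best_group(requests, capacity, price):
    """Exact search over request groups: prune by vehicle capacity and
    pairwise compatibility, then keep the most profitable group."""
    best, best_profit = None, float('-inf')
    for size in range(1, len(requests) + 1):
        for group in combinations(requests, size):
            if sum(r['passengers'] for r in group) > capacity:
                continue                       # step 1: capacity pruning
            if not all(compatible(a, b) for a, b in combinations(group, 2)):
                continue                       # step 2: compatibility filter
            profit = price(group)              # step 3: adjustable price function
            if profit > best_profit:
                best, best_profit = group, profit
    return best, best_profit

def compatible(a, b):
    """Toy compatibility: pickup locations within a fixed detour threshold."""
    return abs(a['pickup'] - b['pickup']) <= 5
```

The enumeration is exponential in the batch size, which is what motivates the paper's simulated-annealing-based group search for larger batches.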