178 results
Search Results
2. An optimal algorithm for 2-bounded delay buffer management with lookahead.
- Author
- Kobayashi, Koji M.
- Subjects
- *ALGORITHMS, *ONLINE algorithms, *DETERMINISTIC algorithms, *QUALITY of service, *COMPUTER science
- Abstract
• We consider a variant of the online buffer management problem. • This variant is called the 2-bounded delay buffer management problem with lookahead. • We design an optimal online algorithm for this variant. The bounded delay buffer management problem, which was proposed by Kesselman et al. (STOC 2001 and SIAM Journal on Computing 33(3), 2004), is an online problem focusing on buffer management of a switch supporting Quality of Service (QoS). The problem definition is as follows: Packets arrive at a buffer over time and each packet is specified by its release time, deadline and value. An algorithm can transmit at most one packet from the buffer at each integer time and gains a packet's value as profit if it transmits the packet between its release time and its deadline. The objective of this problem is to maximize the gained profit. We say that an instance of the problem is s-bounded if for any packet, an algorithm has at most s chances to transmit it. For any s ≥ 2, Hajek (CISS 2001) showed that the competitive ratio of any deterministic algorithm is at least (1 + √5)/2 ≈ 1.618. Recently, Veselý et al. (SODA 2019) designed an online algorithm matching the lower bound. Böhm et al. (ISAAC 2016 and Theoretical Computer Science, 2019) introduced the lookahead ability to an online algorithm: at a time t, the algorithm obtains information about packets arriving at time t + 1. They showed that for s = 2, there is an algorithm which achieves a competitive ratio of (−1 + √13)/2 ≈ 1.303, and that the competitive ratio of any deterministic algorithm is at least (1 + √17)/4 ≈ 1.280. In this paper, for the 2-bounded model with lookahead, we design an algorithm with a matching competitive ratio of (1 + √17)/4. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
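The packet model in the abstract above can be made concrete with a tiny greedy baseline: at each integer time step, transmit the most valuable pending packet. This is only an illustrative sketch of the problem setup (the packet tuples and the `greedy_profit` helper are hypothetical), not the paper's optimal lookahead algorithm.

```python
# Hedged sketch of the bounded-delay buffer management model:
# packets = list of (release, deadline, value); one transmission per
# integer time step; profit = sum of values of packets sent in time.
def greedy_profit(packets, horizon):
    sent = set()
    profit = 0
    for t in range(horizon + 1):
        # packets that are released, not expired, and not yet transmitted
        pending = [i for i, (r, d, v) in enumerate(packets)
                   if r <= t <= d and i not in sent]
        if pending:
            # greedy choice: highest value first (not the paper's policy)
            best = max(pending, key=lambda i: packets[i][2])
            sent.add(best)
            profit += packets[best][2]
    return profit
```

For instance, with packets `[(0, 1, 5), (0, 0, 3), (1, 1, 4)]` and horizon 1, the greedy schedule sends the value-5 packet at time 0 and the value-4 packet at time 1.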
3. Almost linear time algorithms for minsum k-sink problems on dynamic flow path networks.
- Author
- Higashikawa, Yuya, Katoh, Naoki, Teruyama, Junichi, and Watase, Koji
- Subjects
- *POLYNOMIAL time algorithms, *ALGORITHMS, *DATA structures, *COMPUTER science, *UNITS of time
- Abstract
• Study the minsum k-sink problems on dynamic flow path networks for the confluent/non-confluent flow models. • Develop algorithms which run in almost linear time regardless of the number of sinks k. • Improve upon the previous results for the same problem with the confluent flow model. • Provide the first polynomial time algorithm for the problem with the non-confluent flow model. • The main theoretical contribution is to construct novel data structures for solving subproblems in polylogarithmic time. We address facility location problems on dynamic flow path networks. A dynamic flow path network consists of an undirected path with positive edge lengths, positive edge capacities, and positive vertex weights. A path can be considered as a road, an edge length as the distance along the road, and a vertex weight as the number of people at the site. An edge capacity limits the number of people that can enter the edge per unit time. In a dynamic flow network, given particular points on edges or vertices, called sinks, all the people evacuate from the vertices to the sinks as quickly as possible. The problem is to find the locations of sinks on a dynamic flow path network such that the aggregate evacuation time (i.e., the sum of evacuation times for all the people) to the sinks is minimized. We consider two models of the problem: the confluent flow model and the non-confluent flow model. In the former, evacuation is restricted so that all the people at a vertex must evacuate to the same sink; in the latter, there is no such restriction. In this paper, for both models, we develop algorithms which run in almost linear time regardless of the number of sinks. It should be stressed that for the confluent flow model, our algorithm improves upon the previous result by Benkoczi et al. [Theoretical Computer Science, 2020], and the one for the non-confluent flow model is the first polynomial time algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
4. A novel algorithmic construction for deductions of categorical polysyllogisms by Carroll's diagrams.
- Author
- Senturk, Ibrahim, Gursoy, Necla Kircali, Oner, Tahsin, and Gursoy, Arif
- Subjects
- *ARTIFICIAL intelligence, *ALGORITHMS, *AUTHORSHIP in literature, *COMPUTER science, *SYLLOGISM
- Abstract
In this work, with the help of a calculus system, syllogistic logic with Carroll's diagrams (SLCD), we construct a useful algorithm for the possible deductions of polysyllogisms (soriteses). This algorithm makes a general deduction in categorical syllogisms with the help of diagrams that depict each proposition of a polysyllogism. The developed calculus system PolySLCD (PSLCD) allows a formal deduction from a premise set by combining biliteral and triliteral diagrammatic representations with a simple algorithmic structure. This algorithm can be used to deduce new conclusions, step by step, through recursive conclusion sets obtained from the premises of categorical polysyllogisms. The fundamental contributions of this paper are accurately deducing conclusions from sets corresponding to given premises, mirroring exact human reasoning, with a single algorithm, and designing this algorithm based on SLCD. It is therefore well suited to computer-aided solutions. Since the algorithm is set-based, it is novel in the literature and can readily benefit researchers using polysyllogisms in different scientific branches, such as computer science, decision-making systems and artificial intelligence. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
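Independent of the paper's diagrammatic calculus, the set-based flavor of polysyllogistic deduction can be illustrated by chaining universal affirmative premises ("All X are Y") as subset relations. The encoding below is a deliberately simplified illustration, not the PSLCD calculus itself.

```python
# Sketch: treat each premise "All X are Y" as a pair (X, Y) and derive
# all chained conclusions by computing the transitive closure.
def deduce_all_chain(premises):
    """premises: iterable of (X, Y) meaning 'All X are Y'.
    Returns every derivable pair (A, B) with 'All A are B'."""
    closure = set(premises)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                # Barbara-style chaining: All a are b, All b are d => All a are d
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure
```

From the sorites "All A are B; All B are C; All C are D" this derives, among others, the final conclusion "All A are D".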
5. Subspace clustering based on latent low rank representation with Frobenius norm minimization.
- Author
- Yu, Song and Yiquan, Wu
- Subjects
- *FROBENIUS manifolds, *FROBENIUS algebras, *ARTIFICIAL neural networks, *ALGORITHMS, *COMPUTER science
- Abstract
The problem of subspace clustering, which refers to segmenting a collection of data samples approximately drawn from a union of linear subspaces, is considered in this paper. Among existing subspace clustering algorithms, low rank representation (LRR) based subspace clustering is a very powerful method and has demonstrated good performance. The latent low rank representation (LLRR) subspace clustering algorithm improves on the original LRR algorithm when the observed data samples are insufficient, and its clustering accuracy is higher than that of LRR. Recently, a Frobenius norm minimization based LRR algorithm was proposed whose clustering accuracy is also higher than that of LRR, demonstrating the effectiveness of the Frobenius norm as another convex surrogate of the rank function. Combining LLRR and the Frobenius norm, a new low rank representation subspace clustering algorithm is proposed in this paper: the nuclear norm in the LLRR algorithm is replaced by the Frobenius norm, and the resulting optimization problem is solved via the alternating direction method of multipliers (ADMM). Experimental results show that, compared with LRR, LLRR and several other state-of-the-art subspace clustering algorithms, the proposed algorithm achieves higher clustering accuracy, and compared with LLRR its running time is reduced significantly. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
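One reason the Frobenius norm is attractive here: replacing the nuclear norm in a plain self-expressive model, min ‖X − XZ‖_F² + λ‖Z‖_F², yields a closed-form solution, so no iterative solver is needed for that simplified objective. The least-squares-style sketch below (function name and λ default are illustrative) omits the latent variables and the ADMM machinery of the actual LLRR-based method.

```python
import numpy as np

def lsr_affinity(X, lam=0.1):
    """X: d x n data matrix, columns are samples.
    Closed-form self-representation Z = (X^T X + lam*I)^{-1} X^T X,
    then a symmetric affinity |Z| + |Z|^T suitable for spectral clustering."""
    n = X.shape[1]
    G = X.T @ X                              # Gram matrix of the samples
    Z = np.linalg.solve(G + lam * np.eye(n), G)
    return np.abs(Z) + np.abs(Z).T           # symmetrize for clustering
```

The resulting affinity matrix would normally be fed to spectral clustering to obtain the subspace segmentation.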
6. Collaborative linear manifold learning for link prediction in heterogeneous networks.
- Author
- Liu, JiaHui, Jin, Xu, Hong, YuXiang, Liu, Fan, Chen, QiXiang, Huang, YaLou, Liu, MingMing, Xie, MaoQiang, and Sun, FengChi
- Subjects
- *ALGORITHMS, *COMPUTER science, *MANIFOLDS (Mathematics), *TOPOLOGY
- Abstract
Link prediction in heterogeneous networks aims at predicting missing interactions between pairs of nodes with the help of the topology of the target network and interconnected auxiliary networks. It has attracted considerable attention from both the computer science and bioinformatics communities in recent years. In this paper, we introduce a novel Collaborative Linear Manifold Learning (CLML) algorithm. It optimizes the consistency of node similarities by collaboratively using the manifolds embedded between the target network and the auxiliary network. Experiments on four benchmark datasets have demonstrated the outstanding advantages of CLML, not only in its high prediction performance compared to baseline methods, but also in its capability to predict unknown interactions in the target networks accurately and effectively. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
7. A simple linear time algorithm for the locally connected spanning tree problem on maximal planar chordal graphs.
- Author
- Calamoneri, Tiziana, Dell'Orefice, Matteo, and Monti, Angelo
- Subjects
- *SPANNING trees, *SUBGRAPHS, *PLANAR graphs, *ALGORITHMS, *COMPUTER science
- Abstract
Abstract: A locally connected spanning tree (LCST) T of a graph G is a spanning tree of G such that, for each node, its neighborhood in T induces a connected subgraph in G. The problem of determining whether a graph contains an LCST has been proved to be NP-complete, even if the graph is planar or chordal. The main result of this paper is a simple algorithm that, given a maximal planar chordal graph, determines in linear time whether it contains an LCST, and produces one if it exists. We give an analogous result for the case when the input graph is a maximal outerplanar graph. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
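The defining LCST condition is easy to check directly; the hard part solved in the paper is *finding* such a tree. A small verification sketch, assuming graphs are given as adjacency dicts (the representation and function name are illustrative):

```python
from collections import deque

def is_locally_connected(G, T):
    """G, T: adjacency dicts {node: set(neighbors)}, T a spanning tree of G.
    Checks the LCST condition: for each node v, v's neighborhood in T
    induces a connected subgraph of G."""
    for v in T:
        nbrs = T[v]
        if len(nbrs) <= 1:
            continue  # a single neighbor is trivially connected
        # BFS restricted to the subgraph of G induced by nbrs
        start = next(iter(nbrs))
        seen = {start}
        q = deque([start])
        while q:
            u = q.popleft()
            for w in G[u] & nbrs:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        if seen != nbrs:
            return False
    return True
```

For example, a star tree inside a triangle satisfies the condition, while a path 1–2–3 used as its own spanning tree does not (nodes 1 and 3 are T-neighbors of 2 but non-adjacent in G).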
8. Construction method of concept lattice based on improved variable precision rough set.
- Author
- Zhang, Ruiling, Xiong, Shengwu, and Chen, Zhong
- Subjects
- *COMPUTER science, *ARTIFICIAL neural networks, *DISCRIMINANT analysis, *ALGORITHMS, *PARAMETER estimation
- Abstract
This paper focuses on how to construct a concept lattice effectively and efficiently based on an improved variable precision rough set. On the basis of preprocessing the formal concepts, an algorithm is proposed that determines the value range of the variable precision parameter β according to the approximate classification quality. An improved β-upper and lower distribution attribute reduction algorithm is also proposed based on the improved variable precision rough set; it can be used for attribute reduction on the original data of the concept lattice and to eliminate redundant knowledge or noise from the formal context. For the reduced formal context, the paper seamlessly combines the concept construction algorithm with an improved rule acquisition algorithm and proposes a novel approach to concept lattice construction based on the improved variable precision rough set. Finally, a concept lattice generation prototype system is developed; comprehensive experiments demonstrate the effectiveness of the improved algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
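For readers unfamiliar with variable precision rough sets: the β-lower and β-upper approximations relax classical rough set inclusion to a precision threshold. A minimal sketch under the usual conventions (an equivalence class joins the lower approximation when its inclusion degree in X is at least β, and the upper approximation when the degree exceeds 1 − β); the function and argument names are illustrative, not the paper's notation.

```python
def vprs_approximations(classes, X, beta):
    """classes: partition of the universe (list of sets); X: target set;
    beta: precision threshold with 0.5 < beta <= 1."""
    lower, upper = set(), set()
    for E in classes:
        inclusion = len(E & X) / len(E)   # degree to which E lies inside X
        if inclusion >= beta:             # beta-lower approximation
            lower |= E
        if inclusion > 1 - beta:          # beta-upper approximation
            upper |= E
    return lower, upper
```

With β = 1 this collapses to the classical lower/upper approximations; smaller β tolerates partially misclassified classes.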
9. How to catch L2-heavy-hitters on sliding windows.
- Author
- Braverman, Vladimir, Gelles, Ran, and Ostrovsky, Rafail
- Subjects
- *ALGORITHMS, *HISTOGRAMS, *COMPUTER science, *APPLIED mathematics, *INFORMATION science, *MATHEMATICAL statistics, *MATHEMATICAL programming
- Abstract
Finding heavy elements (heavy-hitters) in streaming data is one of the central and well-understood tasks. Despite the importance of this problem, when considering the sliding windows model of streaming (where elements eventually expire), the problem of finding L2-heavy elements has remained completely open despite multiple papers and considerable success in finding L1-heavy elements. Since the L2-heavy element problem does not satisfy certain conditions, existing methods for sliding windows algorithms, such as smooth histograms or exponential histograms, are not directly applicable to it. In this paper, we develop the first polylogarithmic-memory algorithm for finding L2-heavy elements in the sliding window model. Our technique allows us not only to find L2-heavy elements, but also heavy elements with respect to any Lp with 0 < p ≤ 2 on sliding windows. By this we completely “close the gap” and resolve the question of finding Lp-heavy elements in the sliding window model with polylogarithmic memory, since it is well known that for p > 2 this task is impossible. We demonstrate the broader applicability of our method on two additional examples: we show how to obtain a sliding window approximation of the similarity of two streams, and of the fraction of elements that appear exactly a specified number of times within the window (the α-rarity problem). In these two illustrative examples of our method, we replace the current expected memory bounds with worst case bounds. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
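To fix terminology: an element is L2-heavy in a window if its frequency exceeds a constant fraction of the L2 norm of the frequency vector. The naive exact tracker below uses memory linear in the window size; the paper's contribution is achieving this with polylogarithmic memory, which this sketch makes no attempt to reproduce. Class and method names are illustrative.

```python
from collections import Counter, deque

class SlidingWindowL2:
    """Exact (linear-memory) sliding-window frequencies with an
    L2-heavy-hitter query. Baseline for intuition only."""
    def __init__(self, window):
        self.window = window
        self.items = deque()
        self.freq = Counter()

    def add(self, x):
        self.items.append(x)
        self.freq[x] += 1
        if len(self.items) > self.window:   # expire the oldest element
            old = self.items.popleft()
            self.freq[old] -= 1
            if self.freq[old] == 0:
                del self.freq[old]

    def l2_heavy(self, phi):
        """Elements whose in-window frequency exceeds phi * ||f||_2."""
        l2 = sum(c * c for c in self.freq.values()) ** 0.5
        return {x for x, c in self.freq.items() if c > phi * l2}
```

For the stream a, a, b, c, a with window 4, the in-window frequencies are a:2, b:1, c:1, so only a is 0.7-L2-heavy (2 > 0.7·√6).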
10. Food image classification using local appearance and global structural information.
- Author
- Nguyen, Duc Thanh, Zong, Zhimin, Ogunbona, Philip O., Probst, Yasmine, and Li, Wanqing
- Subjects
- *IMAGE processing, *IMAGE analysis, *COMPUTER science, *APPLIED mathematics, *ALGORITHMS
- Abstract
Abstract: This paper proposes food image classification methods exploiting both local appearance and global structural information of food objects. The contribution of the paper is threefold. First, non-redundant local binary pattern (NRLBP) is used to describe the local appearance information of food objects. Second, the structural information of food objects is represented by the spatial relationship between interest points and encoded using a shape context descriptor formed from those interest points. Third, we propose two methods of integrating appearance and structural information for the description and classification of food images. We evaluated the proposed methods on two datasets. Experimental results verified that the combination of local appearance and structural features can improve classification performance. [Copyright © Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
11. Image medium similarity measure and its applications.
- Author
- Zhou, Ningning, Hong, Long, and Zhang, Shaobai
- Subjects
- *ALGORITHMS, *MATHEMATICAL models, *IMAGE processing, *SET theory, *APPLIED mathematics, *COMPUTER science
- Abstract
Abstract: Similarity measure plays an important role in many image processing fields. This paper introduces medium mathematical systems and establishes a novel image medium similarity measure between two pixels, and between two image sets, based on the measure of medium truth degree. Moreover, an image edge detection algorithm, an image matching algorithm and an image medium fidelity measure method governed by the medium similarity measure are discussed in this paper. Experimental results show that the proposed similarity measure is effective. [Copyright © Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
12. Coupled local–global adaptation for multi-source transfer learning.
- Author
- Liu, Jieyan, Li, Jingjing, and Lu, Ke
- Subjects
- *MACHINE learning, *ROBUST control, *ARTIFICIAL neural networks, *ALGORITHMS, *COMPUTER science
- Abstract
This paper presents a novel unsupervised multi-source domain adaptation approach, named coupled local–global adaptation (CLGA). At the global level, in order to maximize adaptation ability, CLGA regards multiple domains as a unity and jointly mitigates the gaps in both the marginal and conditional distributions between the source and target datasets. At the local level, with the intention of maximizing discriminative ability, CLGA investigates the relationships among distinct domains and exploits both the class and domain manifold structures embedded in the data samples. We formulate both local and global adaptation as a concise optimization problem, and further derive an analytic solution for the objective function. Extensive evaluations verify that CLGA performs better than several existing methods not only in multi-source adaptation tasks but also in single-source scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
13. A new greedy algorithm for sparse recovery.
- Author
- Wang, Qian and Qu, Gangrong
- Subjects
- *COMPRESSED sensing, *SIGNAL sampling, *ALGORITHMS, *ARTIFICIAL neural networks, *COMPUTER science
- Abstract
Compressed sensing (CS) has been one of the great successes of applied mathematics in the last decade. This paper proposes a new method, Quasi-Newton Projection Pursuit (QNPP), combining the advantages of the Compressive Sampling Matching Pursuit (CoSaMP) algorithm and the Quasi-Newton Iteration Projection (QNIP) algorithm, for the recovery of sparse signals from underdetermined linear systems. To obtain the new algorithm, the least-squares technique in CoSaMP is used to accelerate convergence and QNIP is modified slightly. The convergence rate of QNPP is studied under a condition on the restricted isometry constant of the measurement matrix that is smaller than that of QNIP. A fast version of QNPP is also proposed which uses Richardson's iteration to reduce computation time. The numerical results show that the proposed algorithms have a higher recovery rate and faster convergence than existing techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
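For context on greedy sparse recovery, here is plain Orthogonal Matching Pursuit (OMP), a simpler relative of the CoSaMP and QNIP algorithms combined in the paper. It is a baseline sketch, not the proposed QNPP; recovery is only guaranteed under suitable conditions on the measurement matrix (here we test with orthonormal columns, where OMP is exact).

```python
import numpy as np

def omp(A, y, k):
    """Greedily recover a k-sparse x with y ≈ A x.
    A: m x n measurement matrix with (near-)normalized columns."""
    m, n = A.shape
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the selected support (as in CoSaMP)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = y - A @ x
    return x
```

The least-squares step over the current support is exactly the ingredient the abstract credits with accelerating convergence.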
14. The Receptor Density Algorithm.
- Author
- Owens, Nick D.L., Greensted, Andrew, Timmis, Jon, and Tyrrell, Andy
- Subjects
- *ALGORITHMS, *APPLICATION software, *ANOMALY detection (Computer security), *IMMUNE system, *COMPUTER performance, *COMPUTER science
- Abstract
Abstract: This paper describes the biological and theoretical foundations of a new Artificial Immune System, the Receptor Density Algorithm. The algorithm is developed with inspiration from T cell signalling processes and has applications in anomaly detection. Connections between the Receptor Density Algorithm and kernel density estimation with exponential smoothing are demonstrated. Finally, the paper evaluates the algorithm's performance on two types of anomaly detection problem. [Copyright © Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
15. Tangible images of real life scenes.
- Author
- Zhang, Xingzi, Goesele, Michael, and Sourin, Alexei
- Subjects
- *COMPUTER science, *ENGINEERING, *ALGORITHMS, *POLYGONS
- Abstract
Haptic technologies allow for adding a new “touching” modality to virtual scenes. 3D reconstruction of a real-life scene, however, often results in millions of polygons which cannot be simultaneously visualized and haptically rendered. In this paper, we propose a way of haptic interaction with real-life scenes in which multiple original images of the real scenes are augmented with reconstructed polygon meshes. We present our solution to the problems of aligning the haptic model with the images and of interactive haptic rendering of large polygon meshes with reconstruction artifacts. In particular, the presented collision detection algorithm is not restricted by any hypothesis and is robust enough to support smooth interaction with millions of polygons. The feasibility and usability of the proposed solution are evaluated in a user study. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
16. Lessons from collecting a million biometric samples.
- Author
- Phillips, P. Jonathon, Flynn, Patrick J., and Bowyer, Kevin W.
- Subjects
- *BIOMETRIC identification, *COMPUTER science, *GESTURE, *HUMAN facial recognition software, *ALGORITHMS
- Abstract
Over the past decade, independent evaluations have become commonplace in many areas of experimental computer science, including face and gesture recognition. A key attribute of many successful independent evaluations is a curated data set. Desired aspects associated with these data sets include appropriateness to the experimental design, a corpus size large enough to allow statistically rigorous characterization of results, and the availability of comprehensive metadata that allow inferences to be made on various data set attributes. In this paper, we review a ten-year biometric sampling effort that enabled the creation of several key biometrics challenge problems. We summarize the design and execution of data collections, identify key challenges, and convey some lessons learned. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
17. An excursion in reaction systems: From computer science to biology
- Author
- Corolli, Luca, Maj, Carlo, Marini, Fabrizio, Besozzi, Daniela, and Mauri, Giancarlo
- Subjects
- *BIOCHEMISTRY, *GENE expression, *COMPUTER science, *ITERATIVE methods (Mathematics), *ALGORITHMS, *MATHEMATICAL models
- Abstract
Abstract: Reaction systems are a formal model based on the regulation mechanisms of facilitation and inhibition between biochemical reactions, which underlie the functioning of living cells. The aim of this paper is to explore the expressive power of reaction systems as a modeling framework, showing how their basic assumptions and properties can be exploited to formalize computer science and biology oriented problems. In this view, we first provide a reaction-based description of an iterative algorithm to solve the Tower of Hanoi puzzle. Then, we show how the regulation of gene expression in the lac operon, involved in the metabolism of lactose in Escherichia coli cells, can be formalized in terms of reaction systems. Finally, we present a method to derive, given a reaction system with reactions, a functionally equivalent system with reactions using simplification methods of boolean expressions. Some final remarks and directions for future research conclude the paper. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
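The core semantics of reaction systems can be sketched in a few lines: a reaction is a triple (reactants, inhibitors, products); it is enabled on a state when all reactants are present and no inhibitor is; and the result of a state is the union of the products of all enabled reactions. The encoding below is a minimal illustration, not the paper's full framework.

```python
def rs_result(reactions, state):
    """One step of a reaction system.
    reactions: iterable of (reactants, inhibitors, products) frozensets;
    state: frozenset of currently present entities."""
    nxt = set()
    for R, I, P in reactions:
        # enabled iff all reactants present and no inhibitor present
        if R <= state and not (I & state):
            nxt |= P   # results of enabled reactions are cumulated
    return frozenset(nxt)
```

A two-reaction "toggle" system (a produces b unless b is present; b produces a unless a is present) alternates between the states {a} and {b} under repeated application.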
18. Preference-oriented fixed-priority scheduling for periodic real-time tasks.
- Author
- Begam, Rehana, Xia, Qin, Zhu, Dakai, and Aydin, Hakan
- Subjects
- *REAL-time control, *ALGORITHMS, *MONOTONIC functions, *COMPUTER science
- Abstract
Traditionally, real-time scheduling algorithms prioritize tasks solely based on their timing parameters and cannot effectively handle tasks that have different execution preferences . In this paper, for a set of periodic real-time tasks running on a single processor, where some tasks are preferably executed as soon as possible (ASAP) and others as late as possible (ALAP) , we investigate Preference-Oriented Fixed-Priority (POFP) scheduling techniques. First, based on Audsley’s Optimal Priority Assignment (OPA) , we study a Preference Priority Assignment (PPA) scheme that attempts to assign ALAP (ASAP) tasks lower (higher) priorities, whenever possible. Then, by considering the non-work-conserving strategy, we exploit the promotion times of ALAP tasks and devise an online dual-queue based POFP scheduling algorithm. Basically, with the objective of fulfilling the execution preferences of all tasks, the POFP scheduler retains ALAP tasks in the delay queue until their promotion times while putting ASAP tasks into the ready queue right after their arrivals. In addition, to further expedite (delay) the executions of ASAP (ALAP) tasks using system slack, runtime techniques based on dummy and wrapper tasks are investigated. The proposed schemes are evaluated through extensive simulations. The results show that, compared to the classical fixed-priority Rate Monotonic Scheduling (RMS) algorithm, the proposed priority assignment scheme and POFP scheduler can achieve significant improvement in terms of fulfilling the execution preferences of both ASAP and ALAP tasks, which can be further enhanced at runtime with the wrapper-task based slack management technique. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
19. A large population size can be unhelpful in evolutionary algorithms
- Author
- Chen, Tianshi, Tang, Ke, Chen, Guoliang, and Yao, Xin
- Subjects
- *ALGORITHMS, *POLYNOMIALS, *EXPONENTIAL functions, *COMPUTER science, *POPULATION, *PERFORMANCE evaluation
- Abstract
Abstract: The utilization of populations is one of the most important features of evolutionary algorithms (EAs). There have been many studies analyzing the impact of different population sizes on the performance of EAs. However, most such studies are based on computational experiments, with only a few exceptions. The common wisdom so far appears to be that a large population would increase population diversity and thus help an EA. Indeed, increasing the population size has been a commonly used strategy in tuning an EA when it did not perform as well as expected for a given problem. He and Yao (2002) showed theoretically that for some problem instance classes, a population can help to reduce the runtime of an EA from exponential to polynomial time. This paper analyzes the role of populations in EAs further and shows rigorously that large populations may not always be useful. Conditions under which large populations can be harmful are discussed. Although the theoretical analysis was carried out on one multimodal problem using a specific type of EA, it has much wider implications: it reveals certain problem characteristics, present in the problem considered here and potentially in other problems, that lead to the disadvantages of large population sizes. The analytical approach developed in this paper can also be applied to analyzing EAs on other problems. [Copyright © Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
20. Applying IsRewritten criterion on Buchberger algorithm
- Author
- Hashemi, Amir and M.-Alizadeh, Benyamin
- Subjects
- *COMPUTER algorithms, *MATHEMATICAL proofs, *MATHEMATICAL analysis, *PERFORMANCE evaluation, *COMPUTER science, *ALGORITHMS
- Abstract
Abstract: Faugère’s algorithm is one of the fastest algorithms to compute Gröbner bases. It uses two criteria, namely the criterion and the IsRewritten criterion, to detect useless critical pairs (see Faugère (2002)). The IsRewritten criterion has been used in the algorithm, but it was not explicitly stated in the related paper. In this paper, we first give a complete proof of the IsRewritten criterion and then, using a signature structure on Buchberger’s algorithm, we apply this criterion to Buchberger’s algorithm. We have implemented a new algorithm (based on the above results) in Maple to compute a Gröbner basis of a general ideal, and we evaluate its performance via some examples. [Copyright © Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
21. Ranking uncertain sky: The probabilistic top-k skyline operator
- Author
- Zhang, Ying, Zhang, Wenjie, Lin, Xuemin, Jiang, Bin, and Pei, Jian
- Subjects
- *ELECTRONIC data processing, *INFORMATION storage & retrieval systems, *PROBABILITY theory, *DENSITY functionals, *ALGORITHMS, *COMPUTER science, *ALGEBRA
- Abstract
Abstract: Many recent applications involve processing and analyzing uncertain data. In this paper, we combine the feature of top-k objects with that of the skyline to model the problem of top-k skyline objects over uncertain data. The problem of efficiently computing top-k skyline objects on large uncertain datasets is challenging in both the discrete and continuous cases. In this paper, we first develop an efficient exact algorithm for computing the top-k skyline objects in the discrete case. To address applications where each object may have a massive set of instances or a continuous probability density function, we also develop an efficient randomized algorithm with an approximation guarantee. Moreover, our algorithms can be immediately extended to efficiently compute the p-skyline, that is, to retrieve the uncertain objects with skyline probabilities above a given threshold. Our extensive experiments on synthetic and real data demonstrate the efficiency of both algorithms and the high accuracy of the randomized algorithm. They also show that our techniques significantly outperform the existing techniques for computing the p-skyline. [Copyright © Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
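Underlying the probabilistic version in the abstract is the ordinary (certain-data) skyline: a point belongs to the skyline if no other point dominates it, i.e. is at least as good in every dimension and strictly better in at least one. The sketch below assumes smaller-is-better coordinates; it illustrates only this classical notion, not the paper's probabilistic top-k operator.

```python
def dominates(p, q):
    """True if p dominates q (smaller-is-better in every dimension)."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def skyline(points):
    """Naive O(n^2) skyline: keep points no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For the points (1,3), (2,2), (3,1), (2,3), (3,3) the skyline consists of the first three; (2,3) is dominated by (1,3) and (3,3) by (2,2).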
22. Suffix trees for inputs larger than main memory
- Author
- Barsky, Marina, Stege, Ulrike, and Thomo, Alex
- Subjects
- *COMPUTER storage devices, *DATA structures, *ALGORITHMS, *NUCLEOTIDE sequence, *HUMAN genome, *RANDOM access memory, *COMPUTER science, *ELECTRONIC data processing
- Abstract
Abstract: A suffix tree is a fundamental data structure for string searching algorithms. Unfortunately, when it comes to the use of suffix trees in real-life applications, the current methods for constructing suffix trees do not scale for large inputs. As suffix trees are larger than the input sequences and quickly outgrow the main memory, the first attempts at building large suffix trees focused on algorithms which avoid massive random access to the trees being built. However, all the existing practical algorithms perform random access to the input string, thus requiring in essence that the input be small enough to be kept in main memory. The constantly growing pool of string data, especially biological sequences, requires us to build suffix trees for much larger strings. We are the first to present an algorithm which is able to construct suffix trees for input sequences significantly larger than the size of the available main memory. Both the input string and the suffix tree are kept on disk, and the algorithm is designed to avoid multiple random I/Os to both of them. (A preliminary version of this paper was presented as a short paper at the CIKM 2009 conference.) As a proof of concept, we show that our method allows us to build the suffix tree for 12 GB of real DNA sequences in 26 h on a single machine with 2 GB of RAM. This input is four times the size of the Human Genome, and the construction of suffix trees for inputs of such magnitude has never been reported before. [Copyright © Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
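As a small in-memory illustration of suffix-based indexing (the paper's actual contribution is disk-based suffix *tree* construction, which this does not reproduce), here is a naive suffix array with binary-search substring lookup; for large inputs one would use a linear-time construction instead.

```python
def suffix_array(s):
    """Naive construction: sort suffix start positions by the suffixes
    themselves (O(n^2 log n) worst case, fine for small strings)."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def contains(s, sa, pattern):
    """Binary search in the suffix array for a suffix starting with pattern."""
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + len(pattern)] < pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and s[sa[lo]:].startswith(pattern)
```

For "banana" the sorted suffixes begin at positions 5, 3, 1, 0, 4, 2, and any substring query reduces to two string comparisons per binary-search step.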
23. Bicriteria scheduling concerned with makespan and total completion time subject to machine availability constraints
- Author
- Huo, Yumei and Zhao, Hairong
- Subjects
- *COMPUTER scheduling, *MATHEMATICAL optimization, *PARALLEL computers, *ALGORITHMS, *COMPUTER science, *CRITERION (Theory of knowledge)
- Abstract
Abstract: In the past, research on multiple criteria scheduling has assumed that the number of available machines is fixed during the whole scheduling horizon, and research on scheduling with limited machine availability has assumed that there is only a single criterion to be optimized. Neither assumption may hold in real life. In this paper we simultaneously consider both bicriteria scheduling and scheduling with limited machine availability. We focus on a two-parallel-machine environment, and the goal is to find preemptive schedules that optimize both makespan and total completion time subject to machine availability. Our main contribution is to show that three bicriteria scheduling problems are in P by providing polynomial time optimal algorithms. [Copyright © Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
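As background for the total-completion-time criterion: on a single always-available machine, the shortest-processing-time (SPT) order is the classical optimum for total completion time. The sketch below shows only this building block, not the paper's two-machine preemptive algorithms with availability constraints.

```python
def spt_total_completion(jobs):
    """jobs: list of processing times.
    Returns (job order, total completion time) under the SPT rule."""
    order = sorted(range(len(jobs)), key=lambda i: jobs[i])
    t, total = 0, 0
    for i in order:
        t += jobs[i]      # completion time of job i
        total += t        # accumulate sum of completion times
    return order, total
```

For processing times [3, 1, 2], SPT schedules jobs 1, 2, 0 with completion times 1, 3, 6, giving a total of 10; any other order gives a strictly larger total.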
24. Rules admissible in transitive temporal logic , sufficient condition
- Author
- Rybakov, Vladimir
- Subjects
- *MATHEMATICAL logic, *INFERENCE (Logic), *COMPUTER science, *DECISION making, *FINITE model theory
- Abstract
Abstract: The paper develops a technique for computing inference rules admissible in temporal logic . The problem of whether there exists an algorithm recognizing inference rules admissible in is a long-standing open problem. The logic has neither the extension property nor the co-cover property, which previously were central instruments for constructing decision algorithms for admissibility in modal logics (e.g. the reflexive and transitive modal logic ). Our paper uses a linear-compression property, a zigzag-ray property and a zigzag stretching property which hold for . The main result of the paper is a sufficient condition for the admissibility of inference rules in : it is shown that all rules which are valid in special finite models (with an effective upper bound on size) must be admissible in . (This work was supported by the Engineering and Physical Sciences Research Council (EPSRC), U.K., grant EP/F014406/1.) [Copyright © Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
25. Space–time tradeoffs in negative cycle detection – An empirical analysis of the Stressing Algorithm
- Author
-
Subramani, K., Tauras, C., and Madduri, K.
- Subjects
- *
COMPUTER science , *EMPIRICAL research , *ALGORITHMS , *COST analysis , *OPERATIONS research - Abstract
Abstract: This paper discusses space–time tradeoffs associated with algorithms for the problem of detecting negative cost cycles in networks (NCCD). NCCD is one of the more ubiquitous problems in computer science and operations research, with applications ranging from program verification and real-time scheduling to image segmentation and shortest path identification. Typical algorithmic analysis, whether theoretical or empirical, focuses almost exclusively on the running time of the algorithm. However, there exist applications in which space is just as important a parameter as time is. This is especially true when the problem instances are very large, as is the case in program verification. Consequently, an algorithm that minimizes running time while ignoring the space overhead may be impractical. In this paper, we analyze a number of the more common algorithms for NCCD from the perspectives of both time and space, with a view towards providing a space–time tradeoff for the practitioner. All the algorithms discussed in this paper (with the exception of Network Simplex) run in time on a network with m arcs and n vertices; however, their space requirements range from (Stressing Algorithm) to (all the Bellman–Ford and Network Simplex variants). Our empirical results demonstrate that in the cases where space is paramount, the Stressing Algorithm is a useful alternative to the Bellman–Ford variants. [Copyright &y& Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
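The Bellman–Ford variants that the abstract above compares against can be sketched in a few lines. The following is a generic textbook negative-cycle detector (function and variable names are illustrative, not taken from the paper), shown as a baseline rather than the Stressing Algorithm itself:

```python
def has_negative_cycle(n, arcs):
    """Detect a negative-cost cycle with Bellman-Ford relaxation.

    n    -- number of vertices, labeled 0..n-1
    arcs -- list of (u, v, cost) arcs
    This is the classic O(n*m)-time, O(n)-space baseline, using a
    virtual source that reaches every vertex at distance 0.
    """
    dist = [0] * n
    for _ in range(n - 1):
        changed = False
        for u, v, c in arcs:
            if dist[u] + c < dist[v]:
                dist[v] = dist[u] + c
                changed = True
        if not changed:
            return False  # converged early: no negative cycle exists
    # one extra pass: any further improvement implies a negative cycle
    return any(dist[u] + c < dist[v] for u, v, c in arcs)
```

The space-versus-time tradeoff the paper studies concerns exactly the bookkeeping beyond the `dist` array that faster Bellman–Ford variants maintain.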
26. Executability of scenarios in Petri nets
- Author
-
Lorenz, Robert, Juhás, Gabriel, Bergenthum, Robin, Desel, Jörg, and Mauser, Sebastian
- Subjects
- *
PETRI nets , *SET theory , *ALGORITHMS , *MATHEMATICAL transformations , *SEMANTICS , *COMPUTER science - Abstract
Abstract: In this paper, we show that it can be tested in polynomial time as to whether a scenario is an execution of a Petri net. This holds for a wide variety of Petri net classes, ranging from elementary nets to general inhibitor nets. Scenarios are given by causal structures expressing causal dependencies and concurrency among events. In the case of elementary nets and of place/transition nets, such causal structures are partial orders among transition occurrences. For several extended Petri net classes, the extension of partial orders to stratified order structures is considered. The algorithms are based on the representation of the non-sequential behavior of Petri nets by so-called token flow functions and a characterization of Petri net executions called token flow property. This property allows nontrivial transformations into flow optimization problems, which can be solved in polynomial time. The paper is a revised, consolidated and extended version of the conference papers [G. Juhás, R. Lorenz, J. Desel, Can I execute my scenario in your net?, in: G. Ciardo, P. Darondeau (Eds.), ICATPN, in: Lecture Notes in Computer Science, Springer, 2005, pp. 289–308; R. Lorenz, S. Mauser, R. Bergenthum, Testing the Executability of Scenarios in General Inhibitor Nets, in: ACSD, IEEE Computer Society, 2007, pp. 167–176] and includes parts of the habilitation thesis [R. Lorenz, Szenario-basierte Verifikation und Synthese von Perinetzen: Theorie und Anwendungen, Habilitation, 2006]. [Copyright &y& Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
27. Efficient algorithms for two generalized 2-median problems and the group median problem on trees
- Author
-
Chan, Chi-Yuan, Ku, Shan-Chyun, Lu, Chi-Jen, and Wang, Biing-Feng
- Subjects
- *
LOCATION problems (Programming) , *TREE graphs , *ALGORITHMS , *COMPUTER science , *GRAPH theory - Abstract
Abstract: The -median problem on a tree is to find a set of vertices on that minimizes the sum of distances from ’s vertices to . In this paper, we study two generalizations of the 2-median problem, which are obtained by imposing constraints on the two vertices selected as a 2-median: one is to limit their distance while the other is to limit their eccentricity. Previously, both the best upper bounds of these two generalizations were [A. Tamir, D. Perez-Brito, J.A. Moreno-Perez, A polynomial algorithm for the -centdian problem on a tree, Networks 32 (1998) 255–262; B.-F. Wang, S.-C. Ku, K.-H. Shi, Cost-optimal parallel algorithms for the tree bisector problem and applications, IEEE Transactions on Parallel and Distributed Systems 12 (9) (2001) 888–898]. In this paper, we solve both in time. We also study cases when linear time algorithms exist for the two generalizations. For example, we solve both in linear time when edge lengths and vertex weights are all polynomially bounded integers. Furthermore, we consider the relaxation of the two generalized problems by allowing 2-medians on any position of edges, instead of just on vertices, and we give -time algorithms for them. A problem, named the tree marker problem, arises several times in our approaches to the two generalized 2-median problems, and we give an -time algorithm for this problem. We also use this algorithm to speedup an algorithm of Gupta and Punnen [S.K. Gupta, A.P. Punnen, Group center and group median of a tree, European Journal of Operational Research 65 (1993) 400–406] for the group median problem, improving the running time from to , where is the number of groups in the input. [Copyright &y& Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
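For checking fast implementations of the distance-constrained generalization described above, a brute-force reference is handy. The sketch below (unweighted tree, hypothetical helper names, cubic-time) evaluates every vertex pair whose distance respects the limit; it is far from the near-linear algorithms of the paper, but useful for testing them:

```python
from collections import deque
from itertools import combinations

def constrained_2_median(adj, limit):
    """Brute-force the distance-constrained 2-median on an unweighted tree.

    adj   -- adjacency list {v: [neighbors, ...]}
    limit -- maximum allowed distance between the two chosen medians
    Returns (cost, (a, b)) minimizing the sum over all vertices of the
    distance to the nearer of the two medians.
    """
    def bfs(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist

    d = {v: bfs(v) for v in adj}           # all-pairs distances
    best = None
    for a, b in combinations(adj, 2):
        if d[a][b] > limit:
            continue                       # pair violates the constraint
        cost = sum(min(d[a][v], d[b][v]) for v in adj)
        if best is None or cost < best[0]:
            best = (cost, (a, b))
    return best
```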
28. Formal modeling, performance estimation, and model checking of wireless sensor network algorithms in Real-Time Maude
- Author
-
Ölveczky, Peter Csaba and Thorvaldsen, Stian
- Subjects
- *
PROGRAMMING languages , *SENSOR networks , *MATHEMATICAL models , *REAL-time programming , *ALGORITHMS , *SIMULATION methods & models , *COMPUTER science - Abstract
Abstract: The purpose of this paper is to show how the rewriting-logic-based Real-Time Maude language and tool can be used to formally model, simulate, and model check advanced wireless sensor network (WSN) algorithms. This is done by first proposing some general techniques for modeling and analyzing WSN algorithms, and then by showing how these techniques have been applied to the modeling, performance estimation, and model checking of the state-of-the-art optimal geographical density control (OGDC) algorithm. Wireless sensor networks in general, and the OGDC algorithm in particular, pose many challenges to their formal specification and analysis, including novel communication forms, spatial entities, time-dependent and probabilistic features, and the need to analyze both correctness and performance. We focus on Monte Carlo simulations to evaluate the performance of OGDC. Extensive simulations with up to 800 sensor nodes, and comparison with the ns-2 simulations of OGDC, indicate that Real-Time Maude simulations provide fairly accurate performance estimates of WSN algorithms. As a consequence, simulating the high-level Real-Time Maude model of a WSN algorithm eliminates the need for implementing it on a simulation tool to get a faithful estimate of its performance, while providing much greater flexibility in defining the appropriate simulation scenario; in addition, Real-Time Maude model checking can search for “corner case” bugs and evaluate best-case and worst-case performance. Some of the techniques presented in this paper are also used in an ongoing analysis effort of another state-of-the-art WSN algorithm. [Copyright © Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
29. Local spreading algorithms for autonomous robot systems
- Author
-
Cohen, Reuven and Peleg, David
- Subjects
- *
ALGORITHMS , *ROBOTICS , *COMPUTER science , *SCIENCE - Abstract
Abstract: This paper studies local algorithms for autonomous robot systems, namely, algorithms that use only information of the positions of a bounded number of their nearest neighbors. The paper focuses on the spreading problem. It defines measures for the quality of spreading, presents a local algorithm for the one-dimensional spreading problem, proves its convergence to the equally spaced configuration and discusses its convergence rate in the synchronous and semi-synchronous settings. It then presents a local algorithm achieving the exact equally spaced configuration in finite time in the synchronous setting, and proves it is time optimal for local algorithms. Finally, the paper also proposes a possible algorithm for the two-dimensional case and presents partial simulation results of its effectiveness. [Copyright &y& Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
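The one-dimensional rule the abstract analyzes is essentially "move to the midpoint of your two nearest neighbors". A minimal synchronous-setting sketch (an illustration of the idea, not the authors' exact pseudocode):

```python
def spread_step(xs):
    """One synchronous round on the line: each interior robot moves to
    the midpoint of its two nearest neighbors; the two endpoint robots
    hold position.  Repeating this converges to the equally spaced
    configuration between the endpoints."""
    return ([xs[0]]
            + [(xs[i - 1] + xs[i + 1]) / 2 for i in range(1, len(xs) - 1)]
            + [xs[-1]])

def spread(xs, rounds):
    for _ in range(rounds):
        xs = spread_step(xs)
    return xs
```

Running `spread` on a clustered configuration shows the geometric convergence the paper proves: the positions approach equal spacing, while the finite-time exact algorithm of the paper goes further and reaches it exactly.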
30. Some classes of term rewriting systems inferable from positive data
- Author
-
Krishna Rao, M.R.K.
- Subjects
- *
ALGEBRA , *DATA , *COMPUTER science , *ALGORITHMS - Abstract
Abstract: In this paper, we study the inferability of term rewriting systems (trss, for short) from positive examples alone. Two classes of trss inferable from positive data are presented, namely, simple flat trss and linear-bounded trss. (We use ‘programs’ and ‘systems’ interchangeably, as trss are very similar to functional programs; in fact, the trss considered in this paper follow the so-called constructor discipline and are essentially functional programs.) These classes of trss are rich enough to include many divide-and-conquer programs such as addition, doubling, logarithm, tree-count, list-count, split, append, reverse, etc. The classes of simple flat trss and linear-bounded trss are incomparable, i.e., there are functions that can be computed by simple flat trss but not by linear-bounded trss and vice versa. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
31. A tree-projection-based algorithm for multi-label recurrent-item associative-classification rule generation
- Author
-
Rak, Rafal, Kurgan, Lukasz, and Reformat, Marek
- Subjects
- *
ALGORITHMS , *COMPUTER science , *COMPUTER logic , *SYSTEMS development , *COMPUTER programming , *INFORMATION technology , *INFORMATION science - Abstract
Abstract: Associative-classification is a promising classification method based on association-rule mining. A significant amount of work has already been dedicated to the process of building a classifier based on association rules. However, relatively little research has been performed on association-rule mining from multi-label data, in which each example can belong to, and thus should be classified into, more than one class. This paper addresses the part of associative-classification that is most demanding with respect to computational cost, namely the efficient generation of association rules. This task can be achieved using different frequent-pattern mining methods. In this paper, we propose a new method based on the state-of-the-art tree-projection-based frequent-pattern mining algorithm. This algorithm is modified to improve its efficiency and extended to accommodate multi-label recurrent-item associative-classification rule generation. The proposed algorithm is tested and compared with an Apriori-based associative-classification rule generator on two large datasets. [Copyright © Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
32. A tight analysis of the Katriel–Bodlaender algorithm for online topological ordering
- Author
-
Liu, Hsiao-Fei and Chao, Kun-Mao
- Subjects
- *
ALGORITHMS , *COMPUTER science , *SYSTEMS design , *COMPUTER programming , *INFORMATION science - Abstract
Abstract: Katriel and Bodlaender [Irit Katriel, Hans L. Bodlaender, Online topological ordering, ACM Transactions on Algorithms 2 (3) (2006) 364–379] modify the algorithm proposed by Alpern et al. [Bowen Alpern, Roger Hoover, Barry K. Rosen, Peter F. Sweeney, F. Kenneth Zadeck, Incremental evaluation of computational circuits, in: Proceedings of the First Annual ACM–SIAM Symposium on Discrete Algorithms (SODA), 1990, pp. 32–42] for maintaining the topological order of the nodes of a directed acyclic graph while inserting edges, and prove that their algorithm runs in time and has an lower bound. In this paper, we give a tight analysis of their algorithm by showing that it runs in time. (In this paper, we assume that ; in fact, our analysis can be easily extended to prove that the algorithm runs in time without this assumption.) [Copyright © Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
33. A simpler analysis of Burrows–Wheeler-based compression
- Author
-
Kaplan, Haim, Landau, Shir, and Verbin, Elad
- Subjects
- *
CODING theory , *DATA compression (Telecommunication) , *DATA compression , *COMPUTER science , *ALGORITHMS - Abstract
In this paper, we present a new technique for worst-case analysis of compression algorithms which are based on the Burrows–Wheeler Transform. We mainly deal with the algorithm proposed by Burrows and Wheeler in their first paper on the subject [M. Burrows, D.J. Wheeler, A block sorting lossless data compression algorithm, Technical Report 124, Digital Equipment Corporation, Palo Alto, California, 1994], called bw0. This algorithm consists of the following three essential steps: (1) Obtain the Burrows–Wheeler Transform of the text, (2) Convert the transform into a sequence of integers using the move-to-front algorithm, (3) Encode the integers using Arithmetic code or any order-0 encoding (possibly with run-length encoding). We achieve a strong upper bound on the worst-case compression ratio of this algorithm. This bound is significantly better than bounds known to date and is obtained via simple analytical techniques. Specifically, we show that for any input string , and , the length of the compressed string is bounded by where is the th order empirical entropy, is a constant depending only on and on the size of the alphabet, and is the standard zeta function. As part of the analysis, we prove a result on the compressibility of integer sequences, which is of independent interest. Finally, we apply our techniques to prove a worst-case bound on the compression ratio of a compression algorithm based on the Burrows–Wheeler Transform followed by distance coding, for which worst-case guarantees have never been given. We prove that the length of the compressed string is bounded by . This bound is better than the bound we give for bw0. [Copyright &y& Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
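Step (2) of the bw0 pipeline described above, the move-to-front transform, is what turns the BWT output's runs of equal symbols into runs of small integers for the order-0 coder. A minimal sketch (the initial alphabet ordering passed in here is an assumption of this illustration):

```python
def mtf_encode(text, alphabet):
    """Move-to-front encoding.

    Emits, for each symbol, its current index in the table, then moves
    that symbol to the front.  Repeated symbols therefore emit 0, which
    an order-0 / arithmetic coder compresses very well.
    """
    table = list(alphabet)
    out = []
    for ch in text:
        i = table.index(ch)
        out.append(i)
        table.insert(0, table.pop(i))  # move the symbol to the front
    return out
```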
34. The bipanconnectivity and -panconnectivity of the folded hypercube
- Author
-
Fang, Jywe-Fei
- Subjects
- *
GRAPH theory , *ALGEBRA , *BIPARTITE graphs , *COMPUTER science , *INFORMATION technology - Abstract
Abstract: The interconnection network considered in this paper is the folded hypercube, an attractive variant of the well-known hypercube. The folded hypercube is superior to the hypercube in many criteria, such as diameter, connectivity and fault diameter. In this paper, we study the path-embedding aspects, bipanconnectivity and -panconnectivity, of the -dimensional folded hypercube. A bipartite graph is bipanconnected if each pair of vertices and are joined by bipanconnected paths that include a path of each length satisfying and is even, where is the number of vertices, and denotes the shortest distance between and . A graph is -panconnected if each pair of vertices and are joined by paths that include a path of each length ranging from to . In this paper, we introduce a new graph called the Path-of-Ladders. By presenting algorithms to embed the Path-of-Ladders into the folded hypercube, we show that the -dimensional folded hypercube is bipanconnected when is an odd number. We also show that the -dimensional folded hypercube is strictly -panconnected when is an even number. That is, each pair of vertices are joined by paths that include a path of each length ranging from to ; and the value reaches the lower bound of the problem. [Copyright © Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
35. A novel solution for maze traversal problems using artificial neural networks
- Author
-
Srinivasan, S., Mital, D.P., and Haque, S.
- Subjects
- *
ARTIFICIAL neural networks , *DIGITAL communications , *COMPUTER science , *ALGORITHMS - Abstract
Abstract: In this paper we address the problem of finding a path through a maze of a given size. Traditional ways of finding a path through a maze employ recursive algorithms in which unwanted paths or non-paths are eliminated recursively. Neural networks, with their parallel and distributed processing, seem to provide a natural solution to this problem. We present a biologically inspired solution using a two-level hierarchical neural network both for the mapping of the maze and for the generation of the path, if one exists. For a maze of size S, the time taken is a function of S (O(S)), and a shortest path (if more than one path exists) can be found in around S cycles, where each cycle involves all the neurons doing their processing in parallel. The solution presented in this paper finds all valid paths, and a simple technique for finding the shortest path among them is also given. The results are very encouraging, and more applications of the network setup used in this report are currently being investigated. These include synthetic modeling of biological neural mechanisms, traversal of decision trees, modeling of associative neural networks (as in relating the visual and auditory stimuli of a given phenomenon), and surgical micro-robot trajectory planning and execution. [Copyright © Elsevier]
- Published
- 2004
- Full Text
- View/download PDF
36. Harmony search: Current studies and uses on healthcare systems.
- Author
-
Abdulkhaleq, Maryam T., Rashid, Tarik A., Alsadoon, Abeer, Hassan, Bryar A., Mohammadi, Mokhtar, Abdullah, Jaza M., Chhabra, Amit, Ali, Sazan L., Othman, Rawshan N., Hasan, Hadil A., Azad, Sara, Mahmood, Naz A., Abdalrahman, Sivan S., Rasul, Hezha O., Bacanin, Nebojsa, and Vimal, S.
- Subjects
- *
CURIOSITY , *FLEXIBLE structures , *MEDICAL care , *COMPUTER science , *EVOLUTIONARY algorithms , *SEARCH algorithms , *ALGORITHMS , *LONGITUDINAL method - Abstract
One of the popular metaheuristic search algorithms is Harmony Search (HS). It has been verified that HS can find solutions to optimization problems due to its balanced exploratory and convergence behavior and its simple and flexible structure. This capability makes the algorithm preferable to be applied in several real-world applications in various fields, including healthcare systems, different engineering fields, and computer science. The popularity of HS urges us to provide a comprehensive survey of the literature on HS and its variants on health systems, analyze its strengths and weaknesses, and suggest future research directions. In this review paper, the current studies and uses of harmony search are studied in four main domains. (i) The variants of HS, including its modifications and hybridization. (ii) Summary of the previous review works. (iii) Applications of HS in healthcare systems. (iv) And finally, an operational framework is proposed for the applications of HS in healthcare systems. The main contribution of this review is intended to provide a thorough examination of HS in healthcare systems while also serving as a valuable resource for prospective scholars who want to investigate or implement this method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
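For readers new to HS, the core loop is small. Below is a minimal continuous-minimization sketch using the standard parameter names from the HS literature (harmony memory size, harmony memory considering rate, pitch adjusting rate, bandwidth); it is a generic illustration of the algorithm, not any of the healthcare variants surveyed above:

```python
import random

def harmony_search(f, bounds, iters=2000, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, seed=1):
    """Minimize f over box-constrained variables with basic Harmony Search.

    bounds -- list of (lo, hi) per variable
    hms    -- harmony memory size; hmcr -- memory considering rate
    par    -- pitch adjusting rate;  bw  -- bandwidth (fraction of range)
    """
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:            # take a value from memory
                x = mem[rng.randrange(hms)][j]
                if rng.random() < par:         # pitch adjustment
                    x += rng.uniform(-bw, bw) * (hi - lo)
            else:                              # random consideration
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = max(range(hms), key=lambda i: f(mem[i]))
        if f(new) < f(mem[worst]):
            mem[worst] = new                   # replace the worst harmony
    return min(mem, key=f)
```

The balance the survey highlights is visible here: `hmcr` drives convergence (exploitation of memory) while the random-consideration branch and pitch adjustment keep exploring.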
37. An accurate slap fingerprint based verification system.
- Author
-
Gupta, Puneet and Gupta, Phalguni
- Subjects
- *
HUMAN fingerprints , *COMPUTER science , *NEURAL computers , *BIOMETRY , *ALGORITHMS , *DATABASES - Abstract
A slap fingerprint based verification system is generally highly secure, but it cannot handle the problems of temporal deformation and pose variation. In this paper, a template update algorithm is proposed to ensure better performance of the system. During verification, each single fingerprint is matched, and the matching scores of all fingerprints are fused using an adaptive score-level fusion. The performance of the proposed system has been evaluated on a challenging database containing 1800 slap images acquired from 150 subjects. Experimental results show that the accuracy is increased by more than 3.5%. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
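Score-level fusion of per-finger matches can be as simple as a quality-weighted average; the sketch below shows that generic form only (the paper's adaptive fusion rule is more involved and is not reproduced here):

```python
def fuse_scores(scores, qualities):
    """Quality-weighted average of per-finger matching scores.

    scores    -- matching scores, one per fingerprint in the slap image
    qualities -- non-negative weights (e.g. image-quality estimates)
    Fingers with higher quality contribute more to the fused score,
    which is then compared against a single decision threshold.
    """
    total = sum(qualities)
    return sum(s * q for s, q in zip(scores, qualities)) / total
```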
38. Noise detection in the meta-learning level.
- Author
-
Garcia, Luís P.F., Carvalho, André C.P.L.F. de, and Lorena, Ana C.
- Subjects
- *
MACHINE learning , *DATA distribution , *COMPUTER science , *ALGORITHMS , *NEURAL computers , *PREDICTION theory - Abstract
The presence of noise in real data sets can harm the predictive performance of machine learning algorithms. There are several noise filtering techniques whose goal is to improve the quality of the data in classification tasks. These techniques usually scan the data for noise identification in a preprocessing step. Nonetheless, this is a non-trivial task and some noisy data can remain unidentified, while safe data can also be removed. The bias of each filtering technique influences its performance on a particular data set. Therefore, there is no single technique that can be considered the best for all domains or data distribution and choosing a particular filter is not straightforward. Meta-learning has been largely used in the last years to support the recommendation of the most suitable machine learning algorithm(s) for a new data set. This paper presents a meta-learning recommendation system able to predict the expected performance of noise filters in noisy data identification tasks. For such, a meta-base is created, containing meta-features extracted from several corrupted data sets along with the performance of some noise filters when applied to these data sets. Next, regression models are induced from this meta-base to predict the expected performance of the investigated filters in the identification of noisy data. The experimental results show that meta-learning can provide a good recommendation of the most promising filters to be applied to new classification data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
39. Active learning on anchorgraph with an improved transductive experimental design.
- Author
-
Fu, Weijie, Hao, Shijie, and Wang, Meng
- Subjects
- *
SUPERVISED learning , *ACTIVE learning , *ALGORITHMS , *GRAPH theory , *COMPUTER science , *NEURAL computers - Abstract
Anchorgraph-based learning methods have met with success in modeling large data for scalable semi-supervised learning. However, like most graph-based learning algorithms, they are usually built with a randomly selected labeled set chosen in advance. Although many pool-based active learning methods have been proposed, they often require relatively large computational and storage resources, which tends to impose an extra burden on the learning system. Thus, in this paper, we propose a novel active learning method named anchor-based transductive experimental design (ATED). By fully utilizing the representational power of anchors, the improved method efficiently enhances the performance of the original anchorgraph-based learning while introducing much less extra cost in computation and storage. Extensive experimental results on real-world datasets have validated our approach in terms of classification accuracy and computational efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
40. INFFC: An iterative class noise filter based on the fusion of classifiers with noise sensitivity control.
- Author
-
Sáez, José A., Galar, Mikel, Luengo, Julián, and Herrera, Francisco
- Subjects
- *
ITERATIVE methods (Mathematics) , *NOISE control , *ALGORITHMS , *ARTIFICIAL intelligence , *COMPUTER science - Abstract
In classification, noise may deteriorate the system performance and increase the complexity of the models built. In order to mitigate its consequences, several approaches have been proposed in the literature. Among them, noise filtering, which removes noisy examples from the training data, is one of the most used techniques. This paper proposes a new noise filtering method that combines several filtering strategies in order to increase the accuracy of the classification algorithms used after the filtering process. The filtering is based on the fusion of the predictions of several classifiers used to detect the presence of noise. We translate the idea behind multiple classifier systems, where the information gathered from different models is combined, to noise filtering. In this way, we consider the combination of classifiers instead of using only one to detect noise. Additionally, the proposed method follows an iterative noise filtering scheme that allows us to avoid the usage of detected noisy examples in each new iteration of the filtering process. Finally, we introduce a noisy score to control the filtering sensitivity, in such a way that the amount of noisy examples removed in each iteration can be adapted to the necessities of the practitioner. The first two strategies (use of multiple classifiers and iterative filtering) are used to improve the filtering accuracy, whereas the last one (the noisy score) controls the level of conservation of the filter removing potentially noisy examples. The validity of the proposed method is studied in an exhaustive experimental study. We compare the new filtering method against several state-of-the-art methods to deal with datasets with class noise and study their efficacy in three classifiers with different sensitivity to noise. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
41. Fuzzy-rough community in social networks.
- Author
-
Kundu, Suman and Pal, Sankar K.
- Subjects
- *
FUZZY sets , *SOCIAL networks , *COMPUTER science , *ALGORITHMS , *GRANULAR computing , *BIG data - Abstract
Community detection in a social network is a well-known problem that has been studied in computer science since the early 2000s. The algorithms available in the literature mainly follow two strategies: one allows a node to be a part of multiple communities with equal membership, while the other considers a disjoint partition of the whole network where a node belongs to only one community. In this paper, we propose a novel community detection algorithm which identifies fuzzy-rough communities, where a node can be a part of many groups with different memberships of association. The algorithm runs on a new framework of social network representation based on fuzzy granular theory. A new index, viz. normalized fuzzy mutual information, is used to quantify the goodness of the detected communities. Experimental results on benchmark data show the superiority of the proposed algorithm compared to other well-known methods, particularly when the network contains overlapping communities. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
42. Message scheduling for real-time interprocessor communication.
- Author
-
Waldherr, Stefan, Knust, Sigrid, and Aust, Stefan
- Subjects
- *
MULTIPROCESSORS , *TEXT messages , *REAL time scheduling (Computer science) , *COMPUTER science , *GRAPH coloring , *ALGORITHMS - Abstract
In this paper an efficient algorithm is proposed which optimizes periodic message scheduling in a real-time multiprocessor system. The system is based on a many-core single-chip computer architecture and uses a multistage baseline network for inter-core communication. Due to its basic architecture, internal blockings can occur during data transfers, i.e. the baseline network is not real-time capable by itself. Therefore, we propose a scheduling algorithm that may be performed before the execution of an application in order to compute a non-blocking schedule of periodic message transfers. Additionally, we optimize the clock rate of the network subject to the constraint that all data transfers can be performed in a non-blocking way. Our solution algorithm is based on a generalized graph coloring model and a randomized greedy approach. The algorithm was tested on some realistic communication scenarios as they appear in modern electronic car units. Computational results show the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
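The coloring view taken above can be illustrated with a plain greedy coloring over a message conflict graph; the paper's algorithm generalizes this model and adds a randomized greedy strategy on top, so the sketch below (with illustrative names) shows only the base idea:

```python
def greedy_slots(messages, conflicts, order=None):
    """Assign each periodic message a time slot so that conflicting
    transfers (those that would block each other in the baseline
    network) never share a slot -- i.e. greedily color the conflict
    graph with slots as colors.

    messages  -- iterable of message ids
    conflicts -- set of frozensets {m1, m2} that must not share a slot
    order     -- optional processing order (greedy results depend on it)
    """
    slot = {}
    for m in (order or list(messages)):
        used = {slot[n] for n in slot if frozenset((m, n)) in conflicts}
        s = 0
        while s in used:       # smallest slot free of conflicts with m
            s += 1
        slot[m] = s
    return slot
```

The number of slots used bounds the schedule length per period, which is what the paper then trades off against the network clock rate.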
43. A comprehensive survey of clustering algorithms: State-of-the-art machine learning applications, taxonomy, challenges, and future research prospects.
- Author
-
Ezugwu, Absalom E., Ikotun, Abiodun M., Oyelade, Olaide O., Abualigah, Laith, Agushaka, Jeffery O., Eke, Christopher I., and Akinyelu, Andronicus A.
- Subjects
- *
MACHINE learning , *ALGORITHMS , *ARTIFICIAL intelligence , *DATA mining , *COMPUTER science - Abstract
Clustering is an essential tool in data mining research and applications. It is the subject of active research in many fields of study, such as computer science, data science, statistics, pattern recognition, artificial intelligence, and machine learning. Several clustering techniques have been proposed and implemented, and most of them successfully find excellent quality or optimal clustering results in the domains mentioned earlier. However, there has been a gradual shift in the choice of clustering methods among domain experts and practitioners alike, which is precipitated by the fact that most traditional clustering algorithms still depend on the number of clusters provided a priori. These conventional clustering algorithms cannot effectively handle real-world data clustering analysis problems where the number of clusters in data objects cannot be easily identified. Also, they cannot effectively manage problems where the optimal number of clusters for a high-dimensional dataset cannot be easily determined. Therefore, there is a need for improved, flexible, and efficient clustering techniques. Recently, a variety of efficient clustering algorithms have been proposed in the literature, and these algorithms produced good results when evaluated on real-world clustering problems. This study presents an up-to-date systematic and comprehensive review of traditional and state-of-the-art clustering techniques for different domains. This survey considers clustering from a more practical perspective. It shows the outstanding role of clustering in various disciplines, such as education, marketing, medicine, biology, and bioinformatics. It also discusses the application of clustering to different fields attracting intensive efforts among the scientific community, such as big data, artificial intelligence, and robotics. This survey paper will be beneficial for both practitioners and researchers. 
It will serve as a good reference point for researchers and practitioners to design improved and efficient state-of-the-art clustering algorithms. • Provide an up-to-date comprehensive review of the different clustering techniques. • Highlight novel and most recent practical application areas of clustering. • Provide a convenient research path for new researchers. • Help experts develop new algorithms for emerging challenges in the research area. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
44. Adapting the CBA algorithm by means of intensity of implication.
- Author
-
Janssens, Davy, Wets, Geert, Brijs, Tom, and Vanhoof, Koen
- Subjects
- *
ALGORITHMS , *INFORMATION science , *COMPUTER science , *ELECTRONIC data processing - Abstract
In recent years, extensive research has been carried out by using association rules to build more accurate classifiers. The idea behind these integrated approaches is to focus on a limited subset of association rules. This paper aims to contribute to this integrated framework by adapting the Classification Based on Associations (CBA) algorithm. CBA was adapted by coupling it with another measurement of the quality of association rules: i.e. intensity of implication. The new algorithm has been implemented and empirically tested on an authentic financial dataset for purposes of bankruptcy prediction. We validated our results with an association ruleset, with C4.5, with original CBA and with CART by statistically comparing its performance via the area under the ROC-curve. The adapted CBA algorithm presented in this paper proved to generate significantly better results than the other classifiers at the 5% level of significance. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
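The validation step above compares classifiers by the area under the ROC curve. That statistic has a direct interpretation that a short sketch makes concrete: it is the probability that a randomly chosen positive example outscores a randomly chosen negative one. This is a generic illustration, not the paper's code, and the scores and labels are made up.

```python
def auc(scores, labels):
    """Area under the ROC curve, computed as the fraction of
    (positive, negative) score pairs ranked correctly (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# A classifier whose second positive ranks below both negatives: AUC = 0.5.
example = auc([0.9, 0.2, 0.8, 0.3], [1, 1, 0, 0])
```

Comparing classifiers by AUC rather than raw accuracy is what lets the authors make a threshold-independent claim about the adapted CBA.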
45. Two structure-preserving-doubling like algorithms for obtaining the positive definite solution to a class of nonlinear matrix equation.
- Author
-
Huang, Na and Ma, Chang-Feng
- Subjects
- *
ALGORITHMS , *NONLINEAR equations , *MATRICES (Mathematics) , *ADDITION (Mathematics) , *COMPUTER science - Abstract
In this paper, we present two structure-preserving-doubling like algorithms for obtaining the positive definite solution of the nonlinear matrix equation X + A^H X̄^{-1} A = Q, where X ∈ C^{n×n} is an unknown matrix and Q ∈ C^{n×n} is a Hermitian positive definite matrix. We prove that the sequences generated by the algorithms converge R-quadratically to the positive definite solution of the considered matrix equation. In addition, we present some numerical results to illustrate the behavior of the considered algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
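The doubling algorithms themselves are not reproduced in the abstract, but the equation X + A^H X̄^{-1} A = Q also admits a naive fixed-point iteration that converges (only linearly) when the norm of A is small relative to Q. The sketch below is an assumption-laden illustration of the equation, not the paper's R-quadratic method; the test matrices are made up.

```python
import numpy as np

def fixed_point_solve(A, Q, tol=1e-12, max_iter=1000):
    """Iterate X_{k+1} = Q - A^H conj(X_k)^{-1} A starting from X_0 = Q.
    A basic linearly convergent scheme, not the paper's doubling algorithm."""
    X = Q.astype(complex)
    for _ in range(max_iter):
        X_new = Q - A.conj().T @ np.linalg.inv(X.conj()) @ A
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(Q):
            return X_new
        X = X_new
    return X

# Small example: Q Hermitian positive definite, A of modest norm.
A = np.array([[0.2 + 0.1j, 0.0], [0.1, 0.3]])
Q = 2.0 * np.eye(2)
X = fixed_point_solve(A, Q)
# Residual of the original equation at the computed X.
residual = np.linalg.norm(X + A.conj().T @ np.linalg.inv(X.conj()) @ A - Q)
```

One can check that the iteration map preserves Hermitian structure, so the limit is Hermitian; positive definiteness holds here because Q dominates the correction term.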
46. Automatic custom instruction identification for application-specific instruction set processors.
- Author
-
Xiao, Chenglong, Casseau, Emmanuel, Wang, Shanshan, and Liu, Wanjun
- Subjects
- *
APPLICATION-specific instruction-set processors , *MICROPROCESSORS , *DATA flow computing , *ALGORITHMS , *COMPUTER science , *APPLICATION software - Abstract
Application-specific instruction set processors (ASIPs) have received increasing attention in recent years. ASIPs trade off flexibility against performance by extending the base instruction set of a general-purpose processor with custom functional units (CFUs). Custom instructions, executed on CFUs, make it possible to improve performance while retaining flexibility for extensible processors. The custom instruction synthesis flow involves two essential issues: custom instruction enumeration (subgraph enumeration) and custom instruction selection (subgraph selection). However, both enumerating all possible custom instructions of a given data-flow graph and selecting the most profitable custom instructions from the enumerated candidates are computationally difficult problems. In this paper, we propose efficient algorithms for custom instruction enumeration and selection. Compared with previously proposed well-known enumeration algorithms, our approach achieves a significant speedup while generating the identical set of all possible custom instructions, or of connected custom instructions only. Experimental results also show that a code size reduction of up to 76% can be achieved for a set of computationally intensive programs, with a speed-up of up to 8.2×. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
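The enumeration subproblem the abstract names — listing vertex subsets of a data-flow graph that induce connected subgraphs — can be stated very compactly by brute force. The sketch below is exponential-time and only illustrates what is being enumerated; it is emphatically not the paper's efficient algorithm, and the tiny graph is made up.

```python
from itertools import combinations

def connected_subgraphs(nodes, edges):
    """Enumerate all vertex subsets inducing a (weakly) connected subgraph.
    Brute force over all subsets: illustration only, exponential in |nodes|."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)  # treat the data-flow graph as undirected for connectivity
    result = []
    for r in range(1, len(nodes) + 1):
        for subset in combinations(nodes, r):
            s = set(subset)
            # Flood fill restricted to the subset.
            seen, stack = {subset[0]}, [subset[0]]
            while stack:
                x = stack.pop()
                for y in (adj[x] & s) - seen:
                    seen.add(y)
                    stack.append(y)
            if seen == s:
                result.append(s)
    return result

# Path a-b-c: every subset is connected except {a, c}, so 6 of 7 qualify.
subs = connected_subgraphs(["a", "b", "c"], [("a", "b"), ("b", "c")])
```

Real enumeration algorithms prune this search using the DFG's structure (e.g. convexity and I/O constraints), which is where the paper's speedup comes from.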
47. Online algorithms for 1-space bounded 2-dimensional bin packing and square packing.
- Author
-
Zhang, Yong, Chin, Francis Y.L., Ting, Hing-Fung, Han, Xin, Poon, Chung Keung, Tsin, Yung H., and Ye, Deshi
- Subjects
- *
ALGORITHMS , *COMPUTER science , *APPLIED mathematics , *COMPUTER software , *MATHEMATICAL programming , *TECHNOLOGY - Abstract
In this paper, we study 1-space bounded 2-dimensional bin packing and square packing. A sequence of rectangular items (square items) arrive one by one, each item must be packed into a square bin of unit size on its arrival without any information about future items. When packing items, 90°-rotation is allowed. 1-space bounded means there is only one “active” bin. If the “active” bin cannot accommodate the coming item, it will be closed and a new bin will be opened. The objective is to minimize the total number of bins used for packing all items in the sequence. Our contributions are as follows: For 1-space bounded 2-dimensional bin packing, we propose an online packing algorithm with a tight competitive ratio of 5.06. A lower bound of 3.17 on the competitive ratio is proven. Moreover, we study 1-space bounded square packing, where each item is a square with side length no more than 1. A 4.3-competitive algorithm is achieved, and a lower bound of 2.94 on the competitive ratio is given. All these bounds surpass the previously best known results. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
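The "one active bin" constraint central to this abstract is easiest to see in its one-dimensional analogue, where the resulting strategy is the classical Next Fit rule: pack into the single open bin, and close it permanently once an item does not fit. This 1D sketch is an illustration of the space-bounded model only, not the paper's 2D or square-packing algorithm.

```python
def next_fit(items, capacity=1.0):
    """1-space bounded packing in one dimension (Next Fit): only one bin
    is ever active; it is closed as soon as the next item does not fit."""
    bins, load = 0, capacity  # sentinel load forces a bin for the first item
    for size in items:
        if load + size > capacity:  # active bin cannot accommodate the item
            bins += 1               # close it and open a fresh bin
            load = 0.0
        load += size
    return bins
```

Because a closed bin can never be reopened, space-bounded algorithms pay a price in competitive ratio: here, for example, alternating items of size 0.6 and 0.1 would each strand 0.3 of capacity per bin, which is exactly the kind of loss the paper's 2D bounds quantify.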
48. A fast algorithm for data collection along a fixed track.
- Author
-
Cheong, O., El Shawi, R., and Gudmundsson, J.
- Subjects
- *
ACQUISITION of data , *ALGORITHMS , *WIRELESS sensor networks , *COMMUNICATION , *APPLIED mathematics , *VORONOI polygons , *COMPUTER science - Abstract
Recent research shows that significant energy savings can be achieved in wireless sensor networks (WSNs) with a mobile base station that collects data from sensor nodes via short-range communication. However, a major performance bottleneck of such WSNs is the significantly increased latency in data collection due to the low movement speed of mobile base stations. In this paper we study the problem of finding a data collection path for a mobile base station moving along a fixed track in a wireless sensor network so as to minimize the latency of data collection. The main contribution is an O(mn log n) expected-time algorithm, where n is the number of sensors in the network and m is the complexity of the fixed track. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
49. Schedules for marketing products with negative externalities.
- Author
-
Cao, Zhigang, Chen, Xujin, and Wang, Changjun
- Subjects
- *
SOCIAL networks , *POLYNOMIAL time algorithms , *DECISION making , *CONSUMERS , *ALGORITHMS , *COMPUTER science , *MARKETING - Abstract
With the fast development of social network services, network marketing of products with externalities has been attracting more and more attention from both academia and business. The extensive study of network marketing has mainly concerned positive externalities. The focus of this paper is on the much less understood counterpart, negative externalities, where a consumer has less incentive to buy a product the more of his social network neighbors already possess it. For a seller marketing such products, it is desirable to have a good schedule that specifies the order in which he approaches consumers. We design polynomial time algorithms that find marketing schedules for products with negative externalities. The goals are two-fold: maximizing the product sale and ensuring regret-free consumer decisions. We show that the maximization is NP-hard. Our algorithms achieve satisfactory performance guarantees, approximating the maximum within constant factors in most cases. Two of these algorithms provide regret-proof schedules, reaching an equilibrium state where no consumer regrets his previous decisions. Our work is the first attempt to address these marketing problems from an algorithmic point of view. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
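Why the approach order matters under negative externalities can be seen in a toy simulation. Everything below is an assumption for illustration: the linear utility model (a consumer buys iff value minus a fixed penalty per product-owning neighbor covers the price), the graph, and all numbers. The paper's actual model and algorithms are not reproduced here.

```python
def sales(order, values, edges, price, penalty):
    """Approach consumers in `order`; consumer i buys iff
    values[i] - penalty * (# neighbors already owning) >= price.
    Toy linear-disutility model, assumed for illustration only."""
    neigh = {i: set() for i in values}
    for u, v in edges:
        neigh[u].add(v)
        neigh[v].add(u)
    owners = set()
    for i in order:
        if values[i] - penalty * len(neigh[i] & owners) >= price:
            owners.add(i)
    return len(owners)

# Path a-b-c, identical consumers: approaching the endpoints first sells 2,
# approaching the middle consumer first sells only 1.
vals = {"a": 1.0, "b": 1.0, "c": 1.0}
path = [("a", "b"), ("b", "c")]
good = sales(["a", "c", "b"], vals, path, price=1.0, penalty=0.5)
bad = sales(["b", "a", "c"], vals, path, price=1.0, penalty=0.5)
```

In this toy instance a schedule that visits an independent set first avoids triggering the externality, which mirrors the intuition behind scheduling consumers to maximize sales.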
50. The string guessing problem as a method to prove lower bounds on the advice complexity.
- Author
-
Böckenhauer, Hans-Joachim, Hromkovič, Juraj, Komm, Dennis, Krug, Sacha, Smula, Jasmin, and Sprock, Andreas
- Subjects
- *
ALGORITHMS , *ORACLE software , *COMPUTER science , *INFORMATION science , *SET theory - Abstract
The advice complexity of an online problem describes the additional information both necessary and sufficient for online algorithms to compute solutions of a certain quality. In this model, an oracle inspects the input before it is processed by an online algorithm. Depending on the input string, the oracle prepares an advice bit string that is accessed sequentially by the algorithm. The number of advice bits that are read to achieve some specific solution quality can then serve as a fine-grained complexity measure. The main contribution of this paper is to study a powerful method for proving lower bounds on the number of advice bits necessary. To this end, we consider the string guessing problem as a generic online problem and show a lower bound on the number of advice bits needed to obtain a good solution. We use special reductions from string guessing to improve the best known lower bound for the online set cover problem and to give a lower bound on the advice complexity of the online maximum clique problem. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF