314 results
Search Results
2. Special Issue on Selected Papers from the 11th International Conference and Workshops on Algorithms and Computation (WALCOM 2017).
- Author
- Rahman, Md. Saidur, Poon, Sheung-Hung, and Yen, Hsu-Chun
- Subjects
- *CONFERENCES & conventions, *ALGORITHMS, *COMPUTATIONAL geometry, *POLYGONS
- Published
- 2019
- Full Text
- View/download PDF
3. On the probe problem for (r,ℓ)-well-coveredness: Algorithms and complexity.
- Author
- Faria, Luerbio and Souza, Uéverton S.
- Subjects
- *INDEPENDENT sets, *CHARTS, diagrams, etc., *ALGORITHMS
- Abstract
• C-probe graphs are graphs having stable sets N for which edges can be added to obtain a supergraph in C. • A graph G is an (r, ℓ)-graph if V(G) can be partitioned into r independent sets and ℓ cliques. • A graph G is well-covered if every maximal independent set is also maximum. • This paper studies the complexity of recognizing probe graphs for the class of (r, ℓ)-well-covered graphs. Let C be a class of graphs. A graph G = (V, E) is C-probe if V(G) can be partitioned into two sets: non-probes N and probes P, where N is an independent set and new edges may be added between some non-probe vertices such that the resulting graph is in the class C. In this case, we say that (N, P) is a C-probe partition of G. In the Unpartitioned Probe problem for a graph class C we are given a graph G and asked whether G has a C-probe partition; i.e., the problem consists of recognizing the class of C-probe graphs. A graph G = (V, E) is an (r, ℓ)-graph when V can be partitioned into (S_1, S_2, ..., S_r, K_1, K_2, ..., K_ℓ) such that S_1, S_2, ..., S_r are independent sets and K_1, K_2, ..., K_ℓ are cliques. A graph G is well-covered if every maximal independent set is also maximum, and it is (r, ℓ)-well-covered if it is both well-covered and an (r, ℓ)-graph. In this paper, we study the complexity of the Unpartitioned Probe problem for the class of (r, ℓ)-well-covered graphs. We classify all but the (2, 0) and (1, 2) cases. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
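The definition in the abstract above (every maximal independent set is also maximum) can be checked directly on small graphs. The following is a brute-force sketch of that definition only, not the authors' probe-recognition algorithms; `is_well_covered` and the helper names are ours:

```python
from itertools import combinations

def is_independent(edges, subset):
    """True if no edge joins two vertices of `subset`."""
    return not any(u in subset and v in subset for u, v in edges)

def maximal_independent_sets(vertices, edges):
    """Enumerate all maximal independent sets by brute force (exponential)."""
    sets = []
    for r in range(len(vertices) + 1):
        for cand in combinations(vertices, r):
            s = set(cand)
            if not is_independent(edges, s):
                continue
            # maximal: no vertex outside s can be added
            if all(not is_independent(edges, s | {w})
                   for w in vertices if w not in s):
                sets.append(s)
    return sets

def is_well_covered(vertices, edges):
    """Every maximal independent set is also maximum."""
    sizes = {len(s) for s in maximal_independent_sets(vertices, edges)}
    return len(sizes) == 1
```

For example, the 4-cycle is well-covered (both maximal independent sets have size 2), while the path on three vertices is not (it has maximal independent sets of sizes 1 and 2).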
4. New algorithms for fair k-center problem with outliers and capacity constraints.
- Author
- Wu, Xiaoliang, Feng, Qilong, Xu, Jinhui, and Wang, Jianxin
- Subjects
- *APPROXIMATION algorithms, *POLYNOMIAL approximation, *METRIC spaces, *POLYNOMIAL time algorithms, *ALGORITHMS
- Abstract
The fair k-center problem has received considerable attention recently. In the fair k-center problem, we are given a set X of points in a metric space and a parameter k ∈ Z⁺, where the points in X are divided into several groups and each point is assigned a color to denote which group it is in. The goal is to partition X into k clusters such that the number of cluster centers with each color equals a given value, and the k-center objective is minimized. In this paper, we consider the fair k-center problem with outliers and with capacity constraints, denoted the fair k-center with outliers (FkCO) problem and the capacitated fair k-center (CFkC) problem, respectively. The outlier constraint allows up to z outliers to be discarded when computing the objective function, while the capacity constraint requires that each cluster have size no more than L. In this paper, we design a fixed-parameter tractable (FPT) approximation algorithm and a polynomial-time approximation algorithm for these two problems. In particular, our algorithms give (1 + ϵ)-approximations in FPT time for the FkCO and CFkC problems in doubling metric spaces. Moreover, we also propose a polynomial-time 3-approximation algorithm for the FkCO problem under some reasonable assumptions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
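The abstract above concerns fair and capacitated variants; for orientation, here is the classical farthest-first traversal of Gonzalez, a 2-approximation for the plain k-center objective those variants build on. This is a standard baseline sketch, not the paper's FPT or 3-approximation algorithms:

```python
def kcenter_greedy(points, k, dist):
    """Gonzalez's farthest-first traversal: a classical 2-approximation
    for the (unconstrained) k-center problem in any metric space."""
    centers = [points[0]]                 # arbitrary first center
    while len(centers) < k:
        # pick the point farthest from its nearest chosen center
        far = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(far)
    radius = max(min(dist(p, c) for c in centers) for p in points)
    return centers, radius
```

On the line metric {0, 1, 10, 11} with k = 2 this picks one center per cluster and achieves radius 1.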
5. Formal verification of parallel prefix sum and stream compaction algorithms in CUDA.
- Author
- Safari, Mohsen and Huisman, Marieke
- Subjects
- *COMPACTING, *PARALLEL algorithms, *SUFFIXES & prefixes (Grammar), *HIGH performance computing, *ROCK glaciers, *ALGORITHMS
- Abstract
• We add CUDA verification support to the VerCors verifier. • We verify data race-freedom and functional correctness of CUDA implementations of two parallel prefix sum algorithms. • We verify data race-freedom and functional correctness of a CUDA implementation of a stream compaction algorithm. GPUs are an important part of any High Performance Computing (HPC) architecture. To make optimal use of the specifics of a GPU architecture, we need programming models that naturally support the parallel execution model of a GPU. CUDA and OpenCL are two widely used examples of such programming models. Furthermore, we also need to redesign algorithms such that they adhere to this parallel programming model, and we need to be able to prove the correctness of these redesigned algorithms. In this paper we study two examples of such parallelized algorithms, and we discuss how to prove their correctness (data race freedom and (partial) functional correctness) using the VerCors program verifier. First, we prove the correctness of two parallel algorithms solving the prefix sum problem. Second, we show how such a prefix sum algorithm is used as a basic block in a stream compaction algorithm, and we prove correctness of this stream compaction algorithm, taking advantage of the earlier correctness proof for the prefix sum algorithm. The proofs as described in this paper are developed over the CUDA implementations of these algorithms. In earlier work, we had already shown the correctness of a higher-level version of the algorithm. This paper discusses how we add support for reasoning about CUDA programs in VerCors, and it then shows how we can redo the verification at the level of the CUDA code. We also discuss some practical challenges that we had to address to complete the actual CUDA-level verifications. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
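The two phases verified in the paper above (an exclusive prefix sum and the stream compaction built on it) can be modeled sequentially. The sketch below mirrors the classic up-sweep/down-sweep (Blelloch) scan pattern that CUDA implementations typically follow; it is an illustration of the data flow, not the verified CUDA code, and assumes a power-of-two input length:

```python
def exclusive_scan(a):
    """Work-efficient exclusive prefix sum (sequential model of the
    parallel up-sweep/down-sweep; length must be a power of two)."""
    x = list(a)
    n = len(x)
    d = 1
    while d < n:                      # up-sweep: build partial sums
        for i in range(2 * d - 1, n, 2 * d):
            x[i] += x[i - d]
        d *= 2
    x[n - 1] = 0                      # clear the root
    d = n // 2
    while d >= 1:                     # down-sweep: distribute prefixes
        for i in range(2 * d - 1, n, 2 * d):
            x[i - d], x[i] = x[i], x[i] + x[i - d]
        d //= 2
    return x

def stream_compaction(values, keep):
    """Keep values[i] with keep[i] == 1, using the exclusive scan of the
    flags as output indices."""
    idx = exclusive_scan(keep)
    out = [None] * (idx[-1] + keep[-1])
    for i, flag in enumerate(keep):
        if flag:
            out[idx[i]] = values[i]
    return out
```

The scan of the 0/1 flags gives each surviving element its output slot, which is exactly why the compaction proof can reuse the prefix sum proof.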
6. Order based algorithms for the core maintenance problem on edge-weighted graphs.
- Author
- Zhang, Feiteng, Liu, Bin, Liu, Zhenming, and Fang, Qizhi
- Subjects
- *ALGORITHMS, *INTEGERS, *WEIGHTED graphs
- Abstract
The cohesive subgraph k-core is a maximal connected subgraph of a simple graph whose minimum degree satisfies δ ≥ k, where k ≥ 0 is an integer. Define the core number of a vertex w as the maximum k such that w is contained in a k-core. Our main concerns are the core decomposition problem, i.e., computing the core numbers of all vertices in a static graph, and the core maintenance problem, i.e., updating the core numbers in a dynamic graph. Although core numbers can be recomputed from scratch by a core decomposition algorithm, only a small fraction of vertices change their core numbers after a graph update, so it is preferable to update core numbers locally to reduce the cost. In this paper, we study the core maintenance problem on edge-weighted graphs using the vertex sequence k-order, which orders vertices in the order in which the core decomposition algorithm removes them. We design core maintenance algorithms for inserting one edge at a time, together with a method for updating the k-order, which evidently reduce the search range and the time cost. For the removal case, we use the existing subcore algorithm for core maintenance and modify it with our k-order update method. Finally, we conduct extensive experiments to evaluate the effectiveness and efficiency of our algorithms, which show that the order-based algorithm outperforms existing algorithms for core maintenance on edge-weighted graphs. • We solve the core maintenance problem on edge-weighted graphs with a tool named weighted k-order. • The weighted k-order, a vertex sequence, is defined on edge-weighted graphs for the first time. • We give the properties of vertices whose core numbers increase in the k-order for the insertion case on edge-weighted graphs. • We give an order-based core maintenance algorithm for the insertion case on edge-weighted graphs, which performs best. • We give methods to update the k-order after inserting or removing one edge for the next round of core maintenance.
[ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
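For background, the static core decomposition that the abstract above contrasts with can be sketched as the classical peeling procedure on an unweighted graph: repeatedly remove a minimum-degree vertex. This is the baseline, not the paper's order-based maintenance algorithms:

```python
def core_numbers(adj):
    """Classical peeling algorithm for core decomposition:
    repeatedly remove a vertex of minimum remaining degree; the core
    number of a vertex is the peak minimum degree seen at its removal."""
    deg = {v: len(ns) for v, ns in adj.items()}
    removed = set()
    core = {}
    k = 0
    while len(removed) < len(adj):
        v = min((u for u in adj if u not in removed), key=lambda u: deg[u])
        k = max(k, deg[v])            # minimum degree can only rise the core level
        core[v] = k
        removed.add(v)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
    return core
```

The order in which this peeling removes vertices is precisely the k-order that the paper's maintenance algorithms keep updated.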
7. Compact suffix automata representations for searching long patterns.
- Author
- Faro, Simone and Scafiti, Stefano
- Subjects
- *SUFFIXES & prefixes (Grammar), *ROBOTS, *ALGORITHMS
- Abstract
Automata-based algorithms have always dominated the history of string matching, thanks to their intrinsic nature as recognizers. Among them, those based on the suffix automaton deserve special mention, because their use has always led to elegant and very efficient solutions in practice. In this paper, we present two novel string matching algorithms based on the suffix automaton which are particularly suitable for very long strings. The first algorithm simulates the suffix automaton through the well-known bit-parallelism approach, while the second makes use of an alternative non-standard simulation. We mainly focus on the second solution, which turns out to be more flexible in practice, and present some extensions of it. Experimental results suggest that both our solutions are very efficient in practical cases, scaling much better as the length of the pattern increases. Moreover, our second approach is also competitive with the most effective solutions available for the exact string matching problem. • We present two novel suffix automata based algorithms specifically designed for very long strings. • We describe a generic non-standard approach for simulating the suffix automaton of a string. • Our first solution is a bit-parallel based extension of the BNDM algorithm which can be encoded using few bits. • Our second solution is related to the occurrence of unique factors in the pattern. • Experimental results suggest that both our solutions are very efficient in practical cases. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
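The bit-parallel simulation of the suffix automaton mentioned in the abstract above is the BNDM pattern: scan each text window right to left while a bitmask tracks which pattern factors are still alive. A minimal sketch of classical BNDM (not the paper's compact representations):

```python
def bndm(pattern, text):
    """Backward Nondeterministic DAWG Matching: bit-parallel simulation
    of the suffix automaton of the reversed pattern."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    mask = (1 << m) - 1
    hi = 1 << (m - 1)
    B = {}
    for i, c in enumerate(pattern):          # bit m-1-i marks position i
        B[c] = B.get(c, 0) | (1 << (m - 1 - i))
    occ = []
    pos = 0
    while pos <= n - m:
        j, last, D = m - 1, m, mask
        while True:
            D &= B.get(text[pos + j], 0)
            if D & hi:                       # a pattern prefix ends here
                if j > 0:
                    last = j                 # remember longest prefix seen
                else:
                    occ.append(pos)          # whole pattern matched
                    break
            j -= 1
            D = (D << 1) & mask
            if D == 0 or j < 0:
                break
        pos += last                          # safe shift past missed prefixes
    return occ
```

The mask `D` holds one bit per still-viable pattern factor; the window can be shifted to the last recognized prefix, which is what makes the method sublinear on average.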
8. Faster parameterized algorithms for two vertex deletion problems.
- Author
- Tsur, Dekel
- Subjects
- *ALGORITHMS, *GRAPH algorithms, *INTEGERS
- Abstract
In the l-Path Vertex Cover problem (resp., the l-Component Order Connectivity problem) the input is an undirected graph G and an integer k. The goal is to decide whether there is a set of vertices of size at most k whose deletion from G results in a graph that does not contain a path with l vertices (resp., does not contain a connected component with at least l vertices). In this paper we give parameterized algorithms for l-Path Vertex Cover for l = 5, 6, 7, whose running times are O*(3.945^k), O*(4.947^k), and O*(5.951^k), respectively. We also give an algorithm for l-Component Order Connectivity whose running time is O*((l − 1 − ϵ_l)^k) for every l ≥ 4, where ϵ_l > 0 for every l. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
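For context, the classical bounded-search-tree idea behind parameterized l-Path Vertex Cover runs in O*(l^k) time: any path on l vertices must lose one of its vertices, so branch on all l choices. The paper above improves on this bound for l = 5, 6, 7; the sketch below is only the classical branching, with helper names of our own choosing:

```python
def find_l_path(adj, l):
    """Return some simple path on l vertices, or None (DFS; fine for
    small graphs, though finding long paths is hard in general)."""
    def dfs(path):
        if len(path) == l:
            return list(path)
        for w in adj[path[-1]]:
            if w not in path:
                res = dfs(path + [w])
                if res:
                    return res
        return None
    for v in adj:
        res = dfs([v])
        if res:
            return res
    return None

def l_path_vertex_cover(adj, l, k):
    """Classical O*(l^k) branching: while an l-vertex path remains,
    one of its vertices must be deleted; try each of the l choices."""
    path = find_l_path(adj, l)
    if path is None:
        return True                  # no l-path left: budget sufficed
    if k == 0:
        return False
    for v in path:
        rest = {u: [w for w in ns if w != v]
                for u, ns in adj.items() if u != v}
        if l_path_vertex_cover(rest, l, k - 1):
            return True
    return False
```

For l = 2 this degenerates to the classical O*(2^k) branching for Vertex Cover.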
9. How majority-vote crossover and estimation-of-distribution algorithms cope with fitness valleys.
- Author
- Witt, Carsten
- Subjects
- *MATHEMATICAL analysis, *ALGORITHMS, *EVOLUTIONARY computation, *VALLEYS, *EVOLUTIONARY algorithms
- Abstract
The benefits of using crossover to cross fitness gaps have been studied extensively in evolutionary computation. Recent runtime results show that majority-vote crossover is particularly efficient at optimizing the well-known Jump benchmark function, which includes a fitness gap next to the global optimum. Estimation-of-distribution algorithms (EDAs), which use an implicit crossover, are also much more efficient on Jump than typical mutation-based algorithms. However, the allowed gap size for polynomial runtimes with EDAs is at most logarithmic in the problem dimension n. In this paper, we investigate variants of the Jump function where the gap is shifted and appears in the middle of the typical search trajectory. Such gaps can still be overcome efficiently in time O(n log n) by majority-vote crossover and an estimation-of-distribution algorithm, even for gap sizes of almost n. However, if the global optimum is located in the gap instead of at the usual all-ones string, majority-vote crossover would nevertheless approach the all-ones string and be highly inefficient. In sharp contrast, an EDA can still find such a shifted optimum efficiently. Thanks to a general property called fair sampling, the EDA will with high probability sample from almost every fitness level of the function, including levels in the gap, and sample the global optimum even though the overall search trajectory points towards the all-ones string. Finally, we derive limits on the gap size allowing efficient runtimes for the EDA. • We study two algorithms using majority-vote crossover and an estimation-of-distribution algorithm on modified jump functions. • We derive theorems on the algorithms' runtime using rigorous mathematical analyses. • All three algorithms can overcome the fitness gap of the jump functions efficiently for moderate gap sizes. • All but the estimation-of-distribution algorithm usually fail to find an optimum located within the gap. • The estimation-of-distribution algorithm is efficient since it samples fairly on all fitness levels towards the optimum. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
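The Jump benchmark and majority-vote crossover from the abstract above are easy to state concretely. A minimal sketch (the standard Jump_k definition and a bitwise majority of three parents; not the paper's analyzed algorithms or parameter settings):

```python
def jump(x, gap):
    """Standard Jump_k benchmark on a 0/1 list: OneMax shifted up by
    `gap`, except inside the gap of size `gap` just below the all-ones
    optimum, where fitness decreases with the number of ones."""
    n, ones = len(x), sum(x)
    if ones == n or ones <= n - gap:
        return gap + ones
    return n - ones                   # inside the gap

def majority_vote(a, b, c):
    """Bitwise majority of three parents; with sufficiently diverse
    parents this can jump over the gap in a single step."""
    return [1 if xa + xb + xc >= 2 else 0 for xa, xb, xc in zip(a, b, c)]
```

Note how three parents that each miss a different bit still vote the all-ones string, which is the intuition behind the crossover's efficiency on Jump.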
10. Execution trace sets for real computation.
- Author
- Thompson, Declan
- Subjects
- *ALGORITHMS, *MACHINERY
- Abstract
• Analyses general algorithms as sets of sequences (tasks) over a domain. • Defines an equivalence relation on sets of sequences to recover algorithmic tasks. • Characterises the execution trace sets of Blum-Shub-Smale machines. • Compares trace-based approaches to algorithms to iterator-based accounts. Traditional models of computation and algorithms take a constructive approach, essentially considering a transition function over a given state space. Yet algorithms may also be considered in terms of the observed behaviour they generate. This paper marries these perspectives with a domain-independent, trace based account of computational and algorithmic processes. We focus on the model of real computation due to Blum, Shub and Smale [3] and provide a precise characterisation of the execution trace sets generated by their machines. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. Tight competitive analyses of online car-sharing problems.
- Author
- Liang, Ya-Chun, Lai, Kuan-Yun, Chen, Ho-Lin, Iwama, Kazuo, and Liao, Chung-Shou
- Subjects
- *CAR sharing, *ALGORITHMS, *AUTOMOBILES
- Abstract
The online car-sharing problem finds many real-world applications. The problem, proposed by Luo, Erlebach and Xu in 2018, mainly focuses on an online model in which there are two locations, 0 and 1, and k total cars. Each request, which specifies its pick-up time and pick-up location (among 0 and 1; the other is the drop-off location), is released in each stage a fixed amount of time before its specified start (i.e. pick-up) time. The time between the booking (i.e. release) time and the start time is enough to move empty cars between 0 and 1 for relocation if they are not used in that stage. The model, called kS2L-F, assumes that requests in each stage arrive sequentially regardless of the same booking time and that the decision (accept or reject) must be made immediately. The goal is to accept as many requests as possible. In spite of only two locations, the analysis does not seem easy, and the (tight) competitive ratio (CR) is only known to be 2 for k = 2 and 1.5 for a restricted value of k, i.e., a multiple of three. In this paper, we remove all the holes of unknown CRs; namely, we prove that the CR is 2k/(k + ⌊k/3⌋) for all k ≥ 2. Furthermore, if the algorithm can delay its decision until all requests have come in each stage, the CR is improved to roughly 4/3. We can take this advantage even further; precisely, we can achieve a CR of (2 + R)/3 if the number of requests in each stage is at most Rk, 1 ≤ R ≤ 2, where we do not have to know the value of R in advance. Finally, we demonstrate that randomization also helps to get (slightly) better CRs, and prove some lower bounds to show the tightness. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
12. MUL-tree pruning for consistency and optimal reconciliation - complexity and algorithms.
- Author
- Gascon, Mathieu, Dondi, Riccardo, and El-Mabrouk, Nadia
- Subjects
- *RECONCILIATION, *TREE pruning, *GENETIC techniques, *ALGORITHMS, *GENE families, *PHYLOGENY
- Abstract
A multi-labeled tree (or MUL-tree) is a rooted tree in which every leaf is labeled by an element from some set X, but in which more than one leaf may be labeled by the same element of X. MUL-trees have applications in many fields. In phylogenetics, they can represent the evolution of gene families, where genes are represented by the species they belong to, the non-uniqueness of leaf labels coming from the fact that a given genome may contain many paralogous genes (genes related through duplication). In this paper, we consider two problems related to the leaf-pruning (leaf removal) of MUL-trees leading to single-labeled trees. First, given a set of MUL-trees, the MUL-tree Set Pruning for Consistency (MULSETPC) Problem asks for a pruning of each tree leading to a set of consistent trees, i.e. a collection of label-isomorphic single-labeled trees. Second, processing one tree at a time, the MUL-tree Pruning for Reconciliation (MULPR) Problem asks for a pruning that minimizes a "reconciliation" cost, i.e. an evolutionary cost of the gene family through duplication, given a known species tree. We show that MULSETPC is NP-hard, that MULPR is W[2]-hard when parameterized by the duplication cost and NP-hard when the number of occurrences of each label in a MUL-tree is bounded by two, and that the optimization version of MULPR is hard to approximate within factor (1 − o(1)) ln |X|. We then develop a polynomial-time heuristic for MULPR and show its accuracy by comparing it to a brute-force exact method on a set of gene trees from the Ensembl Genome Browser. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
13. Streaming algorithms for monotone non-submodular function maximization under a knapsack constraint on the integer lattice.
- Author
- Tan, Jingjing, Wang, Fengmin, Ye, Weina, Zhang, Xiaoqing, and Zhou, Yang
- Subjects
- *INTEGERS, *MAXIMAL functions, *BACKPACKS, *ALGORITHMS, *EXPECTATION-maximization algorithms
- Abstract
The study of non-submodular maximization on the integer lattice is an important extension of submodular optimization. In this paper, streaming algorithms for maximizing non-negative monotone non-submodular functions under a knapsack constraint on the integer lattice are considered. We first design a two-pass StreamingKnapsack algorithm for this problem, combined with BinarySearch as a subroutine. By introducing the DR ratio γ and the weak DR ratio γ_w of the non-submodular objective function, we obtain an approximation ratio of min{γ²(1 − ε)/(2γ + 1), 1 − 1/(γ_w·2^γ) − ε}, total memory complexity O(K log K/ε), and total query complexity O(log K · log(K/ε²)/ε) per element. Then, we design a one-pass streaming algorithm by dynamically updating the maximal function value among unit vectors along with the currently arriving element. Finally, in order to decrease the memory complexity, we design an improved StreamingKnapsack algorithm and reduce the memory complexity to O(K/ε²). • A two-pass streaming algorithm is proposed for maximizing monotone non-submodular functions under a knapsack constraint on the integer lattice. • By introducing the DR ratio γ (weak DR ratio γ_w), we obtain the approximation ratio min{γ²(1 − ε)/(2γ + 1), 1 − 1/(γ_w·2^γ) − ε} with memory O(K log K/ε). • A one-pass streaming algorithm is designed by dynamically updating the maximal function value among unit vectors. • We design an improved StreamingKnapsack algorithm and reduce the memory complexity to O(K/ε²). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
14. Loosely-stabilizing maximal independent set algorithms with unreliable communications.
- Author
- Dong, Rongcheng, Sudo, Yuichi, Izumi, Taisuke, and Masuzawa, Toshimitsu
- Subjects
- *INDEPENDENT sets, *ALGORITHMS
- Abstract
Self-stabilization is a promising paradigm for designing highly adaptive distributed systems. However, it cannot be realized when the communication between processes is unreliable with some constant probability. To circumvent this impossibility, this paper adopts the concept of loose-stabilization, a practical alternative to self-stabilization, for the first time in this setting, and proposes three systematic approaches to realize loose-stabilization in the atomic-state model with probabilistically erroneous communications, namely the redundant-state approach, the step-up approach, and the repetition approach. Further, we apply these approaches to design three corresponding loosely-stabilizing algorithms for the maximal independent set problem. • Self-stabilizing solutions are precluded for most non-trivial problems in probabilistic error models. • Loose-stabilization is a relaxed variant of self-stabilization and practically equivalent to it. • Loose-stabilization can circumvent the impossibilities of self-stabilization in probabilistic error models. • We adopt the concept of loose-stabilization in the atomic-state model for the first time. • We give three loosely-stabilizing algorithms for the maximal independent set problem. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
15. Smallest number of vertices in a 2-arc-strong digraph without good pairs.
- Author
- Gu, Ran, Gutin, Gregory, Li, Shasha, Shi, Yongtang, and Taoqiu, Zhenyu
- Subjects
- *PROBLEM solving, *ALGORITHMS
- Abstract
Branchings play an important role in digraph theory and algorithms. In particular, a chapter in the monograph of Bang-Jensen and Gutin, Digraphs: Theory, Algorithms and Applications, Ed. 2, 2009, is wholly devoted to branchings. The well-known Edmonds Branching Theorem provides a characterization for the existence of k arc-disjoint out-branchings rooted at the same vertex. A short proof of the theorem by Lovász (1976) leads to a polynomial-time algorithm for finding such out-branchings. A natural related problem is to characterize digraphs having an out-branching and an in-branching which are arc-disjoint. Such a pair of branchings is called a good pair. Bang-Jensen, Bessy, Havet and Yeo (2022) pointed out that it is NP-complete to decide if a given digraph has a good pair. They also showed that every digraph of independence number at most 2 and arc-connectivity at least 2 has a good pair, which settled a conjecture of Thomassen for digraphs of independence number 2. They then asked for the smallest number n_ngp of vertices in a 2-arc-strong digraph which has no good pair, and proved that 7 ≤ n_ngp ≤ 10. In this paper, we prove that n_ngp = 10, which solves the open problem. • The study of good pairs is related to Tutte's Packing Theorem and the Edmonds Branching Theorem. • We answer a problem of Bang-Jensen, Bessy, Havet and Yeo about the minimum order of 2-arc-strong digraphs without good pairs. • We study the relationship between Hamiltonian dipaths and good pairs. • We generalize some results of Bang-Jensen, Bessy, Havet and Yeo on good pairs. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
16. Algorithms for diameters of unicycle graphs and diameter-optimally augmenting trees.
- Author
- Wang, Haitao and Zhao, Yiming
- Subjects
- *DIAMETER, *PROBLEM solving, *ALGORITHMS, *TREES, *SAWLOGS
- Abstract
We consider the problem of computing the diameter of a unicycle graph (i.e., a graph with a unique cycle). We present an O(n) time algorithm for the problem, where n is the number of vertices of the graph. This improves the previous best O(n log n) time solution [Oh and Ahn, ISAAC 2016]. Using this algorithm as a subroutine, we solve the problem of adding a shortcut to a tree so that the diameter of the new graph (which is a unicycle graph) is minimized; our algorithm takes O(n² log n) time and O(n) space. The previous best algorithms solve the problem in O(n² log³ n) time and O(n) space [Oh and Ahn, ISAAC 2016], or in O(n²) time and O(n²) space [Bilò, ISAAC 2018]. • The paper presents a linear time algorithm for computing the diameter of unicycle graphs. • The paper gives a new algorithm which finds a new edge to add to a tree so that the diameter of the new graph is minimized. • The algorithms rely on certain domination relations among edges of the graphs. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
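As a correctness baseline for the linear-time algorithm described above, the diameter of any small unweighted graph can be computed naively by BFS from every vertex; on a unicycle graph (n vertices, n edges) this costs O(n²):

```python
from collections import deque

def diameter_bfs(adj):
    """Naive diameter: maximum eccentricity over all vertices,
    each eccentricity computed by one BFS. O(n^2) on unicycle graphs."""
    def ecc(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return max(dist.values())
    return max(ecc(v) for v in adj)
```

This is exactly the bound the paper's O(n) algorithm improves upon (via domination relations among edges), but it is handy for checking small instances.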
17. The Steiner k-eccentricity on trees.
- Author
- Li, Xingfu, Yu, Guihai, Klavžar, Sandi, Hu, Jie, and Li, Bo
- Subjects
- *TREES, *ALGORITHMS, *GRAPH algorithms
- Abstract
We study the Steiner k-eccentricity on trees, generalizing the Steiner 3-eccentricity studied in the paper [On the average Steiner 3-eccentricity of trees, arXiv:2005.10319]. We obtain much stronger properties for the Steiner k-ecc tree than those in the previous paper. Based on these, a linear-time algorithm is devised to calculate the Steiner k-eccentricity of a vertex in a tree. On the other hand, lower and upper bounds on the average Steiner k-eccentricity index of a tree of order n are established based on a novel technique which is quite different from, and also much easier to follow than, the earlier one. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
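Reading the Steiner k-eccentricity of a vertex v as the maximum Steiner distance over all k-vertex sets containing v (our reading of the standard definition; the abstract above does not spell it out), a brute-force sketch on trees is straightforward, since the Steiner distance of a set in a tree is just the edge count of the union of pairwise paths. This is exponential in k, unlike the paper's linear-time algorithm:

```python
from collections import deque
from itertools import combinations

def tree_path(adj, a, b):
    """Vertices on the unique a-b path in a tree (via BFS parents)."""
    parent = {a: None}
    q = deque([a])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                q.append(w)
    path = [b]
    while path[-1] != a:
        path.append(parent[path[-1]])
    return path

def steiner_distance(adj, S):
    """Edges of the minimal subtree spanning S: union the pairwise
    paths and count |vertices| - 1."""
    verts = set(S)
    for a, b in combinations(S, 2):
        verts.update(tree_path(adj, a, b))
    return len(verts) - 1

def steiner_k_eccentricity(adj, v, k):
    """Brute force: maximum Steiner distance over all k-sets containing v."""
    others = [u for u in adj if u != v]
    return max(steiner_distance(adj, {v, *rest})
               for rest in combinations(others, k - 1))
```

For k = 2 this reduces to the ordinary eccentricity of v.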
18. Set-constrained delivery broadcast: A communication abstraction for read/write implementable distributed objects.
- Author
- Imbs, Damien, Mostéfaoui, Achour, Perrin, Matthieu, and Raynal, Michel
- Subjects
- *ALGORITHMS, *BROADCASTING industry
- Abstract
• New fault-tolerant communication abstraction. • Simple implementations of read/write implementable objects. • Efficient implementations of snapshots and lattice agreement. This paper introduces a new communication abstraction, called Set-Constrained Delivery Broadcast (SCD-broadcast), whose aim is to provide its users with an appropriate abstraction level when they have to implement objects or distributed tasks in an asynchronous message-passing system prone to process crash failures. This abstraction allows each process to broadcast messages and deliver a sequence of sets of messages in such a way that, if a process delivers a set of messages including a message m and later delivers a set of messages including a message m′, then no process delivers first a set of messages including m′ and later a set of messages including m. After presenting an algorithm implementing SCD-broadcast, the paper investigates its programming power and its computability limits. On the "power" side it presents SCD-broadcast-based algorithms, both simple and efficient, that build objects (such as snapshots and conflict-free replicated data types) and distributed tasks. On the "computability limits" side it shows that SCD-broadcast and read/write registers are computationally equivalent. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
19. Approximation algorithms for fuzzy C-means problem based on seeding method.
- Author
- Liu, Qian, Liu, Jianxin, Li, Min, and Zhou, Yang
- Subjects
- *APPROXIMATION algorithms, *ALGORITHMS, *FUZZY algorithms, *SEEDS
- Abstract
• A new seeding algorithm is proposed for the fuzzy C-means problem. • Theoretical guarantees are given for both the new seeding algorithm and the fuzzy C-means++ algorithm. • Numerical experiments show the effectiveness of the new algorithm. As an important soft clustering model, the fuzzy C-means method is widely applied in many fields. In this method, instead of the hard assignment of points to clusters used in the classical k-means method, every sample point is endowed with a degree of membership to each center, depicting a fuzzy clustering. In this paper, we show that the fuzzy C-means++ algorithm, which introduces the k-means++ algorithm as a seeding strategy, gives a solution whose approximation guarantee is O(k² ln k). A novel seeding algorithm is then designed based on the contribution of the fuzzy potential function, which improves the approximation ratio to O(k ln k). Preliminary numerical experiments are presented to support the theoretical results of this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
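The k-means++ seeding strategy that fuzzy C-means++ adopts samples each new center with probability proportional to the squared distance to the nearest center chosen so far (D² sampling). A minimal sketch of that seeding step only, not the paper's new potential-based seeding; function and parameter names are ours:

```python
import random

def dsquared_seeding(points, k, dist, seed=0):
    """k-means++-style seeding: the first center is uniform, each
    subsequent center is drawn with probability proportional to the
    squared distance to its nearest already-chosen center."""
    rng = random.Random(seed)
    centers = [points[rng.randrange(len(points))]]
    while len(centers) < k:
        weights = [min(dist(p, c) for c in centers) ** 2 for p in points]
        if sum(weights) == 0:          # every point coincides with a center
            centers.append(points[0])
            continue
        centers.append(rng.choices(points, weights=weights)[0])
    return centers
```

Because far-away points get quadratically larger weight, well-separated clusters each receive a center with high probability, which is what the O(k² ln k) and O(k ln k) guarantees build on.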
20. Incidence coloring of Mycielskians with fast algorithm.
- Author
- Bi, Huimin and Zhang, Xin
- Subjects
- *ALGORITHMS, *GRAPH coloring, *ASSIGNMENT problems (Programming), *COLORING matter, *COLORS
- Abstract
An incidence of a graph G is a vertex-edge pair (v, e) such that the vertex v is incident with the edge e. A proper incidence k-coloring of a graph is a coloring of its incidences with k colors so that two incidences (u, e) and (w, f) receive distinct colors whenever u = w, or e = f, or uw ∈ {e, f}. In this paper, we present an idea for using incidence coloring to model a kind of multi-frequency assignment problem, in which each transceiver can be simultaneously in both sending and receiving modes, and then establish some theoretical and algorithmic aspects of incidence coloring. Specifically, we conjecture that if G is the Mycielskian of some graph then it has a proper incidence (Δ(G) + 2)-coloring. Our conjecture is motivated by the "(Δ + 2) conjecture" of Brualdi and Quinn Massey from 1993, which states that every graph G has a proper incidence (Δ(G) + 2)-coloring, and which was disproved in 1997 by Guiduli, who pointed out that Paley graphs with large maximum degree are counterexamples (as yet they are the only known counterexamples to the "(Δ + 2) conjecture", and none of them is the Mycielskian of any graph). To support our conjecture, we prove in this paper that if G is the Mycielskian of a graph H with |H| ≥ 3Δ(H) + 2, then we can construct a proper incidence (Δ(G) + 1)-coloring of G in cubic time, and that if G is the Mycielskian of an incidence (Δ(H) + 1)-colorable graph H with |H| ≤ 2Δ(H), or the Mycielskian of an incidence (Δ(H) + 2)-colorable graph H with |H| ≥ 2Δ(H) + 1, then G has a proper incidence (Δ(G) + 2)-coloring. The minimum positive integer k such that the Mycielskian of a cycle or a path has a proper incidence k-coloring is also determined. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
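The properness condition in the abstract above is directly checkable: incidences (u, e) and (w, f) are in conflict when u = w, e = f, or the edge uw is one of e, f. A small checker sketch (our helper names; edges are given as vertex pairs):

```python
def incidences(edges):
    """All incidences (v, e) of a graph given as a list of edges."""
    return [(v, e) for e in edges for v in e]

def adjacent_incidences(i1, i2):
    """(u, e) and (w, f) conflict if u = w, or e = f, or uw ∈ {e, f}."""
    (u, e), (w, f) = i1, i2
    if u == w or e == f:
        return True
    uw = frozenset((u, w))
    return uw == frozenset(e) or uw == frozenset(f)

def is_proper_incidence_coloring(edges, color):
    """Check that every pair of conflicting incidences receives
    distinct colors; `color` maps each incidence (v, e) to a color."""
    inc = incidences(edges)
    return all(color[a] != color[b]
               for i, a in enumerate(inc) for b in inc[i + 1:]
               if adjacent_incidences(a, b))
```

On the path 0-1-2, three colors suffice, while reusing a color on the conflicting pair (0, 01) and (1, 12) is rejected.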
21. A self-stabilizing algorithm for constructing a minimal reachable directed acyclic graph with two senders and two targets.
- Author
- Kim, Yonghwan, Shibata, Masahiro, Sudo, Yuichi, Nakamura, Junya, Katayama, Yoshiaki, and Masuzawa, Toshimitsu
- Subjects
- *DIRECTED acyclic graphs, *ALGORITHMS, *UNDIRECTED graphs, *DIRECTED graphs
- Abstract
In this paper, we introduce a new graph structure named an ST-reachable directed acyclic graph, a directed acyclic graph (DAG) that guarantees reachability (i.e., the existence of a directed path) from every sender to every target. Given an arbitrarily connected undirected graph G = (V, E) and two sets of vertices, senders S (⊂ V) and targets T (⊂ V), we consider the construction of a minimal ST-reachable DAG by changing some undirected edges to arcs and removing the remaining edges. In this paper, we present the necessary and sufficient condition under which a minimal ST-reachable DAG can be constructed when |S| ≤ 2 and |T| ≤ 2, and propose a self-stabilizing algorithm for constructing a minimal ST-reachable DAG (if it exists) when an arbitrarily connected undirected graph, S (|S| ≤ 2), and T (|T| ≤ 2) are given. Moreover, our proposed algorithm can detect the non-existence of an ST-reachable DAG if no such DAG exists for the given graph and vertex sets S and T. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
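The defining reachability property of an ST-reachable DAG can be checked with plain BFS. This is only a verifier for a given orientation (names ours), not the paper's self-stabilizing construction:

```python
from collections import deque

def is_st_reachable(n, arcs, senders, targets):
    """Check that every sender reaches every target in a digraph
    on vertices 0..n-1, with `arcs` a list of directed (u, v) pairs."""
    adj = [[] for _ in range(n)]
    for u, v in arcs:
        adj[u].append(v)
    for s in senders:
        seen = {s}
        queue = deque([s])
        while queue:  # standard BFS from sender s
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        if not set(targets) <= seen:
            return False
    return True
```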
22. New approximation algorithms for the heterogeneous weighted delivery problem.
- Author
-
Bilò, Davide, Gualà, Luciano, Leucci, Stefano, Proietti, Guido, and Rossi, Mirko
- Subjects
- *
APPROXIMATION algorithms , *WEIGHTED graphs , *POLYNOMIAL time algorithms , *ENERGY consumption , *ALGORITHMS - Abstract
• We study the heterogeneous weighted delivery (HWD) problem, which was introduced in [Bärtschi et al., STACS'17]. • We efficiently 8-approximate HWD with k = O(log n) agents, closing an open problem in [Bärtschi et al., ATMOS'17]. • We design an O(k)-approximation algorithm that runs in polynomial time regardless of the value of k. • We design a polynomial-time Õ(log^3 n)-approximation algorithm. • We show that the HWD problem is 36-approximable in polynomial time when each agent has one of two possible consumption rates. We study the heterogeneous weighted delivery (HWD) problem introduced in [Bärtschi et al., STACS'17] where k heterogeneous mobile agents (e.g., robots, vehicles, etc.), initially positioned on vertices of an n-vertex edge-weighted graph G, have to deliver m messages. Each message is initially placed on a source vertex of G and needs to be delivered to a target vertex of G. Each agent can move along the edges of G and carry at most one message at any time. Each agent has a rate of energy consumption per unit of traveled distance, and the goal is to deliver all messages using the minimum overall amount of energy. This problem has been shown to be NP-hard even when k = 1, and is 4ρ-approximable, where ρ is the ratio between the maximum and minimum energy consumption rates of the agents. In this paper, we provide approximation algorithms with approximation ratios independent of the energy consumption rates. First, we design a polynomial-time 8-approximation algorithm for k = O(log n), closing a problem left open in [Bärtschi et al., ATMOS'17]. This algorithm can be turned into an O(k)-approximation algorithm that always runs in polynomial time, regardless of the value of k. Then, we show that the HWD problem is 36-approximable in polynomial time when each agent has one of two possible consumption rates. Finally, we design a polynomial-time Õ(log^3 n)-approximation algorithm for the general case. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. APX-hardness and approximation for the k-burning number problem.
- Author
-
Mondal, Debajyoti, Rajasingh, Angelin Jemima, Parthiban, N., and Rajasingh, Indra
- Subjects
- *
CONTAGION (Social psychology) , *NP-hard problems , *APPROXIMATION algorithms , *DIFFUSION processes , *INFORMATION processing , *GENERALIZATION , *ALGORITHMS - Abstract
Consider an information diffusion process on a graph G that starts with k > 0 burnt vertices and, at each subsequent step, burns the neighbors of the currently burnt vertices as well as k other unburnt vertices. The k-burning number of G is the minimum number of steps b_k(G) such that all vertices can be burned within b_k(G) steps. (The last step may have fewer than k unburnt vertices available, in which case all of them are burned.) The 1-burning number coincides with the well-known burning number, which was proposed to model the spread of social contagion. The generalization to the k-burning number allows us to examine different worst-case contagion scenarios by varying the spread factor k. In this paper we prove that computing the k-burning number is APX-hard, for any fixed constant k. We then give an O((n + m) log n)-time 3-approximation algorithm for computing the k-burning number, for any k ≥ 1, where n and m are the numbers of vertices and edges, respectively. Finally, we show that even if the burning sources are given as input, computing a burning sequence itself is an NP-hard problem. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
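The burning process described above is straightforward to simulate for a given schedule of sources. This sketch (names ours) returns how many steps a schedule needs, not the optimal b_k(G):

```python
def k_burning_steps(adj, sources):
    """Simulate the k-burning process for a given source schedule.

    adj: dict vertex -> set of neighbours; sources: list of batches
    (each of size <= k), one per step, starting with the initial batch.
    Returns the number of steps until every vertex is burnt."""
    burnt = set()
    for step, batch in enumerate(sources, start=1):
        if step > 1:  # fire spreads from already-burnt vertices first
            burnt |= {w for v in burnt for w in adj[v]}
        burnt |= set(batch)  # then k (or fewer) new sources are burnt
        if burnt == set(adj):
            return step
    raise ValueError("schedule does not burn the whole graph")
```

For example, on the path 0–1–2–3–4 with k = 1, burning 1 and then 4 finishes in 3 steps, matching b_1(P5) = 3.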
24. Almost optimal algorithms for diameter-optimally augmenting trees.
- Author
-
Bilò, Davide
- Subjects
- *
METRIC spaces , *TREE graphs , *TREES , *ALGORITHMS , *PROBLEM solving - Abstract
We consider the problem of augmenting an n-vertex tree with one shortcut in order to minimize the diameter of the resulting graph. The tree is embedded in an unknown space, and we have access to an oracle that, when queried on a pair of vertices u and v, reports the weight of the shortcut (u, v) in constant time. Previously, the problem was solved in O(n^2 log^3 n) time for general weights [Oh and Ahn, ISAAC 2016], in O(n^2 log n) time for trees embedded in a metric space [Große et al., Int. J. FCS], and in O(n log n) time for paths embedded in a metric space [Wang, Comput. Geom.]. Very recently, an algorithm for general weights requiring O(n^2 log n) time and O(n) space was developed [Wang and Zhao, Theor. Comp. Science]. Finally, a (1 + ε)-approximation algorithm running in O(n + 1/ε^3) time was designed for paths embedded in R^d, for constant values of d [Große et al., Int. J. FCS]. In this paper we design an asymptotically time-optimal algorithm for trees and general edge weights that requires O(n^2) time and O(n log n) space. Moreover, for trees embedded in a metric space, we design (i) an exact O(n log n)-time algorithm and (ii) a (1 + ε)-approximation algorithm that runs in O(n + ε^{-1} log ε^{-1}) time. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. A simple linear time algorithm to solve the MIST problem on interval graphs.
- Author
-
Li, Peng, Shang, Jianhui, and Shi, Yi
- Subjects
- *
GRAPH connectivity , *PROBLEM solving , *ALGORITHMS , *GRAPH algorithms , *TELECOMMUNICATION systems , *POLYNOMIAL time algorithms - Abstract
Motivated by the design of cost-efficient communication networks, the Maximum Internal Spanning Tree (MIST) problem asks, for a connected graph, for a spanning tree with the maximum number of internal vertices. In 2018, Xingfu Li et al. presented a polynomial-time algorithm that finds a maximum internal spanning tree of a connected interval graph. Based on the structure of normal orderings of interval graphs, we present in this paper a simple linear-time algorithm that solves the problem when restricted to connected interval graphs. The proof provides additional insight into the linear-time algorithm on interval graphs. • The MIST problem is solvable by a simple linear-time algorithm when restricted to connected interval graphs. • The structure of normal orderings of interval graphs is used to solve the Maximum Internal Spanning Tree problem. • The proof of the main results is easier to follow, with proper case analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
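As a naive point of comparison (not the paper's algorithm), any spanning tree gives a lower bound on the achievable number of internal vertices; here a DFS tree, with "internal" meaning tree-degree at least 2 (names ours):

```python
from collections import Counter

def internal_vertices_of_dfs_tree(adj, root):
    """Build a DFS spanning tree of a connected graph and count its
    internal (non-leaf, i.e. tree-degree >= 2) vertices.
    adj: dict vertex -> iterable of neighbours."""
    parent = {root: None}
    stack = [root]
    while stack:  # iterative DFS recording tree edges via `parent`
        u = stack.pop()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                stack.append(v)
    deg = Counter()
    for v, p in parent.items():
        if p is not None:  # each tree edge contributes to both ends
            deg[v] += 1
            deg[p] += 1
    return sum(1 for v in parent if deg[v] >= 2)
```

On the path 0–1–2–3 the only spanning tree is the path itself, with 2 internal vertices.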
26. Computing the longest common almost-increasing subsequence.
- Author
-
Bhuiyan, Mohammad Tawhidul Hasan, Alam, Muhammad Rashed, and Rahman, M. Sohel
- Subjects
- *
PROBLEM solving - Abstract
In this paper, we revisit the problem of computing a longest common almost-increasing subsequence (LCAIS): given two input sequences, the goal is to compute a common subsequence that is 'almost' increasing. The notion of an almost-increasing subsequence offers an interesting relaxation of the increasing condition. This problem has been studied in the literature, albeit with some constraints. Here, we present a number of algorithmic approaches that solve the problem more generally and efficiently. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
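The abstract does not spell out the relaxation. Under one common definition (every element exceeds the maximum of all earlier elements minus a fixed constant c), the property can be checked in linear time; both the definition and the names below are assumptions, as the paper's exact relaxation may differ:

```python
def is_almost_increasing(seq, c):
    """Check one common almost-increasing condition: every element is
    strictly larger than the maximum of all earlier elements minus c.
    (Assumed definition; the paper's relaxation may differ.)"""
    best = float("-inf")
    for x in seq:
        if x <= best - c:  # falls too far below the running maximum
            return False
        best = max(best, x)
    return True
```

With c = 0 this is essentially the increasing condition, while larger c tolerates bounded dips.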
27. The work function algorithm for the paging problem.
- Author
-
Zhang, Wenming, Hu, Huan, Cheng, Yongxi, Li, Hongmei, and Wang, Haizhen
- Subjects
- *
ALGORITHMS , *ONLINE algorithms , *DECISION making - Abstract
The paging problem is that of deciding which pages to keep in a memory of k pages in order to minimize the number of page faults in a two-level store. It is a special case of the famous k-server problem. In this paper, we first show that WFA (the work function algorithm) is kM/m-competitive for the k-server problem when the distances between points all lie in [m, M] (0 < m ≤ M), and thus WFA is k-competitive for the paging problem. We further show that the well-known FIFO algorithm for the paging problem is just a special case of WFA. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
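The FIFO rule mentioned above is simple to simulate; a fault-counting sketch (names ours):

```python
from collections import deque

def fifo_faults(requests, k):
    """Count page faults of the FIFO policy with a cache of k pages."""
    cache = deque()  # front = page resident the longest
    faults = 0
    for page in requests:
        if page in cache:
            continue  # hit: FIFO does not reorder on access
        faults += 1
        if len(cache) == k:
            cache.popleft()  # evict the oldest-loaded page
        cache.append(page)
    return faults
```

The cyclic sequence 1, 2, 3, 1, 2, 3 with k = 2 faults on every request, a classic worst case for FIFO.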
28. Efficient algorithms for ride-hitching in UAV travelling.
- Author
-
Li, Songhua, Li, Minming, Duan, Lingjie, and Lee, Victor C.S.
- Subjects
- *
ONLINE algorithms , *TRAVEL time (Traffic engineering) , *GREEDY algorithms , *DRONE aircraft , *ALGORITHMS - Abstract
The unmanned aerial vehicle (UAV) has emerged as a promising way to provide delivery and other mobile services to customers rapidly, yet it drains its stored energy quickly when travelling and (even if solar-powered) takes time to recharge on the way to its destination. To address this issue, existing works focus on the UAV's offline path planning with designated system vehicles providing charging service. Nevertheless, in emergency cases and rural areas where system vehicles are not available, public vehicles can provide a more cost-saving and feasible service for UAV travelling. In this paper, we explore, from an online-algorithm perspective, how a single UAV can save flying distance by exploiting public vehicles so as to minimize its overall travel time. For the offline setting, where the information of future vehicles is known far ahead of time, we present an O(n^2)-time shortest-path-like optimal solution by delicately transforming the problem into a graph capturing both time and energy constraints. For the online setting, where public vehicles appear in real time and inform the UAV of their trip information only a certain time Δt beforehand, we first construct lower bounds on the competitive ratio for different Δt. Then, we propose two online algorithms: a greedy algorithm, MyopicHitching, that greedily hitches truck rides, and an improved algorithm, Δt-Adaptive, that further tolerates a waiting time in hitching a ride. Our theoretical analysis shows that Δt-Adaptive is asymptotically optimal in the sense that its ratio approaches the proposed lower bounds as Δt increases. • Mathematical formulation of ride-hitching in UAV travelling. • An O(n^2)-time shortest-path-like optimal offline algorithm. • Lower bounds on the competitive ratio of the problem. • A defect identified in a greedy online algorithm. • A near-optimal time-adaptive online algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
29. Approximation algorithm for prize-collecting sweep cover with base stations.
- Author
-
Liang, Wei and Zhang, Zhao
- Subjects
- *
APPROXIMATION algorithms , *ALGORITHMS - Abstract
• Propose a new problem: the prize-collecting sweep cover problem. • Give a 5-LMP algorithm for prize-collecting sweep cover assuming a constant number of base stations. • Give a 2-LMP algorithm for the rooted prize-collecting forest with k components problem. In a sweep cover problem, positions of interest (PoIs) are required to be visited periodically by mobile sensors. In this paper, we propose a new sweep cover problem, the prize-collecting sweep cover problem (PCSC), in which a penalty is incurred by those PoIs that are not sweep-covered, and the goal is to minimize the covering cost plus the penalty. Assuming that every mobile sensor has to be linked to some base station and that the number of base stations is upper bounded by a constant, we present a 5-LMP (Lagrangian Multiplier Preserving) algorithm. As a stepping stone, we propose the prize-collecting forest with k components problem (PCF_k), which might be interesting in its own right, and present a 2-LMP algorithm for rooted PCF_k. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. One-visibility cops and robber on trees: Optimal cop-win strategies.
- Author
-
Yang, Boting
- Subjects
- *
ROBBERS , *TREES , *ALGORITHMS - Abstract
• For trees, we introduce a key structure, called road, for characterising optimal cop-win strategies. • We give an O(n log n) time algorithm to compute an optimal cop-win strategy for a tree with n vertices. • We establish relations between zero-visibility and one-visibility copnumbers on trees. In the one-visibility cops and robber game on a graph, the robber is visible to the cops only when the robber is in the closed neighbourhood of the vertices occupied by the cops. The one-visibility copnumber of a graph is the minimum number of cops required to capture the robber on the graph. In this paper, we investigate the one-visibility cops and robber game on trees. For trees, we introduce a key structure, called road, for characterising optimal cop-win strategies. We give an O(n log n) time algorithm to compute an optimal cop-win strategy for a tree with n vertices. We also establish relations between zero-visibility and one-visibility copnumbers on trees. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
31. A new algorithm for computing the nearest polynomial to multiple given polynomials via weighted ℓ2,q-norm minimization and its complex extension.
- Author
-
Hu, Wenyu, Huang, Huiying, Zhang, Rong, Huang, Jinhong, and Yi, Yun
- Subjects
- *
POLYNOMIALS , *ALGORITHMS , *PROBLEM solving , *CONCAVE functions - Abstract
A new algorithm is proposed in this paper for computing the nearest polynomial to multiple given polynomials with a given zero in the real case, where the distance between polynomials is defined by the weighted ℓ_{2,q} norm (0 < q ≤ 2). First, the problem is formulated as a univariate constrained non-convex minimization problem, where prior information about the coefficients of the polynomial can be embedded by selecting proper weights. Then, an iteratively reweighted algorithm is designed to solve the obtained problem, and its convergence and rate of convergence are demonstrated uniformly for all q in (0, 2]. Since all existing methods for computing the nearest polynomial to multiple given polynomials with a given zero are limited to the real case, we extend the results to the complex case. Finally, two representative examples that compute the nearest real and complex polynomials, respectively, are presented to show the effectiveness of the proposed algorithm. • A weighted ℓ_{2,q}-norm based minimization problem is constructed with prior information about the polynomial embedded. • After converting the problem into a univariate one, an iteratively reweighted algorithm is proposed to solve it. • Using properties of a concave function, the convergence and rate of convergence of the proposed algorithm are established. • This is the first study of computing the nearest polynomial to multiple given polynomials with a given zero in the complex case. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Self-stabilizing synchronous unison in directed networks.
- Author
-
Altisen, Karine, Cournier, Alain, Defalque, Geoffrey, and Devismes, Stéphane
- Subjects
- *
DIRECTED graphs , *UNDIRECTED graphs , *TOPOLOGY , *SYNCHRONIZATION , *ALGORITHMS - Abstract
Self-stabilization is a general paradigm that characterizes the ability of a distributed system to recover from transient faults. Since its introduction by Dijkstra in 1974, self-stabilization has been successfully applied to efficiently solve many networking tasks. However, most of the literature focuses on bidirectional networks. Now, in today's networks such as WSNs, some communication channels may be one-way only. Considering such network topologies, a.k.a. directed graphs, makes self-stabilization more complicated, and sometimes even impossible. In this paper, we investigate the gap in terms of requirements and efficiency when considering a directed graph instead of an undirected one as network topology for a self-stabilizing algorithm. Our case study is a variant of a synchronous unison algorithm proposed by Arora et al.; the synchronous unison being a clock synchronization problem. • We address self-stabilization of dynamic problems in anonymous directed networks. • We propose a topology-related necessary condition to solve synchronous unison in directed networks. • We present a simple self-stabilizing synchronous unison algorithm for directed networks. • We study conditions under which the proposed algorithm is self-stabilizing, and at what cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. A hybrid modified PSO algorithm for the inverse p-median location problem in fuzzy random environment.
- Author
-
Taghikhani, Sepideh, Baroughi, Fahimeh, and Alizadeh, Behrooz
- Subjects
- *
PARTICLE swarm optimization , *VALUE at risk , *ALGORITHMS , *RANDOM variables , *NP-hard problems - Abstract
This paper considers the inverse p -median location problem with variable edge lengths and variable vertex weights on general graphs in which the modification costs are the fuzzy random variables. We present a model for the problem in fuzzy random environment in which the objective value is computed by conditional value at risk criterion. Then, we show that the problem is NP-hard under this criterion. Therefore, a new hybrid modified particle swarm optimization algorithm is proposed to obtain the approximate optimal solution of the proposed model. Finally, computational experiments are given to illustrate high efficiency of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Complexity of a root clustering algorithm for holomorphic functions.
- Author
-
Batra, Prashant and Sharma, Vikram
- Subjects
- *
ALGORITHMS , *EXPONENTIAL functions , *TRIGONOMETRIC functions , *HOLOMORPHIC functions , *TREE size , *COMPUTER workstation clusters , *TRANSCENDENTAL functions , *SUBDIVISION surfaces (Geometry) - Abstract
Approximating the roots of a holomorphic function in an input box is a fundamental problem in many domains. Most algorithms in the literature for this problem are conditional, i.e., they make simplifying assumptions, such as that all the roots are simple, that there are no roots on the boundary of the input box, or that the underlying machine model is the Real RAM. Root clustering is a generalization of the root approximation problem that allows for errors in the computation and makes no assumption on the multiplicity of the roots. An unconditional algorithm for computing a root clustering of a holomorphic function was given by Yap, Sagraloff and Sharma in 2013 [36]. They proposed a subdivision-based algorithm using effective predicates based on Pellet's test while avoiding any exact comparison with zero (using soft zero comparisons instead). In this paper, we analyze the running time of their algorithm. We use the continuous amortization framework to derive an upper bound on the size of the subdivision tree. We specialize this bound to the case of polynomials and some simple transcendental functions, such as the exponential and trigonometric sine. We show that the algorithm takes exponential time even for these simple functions, unlike the case of polynomials. We also derive a bound on the bit-precision used by the algorithm. To the best of our knowledge, this is the first such result for holomorphic functions. We introduce new geometric parameters, such as the relative growth of the function on the input box, for analyzing the algorithm. Thus, our estimates naturally generalize the known results for the case of polynomials. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. A linear time algorithm for the r-gathering problem on the line.
- Author
-
Sarker, Anik, Sung, Wing-Kin, and Sohel Rahman, M.
- Subjects
- *
PROBLEM solving , *ALGORITHMS - Abstract
In this paper, we revisit the r-gathering problem. Given sets C and F of points on the plane and a distance d(c, f) for each c ∈ C and f ∈ F, an r-gathering of C to F is an assignment A of C to open facilities F′ ⊆ F such that r or more members of C are assigned to each open facility. The cost of an r-gathering is max_{c ∈ C} d(c, A(c)), and the r-gathering problem asks for an r-gathering minimizing this cost. We study the r-gathering problem when C and F lie on a line and present an O(|C| + |F|)-time algorithm for it. Our solution is optimal, since any algorithm needs to read C and F at least once. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
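For the line case above, a much slower dynamic-programming baseline is easy to write. It relies on the standard observation that some optimal solution serves consecutive customers (in sorted order) as groups; this is our own O(|C|^2 |F|) sketch, not the paper's linear-time algorithm:

```python
import math

def r_gathering_cost(C, F, r):
    """Min-max cost of an r-gathering for customers C and facilities F
    on a line (both sorted coordinate lists), via a DP over consecutive
    customer groups, each served by its best single facility."""
    n = len(C)

    def group_cost(i, j):
        # cheapest facility covering customers C[i], ..., C[j-1]
        return min(max(abs(c - f) for c in C[i:j]) for f in F)

    dp = [math.inf] * (n + 1)  # dp[j] = best cost for the first j customers
    dp[0] = 0.0
    for j in range(r, n + 1):
        for i in range(0, j - r + 1):  # last group is C[i..j-1], size >= r
            if dp[i] < math.inf:
                dp[j] = min(dp[j], max(dp[i], group_cost(i, j)))
    return dp[n]
```

Two well-separated customer pairs with a facility near each are served at cost equal to half the pair spread.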
36. Computing the multi-string BWT and LCP array in external memory.
- Author
-
Bonizzoni, Paola, Della Vedova, Gianluca, Pirola, Yuri, Previtali, Marco, and Rizzi, Raffaella
- Subjects
- *
MEMORY , *SUFFIXES & prefixes (Grammar) , *GENERALIZATION , *ALGORITHMS , *GENOMES - Abstract
Indexing very large collections of strings, such as those produced by widespread next-generation sequencing technologies, relies heavily on the multi-string generalization of the Burrows–Wheeler Transform (BWT): the large memory requirements of in-memory approaches have stimulated recent developments in external memory algorithms. The related problem of computing the Longest Common Prefix (LCP) array of a set of strings is instrumental in computing the suffix-prefix overlaps among strings, an essential step for many genome assembly algorithms. In a previous paper, we presented an in-memory divide-and-conquer method for building the BWT and LCP in which we merge partial BWTs with a forward approach to sorting suffixes. In this paper, we propose an alternative backward strategy that yields an external memory method to simultaneously build the BWT and the LCP array of a collection of m strings of different lengths. Over a set of strings of constant length k, the algorithm requires O(mkl) time and I/O volume, using O(k + m) main memory, where l is the maximum value in the LCP array. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
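For intuition, the single-string BWT can be built naively by sorting all rotations. This in-memory toy (names ours) is far from the paper's external-memory multi-string construction:

```python
def bwt(s, sentinel="$"):
    """Naive Burrows-Wheeler Transform of one string: append a sentinel
    smaller than every character, sort all rotations, and read off the
    last column."""
    t = s + sentinel
    rotations = sorted(t[i:] + t[:i] for i in range(len(t)))
    return "".join(rot[-1] for rot in rotations)
```

The classic example: bwt("banana") groups the repeated characters together, which is what makes the transform compressible.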
37. Range partitioning within sublinear time: Algorithms and lower bounds.
- Author
-
Ning, Baoling, Li, Jianzhong, and Jiang, Shouxu
- Subjects
- *
PARALLEL algorithms , *LINEAR orderings , *COMPUTING platforms , *ALGORITHMS , *DATA distribution - Abstract
• A lower bound on the sampling cost of range partitioning algorithms in the RAM model is proved. • Two lower bounds for sublinear external range partitioning algorithms are proved. • A sublinear external range partitioning algorithm for a novel data distribution is designed. Range partitioning is a typical and widely used data partitioning method and has become a core operation in most big data computing platforms. Given an input L of N data items admitting a total order, the goal of range partitioning is to divide the whole input into k ranges containing the same number of data items. There is a trivial lower bound of Ω(N) for exact partitioning algorithms, since they must at least make a full scan of the whole data. In the context of big data computing, even algorithms with O(N) time are not always considered efficient enough; the ultimate goal of designing algorithms on big data is usually to solve problems within sublinear time. Therefore, it is well motivated and important to study sublinear algorithms for the range partitioning problem. This paper aims to answer three questions. For the internal memory (RAM) model, since a sophisticated sampling-based (ϵ, δ)-approximation partitioning algorithm with O(k log(N/δ)/ϵ^2) time cost has been proposed, the first question is what lower bound we can obtain for sublinear partitioning algorithms. For the external memory (I/O) model, as far as we know, no previous work gives external partitioning algorithms with performance guarantees within sublinear time; therefore, the other two questions are what upper and lower bounds we can achieve for sublinear external partitioning algorithms. To answer these questions, based on the RAM and I/O models, the paper studies lower and upper bounds for the range partitioning problem. For the RAM model, a lower bound of Ω(k(1 − δ)/ϵ^2) on the cost of sampling-based partitioning algorithms is proved. For the I/O model, two lower bounds on the sampling cost required by sublinear external range partitioning algorithms are proved, which indicate that at least a full scan of the whole input is needed in the worst case and that a general sublinear external partitioning algorithm does not exist. Motivated by the hard instances utilized in the proofs of the lower bounds, a model for describing the input distributions of the range partitioning problem in practical applications is proposed. Finally, for the special cases described by the model, a sublinear external partitioning algorithm with O(k log(N/δ) w/(B ϵ^2)) I/O cost is designed. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
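The sampling idea behind such approximation algorithms can be sketched as follows. This is a simplification without the paper's (ϵ, δ) guarantee, and all names are ours: sort a random sample and take its quantiles as range boundaries:

```python
import random

def approx_range_boundaries(data, k, sample_size, seed=0):
    """Pick k-1 splitters from a random sample so that the k induced
    ranges are approximately equal-sized. A sketch of the sampling
    approach only; no (eps, delta) guarantee is claimed here."""
    rng = random.Random(seed)
    sample = sorted(rng.sample(data, min(sample_size, len(data))))
    m = len(sample)
    # the i-th splitter is the i/k quantile of the sample
    return [sample[(i * m) // k] for i in range(1, k)]
```

When the sample is the whole input, the splitters are exact quantiles; smaller samples trade accuracy for sublinear work.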
38. A sub-linear time algorithm for approximating k-nearest-neighbor with full quality guarantee.
- Author
-
Ma, Hengzhao and Li, Jianzhong
- Subjects
- *
ALGORITHMS , *SURETYSHIP & guaranty , *TIME , *DISTANCES - Abstract
• We propose an algorithm that unifies the distance and recall criteria for the approximate k-Nearest-Neighbor problem. • The algorithm is the first for this problem to provide theoretical guarantees on both approximation criteria. • The algorithm is the first to achieve sub-linear query time while providing such guarantees. In this paper we propose an algorithm for the approximate k-Nearest-Neighbors problem. In the existing research there are two kinds of approximation criteria: the distance criterion and the recall criterion. No previous algorithm offers theoretical guarantees for both approximation criteria. The algorithm proposed in this paper unifies the two kinds of approximation criteria and has full theoretical guarantees; furthermore, its query time is sub-linear. As far as we know, it is the first algorithm that achieves both sub-linear query time and full theoretical approximation guarantees. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
39. How hard is safe bribery?
- Author
-
Karia, Neel, Mallick, Faraaz, and Dey, Palash
- Subjects
- *
BRIBERY , *SOCIAL choice , *SOCIAL problems - Abstract
Bribery in an election is one of the well-studied control problems in computational social choice. In this paper, we propose and study the safe bribery problem. Here the goal of the briber is to ask the bribed voters to vote in such a way that the briber never prefers the original winner (of the unbribed election) to the new winner, even if the bribed voters do not fully follow the briber's advice. Indeed, in many applications of bribery, campaigning for example, the briber often has limited control over whether the bribed voters eventually follow her recommendation, and thus it is conceivable that the bribed voters partially or fully ignore it. We provide a comprehensive complexity-theoretic landscape of the safe bribery problem for many common voting rules. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
40. A linear time algorithm for connected p-centdian problem on block graphs.
- Author
-
Nguyen, Kien Trung, Teh, Wen Chean, Hung, Nguyen Thanh, and Nguyen-Thu, Huong
- Subjects
- *
NP-complete problems , *CHARTS, diagrams, etc. , *ALGORITHMS - Abstract
We study the problem of finding a set of p connected vertices in a block graph that minimizes the convex combination of the median and center objective functions; this is the connected p-centdian problem on block graphs. Throughout this paper, we focus on unweighted block graphs. For block graphs with two different edge lengths, the problem is NP-complete. For block graphs with uniform edge lengths, we focus on a special case of the connected p-centdian problem in which the median and center functions receive equal weight. For this case, we consider a lexicographic order based on certain vertex labels and prove that there exists a connected p-centdian that contains a predetermined 1-centdian of the underlying graph. Applying this result, we develop a linear-time algorithm for the problem. Finally, the problem under a general convex combination of the median and center functions is also discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
41. Better guarantees for k-median with service installation costs.
- Author
-
Zhang, Zhen, Zhou, Yipeng, and Yu, Shaoqian
- Subjects
- *
APPROXIMATION algorithms , *COST , *ALGORITHMS - Abstract
k-median with service installation costs is a frequently encountered problem in applications involving multiple service requirements, which generalizes the standard k -median problem in that each client is associated with a specific service and can only be connected to the facilities where the service is installed. In this paper, we give a (13.349 + ϵ) -approximation algorithm for the k -median with service installation costs problem, improving upon the previous best approximation ratio of 18. We propose a new deterministic rounding approach to deal with the challenges caused by the service requirements, which is the crucial step in getting the improved ratio. • We give a (13.349 + ϵ) -approximation algorithm for the k -median with service installation costs problem. • We obtain a well-behaved fractional solution to the problem based on the framework of Lagrangian relaxation. • We give a new deterministic rounding approach to deal with the challenges caused by the service requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. A method to calculate the number of spanning connected unicyclic(bicyclic) subgraphs in 2-separable networks.
- Author
-
Liang, Jing, Zhao, Haixing, and Yin, Jun
- Subjects
- *
ENTROPY , *ALGORITHMS - Abstract
The numbers of spanning connected unicyclic subgraphs and spanning connected bicyclic subgraphs are essential parameters in studying network reliability, yet they have hardly been studied. In 1983, Liu studied the enumeration of spanning connected unicyclic (bicyclic) subgraphs via the determinant of the cycle adjacency matrix, but for large networks computing this determinant is extremely complex. In this paper, we present a reduction method to compute the number of spanning connected unicyclic (bicyclic) subgraphs of 2-separable networks. As an application, we enumerate the spanning connected unicyclic (bicyclic) subgraphs of fractal scale-free networks with a linear-time algorithm and obtain an exact expression for their number. On this basis, the entropy of the number of spanning connected unicyclic (bicyclic) subgraphs is determined. This research provides helpful guidance for computing the number of spanning connected unicyclic (bicyclic) subgraphs of general 2-connected networks. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
43. An efficient algorithm for the longest common palindromic subsequence problem.
- Author
-
Liang, Ting-Wei, Yang, Chang-Biau, and Huang, Kuo-Si
- Subjects
- *
ALGORITHMS , *DYNAMIC programming , *PROBLEM solving , *PALINDROMES , *POLYNOMIAL time algorithms - Abstract
The longest common palindromic subsequence (LCPS) problem is a variant of the longest common subsequence (LCS) problem. Given two input sequences A and B, the LCPS problem is to find a common subsequence of A and B that is a palindrome of maximal length. The LCPS problem was first proposed by Chowdhury et al. (2014) [9], who gave a dynamic programming algorithm with O(m^2 n^2) time, where m and n denote the lengths of A and B, respectively. This paper proposes a diagonal-based algorithm for solving the LCPS problem in O(L(m − L)R log n) time and O(RL) space, where R denotes the number of match pairs between A and B, and L denotes the LCPS length. In our algorithm, a 3-dimensional minima-finding algorithm is invoked to overcome the domination problem. Experimental results show that our algorithm is practically efficient compared with some previously published algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
44. Connectivity and constructive algorithms of disjoint paths in dragonfly networks.
- Author
-
Wu, Suying, Fan, Jianxi, Cheng, Baolei, Yu, Jia, and Wang, Yan
- Subjects
- *
DRAGONFLIES , *HIGH performance computing , *ALGORITHMS - Abstract
Dragonfly networks are widely used in current High Performance Computing (HPC) systems owing to their low global network diameter and other communication-performance advantages such as modularity and cost-effectiveness. The original definition of the dragonfly network was very loose on account of its uncertain and diversified global link arrangements. In this paper, we study the logical structure of the dragonfly network, which can be treated as a compound graph of complete graphs. Firstly, we give a general definition of the dragonfly network, named DF(n, h, g), and a specific definition under the relative global link arrangement, named D(n, h). Then, we prove that the connectivity of D(n, h) is n − 1 + h. Finally, we propose an O(n) algorithm that constructs disjoint paths between any two distinct vertices of D(n, h) and show that the maximum length of these disjoint paths is no more than 7. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
45. On the average-case complexity of pattern matching with wildcards.
- Author
-
Barton, Carl
- Subjects
- *
HUFFMAN codes , *ALGORITHMS - Abstract
Pattern matching with wildcards is a string matching problem whose goal is to find all factors of a text t of length n that match a pattern x of length m, where wildcards (characters that match everything) may be present. In this paper we present a number of complexity results and fast average-case algorithms for pattern matching where wildcards are allowed in the pattern; the results are easily adapted to the case where wildcards are allowed in the text as well. We analyse the average-case complexity of these algorithms and derive non-trivial time bounds. These are the first average-case results for pattern matching with wildcards that provide a provable separation in time complexity between exact pattern matching and pattern matching with wildcards. We introduce the wc-period of a string, which is the period of the binary mask x_b where x_b[i] = a if x[i] ≠ ϕ, and b otherwise. We denote the length of the wc-period of a string x by ▪. We show the following results for a constant 0 < ϵ < 1 and a pattern x of length m and g wildcards with ▪, where the prefix of length p contains g_p wildcards: • If lim_{m→∞} g_p/p = 0, there is an optimal algorithm running in O(n log_σ m / m) time on average. • If lim_{m→∞} g_p/p = 1 − ϵ, there is an algorithm running in O(n log_σ m log² p / m) time on average. • If lim_{m→∞} g/m = lim_{m→∞} 1 − f(m) = 1, any algorithm takes at least Ω(n log_σ m / f(m)) time on average. [ABSTRACT FROM AUTHOR]
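As a baseline for the separation discussed above, the naive algorithm checks every alignment in O(nm) time regardless of how many wildcards the pattern contains. A minimal sketch, using '?' in place of the paper's ϕ symbol:

```python
WILDCARD = "?"

def wildcard_occurrences(text, pattern):
    """All starting positions at which pattern occurs in text, where the
    wildcard '?' matches any single character. Naive O(nm) algorithm."""
    n, m = len(text), len(pattern)
    return [i for i in range(n - m + 1)
            if all(p == WILDCARD or p == t
                   for p, t in zip(pattern, text[i:i + m]))]
```

For instance, wildcard_occurrences("abcabd", "a?c") returns [0]: the pattern matches at position 0 but not at position 3 ("abd").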
- Published
- 2022
- Full Text
- View/download PDF
46. Faster algorithm for pathwidth one vertex deletion.
- Author
-
Tsur, Dekel
- Subjects
- *
ALGORITHMS , *GRAPH algorithms - Abstract
In the Pathwidth One Vertex Deletion (POVD) problem the input is a graph G and an integer k, and the goal is to decide whether there is a set of at most k vertices whose removal from G results in a graph with pathwidth at most 1. In this paper we give an algorithm for POVD whose running time is O*(3.888^k). [ABSTRACT FROM AUTHOR]
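Graphs of pathwidth at most 1 are exactly the caterpillar forests: acyclic graphs in which deleting all leaves from each tree leaves a (possibly empty) path. Recognizing the target class is therefore easy; choosing the deletion set is the hard part the branching algorithm addresses. A sketch of the recognition step (representation and naming are our own):

```python
def has_pathwidth_at_most_one(n, edges):
    """True iff the simple graph on vertices 0..n-1 is a caterpillar
    forest, i.e., has pathwidth at most 1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Forest check: acyclic  <=>  |E| = |V| - (number of components)
    seen, comps = set(), 0
    for s in range(n):
        if s in seen:
            continue
        comps += 1
        stack, _ = [s], seen.add(s)
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    if len(edges) != n - comps:
        return False
    # Delete leaves; the remaining spine must have maximum degree <= 2.
    leaves = {v for v in adj if len(adj[v]) <= 1}
    return all(len(adj[v] - leaves) <= 2 for v in adj if v not in leaves)
```

A star is a caterpillar, but the "spider" with three legs of length 2 is not, and any graph containing a cycle fails the forest check.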
- Published
- 2022
- Full Text
- View/download PDF
47. Fragile complexity of adaptive algorithms.
- Author
-
Bose, Prosenjit, Cano, Pilar, Fagerberg, Rolf, Iacono, John, Jacob, Riko, and Langerman, Stefan
- Subjects
- *
ALGORITHMS - Abstract
The fragile complexity of a comparison-based algorithm is f(n) if each input element participates in O(f(n)) comparisons. In this paper, we explore the fragile complexity of algorithms adaptive to various restrictions on the input, i.e., algorithms with a fragile complexity parameterized by a quantity other than the input size n. We show that searching for the predecessor in a sorted array has fragile complexity Θ(log k), where k is the rank of the query element, both in a randomized and a deterministic setting. For predecessor searches, we also show how to optimally reduce the amortized fragile complexity of the elements in the array. We also prove the following results: selecting the k-th smallest element has expected fragile complexity O(log log k) for the element selected; deterministically finding the minimum element has fragile complexity Θ(log(Inv)) and Θ(log(Runs)), where Inv is the number of inversions in a sequence and Runs is the number of increasing runs in a sequence; deterministically finding the median has fragile complexity O(log(Runs) + log log n) and Θ(log(Inv)); deterministic sorting has fragile complexity Θ(log(Inv)) but fragile complexity Θ(log n) regardless of the number of runs. • Fragile complexity parameterized by a quantity other than the input size. • Predecessor has fragile complexity Θ(log k). • Randomized selection has expected fragile complexity O(log log k). • Minimum has fragile complexity Θ(log(Inv)) and Θ(log(Runs)). • Median has fragile complexity Θ(log(Inv)) and O(log(Runs) + log log n). [ABSTRACT FROM AUTHOR]
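The O(log k) upper bound for the query element can be achieved with the textbook exponential (doubling) search: the query participates in roughly 2 log₂ k comparisons. A sketch (the paper additionally bounds the fragile complexity of the array elements, which this simple version does not optimize):

```python
from bisect import bisect_right

def predecessor(arr, q):
    """Index of the predecessor of q in sorted arr (largest element <= q),
    or -1 if none. The query takes part in O(log k) comparisons, where k
    is the rank of q: the doubling phase plus a binary search in the
    final window."""
    hi = 1
    while hi < len(arr) and arr[hi - 1] <= q:  # double until we overshoot
        hi *= 2
    # the answer lies in the window [hi // 2, hi); binary-search it
    return bisect_right(arr, q, lo=hi // 2, hi=min(hi, len(arr))) - 1
```

For arr = [1, 3, 5, 7, 9, 11, 13, 15], predecessor(arr, 6) returns index 2 (the element 5).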
- Published
- 2022
- Full Text
- View/download PDF
48. In-place initializable arrays.
- Author
-
Katoh, Takashi and Goto, Keisuke
- Subjects
- *
DATA structures , *ALGORITHMS - Abstract
An initializable array is an array that supports read and write operations on any element as well as initialization of the entire array. This paper proposes a simple in-place algorithm implementing an initializable array of length N with ℓ-bit entries, ℓ ∈ O(w), in Nℓ + 1 bits on the word RAM model with word size w bits; i.e., the proposed array requires only 1 extra bit on top of a normal array of length N with ℓ-bit entries. Our algorithm supports all three operations in constant worst-case time and runs in place, using at most a constant number of words (O(w) bits) during each operation. The time and space complexities are optimal, since it has been proven that no implementation of an initializable array with no extra bit can support all the operations in constant worst-case time (Hagerup and Kammer, 2017 [12]). Our algorithm significantly improves upon the best previously known algorithm (Navarro, 2014 [5]), which uses N + o(N) extra bits to support all the operations in constant worst-case time. [ABSTRACT FROM AUTHOR]
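For contrast, the folklore constant-time initializable array uses on the order of N extra words rather than 1 extra bit: a stack of written positions "vouches" for the entries that are live since the last init. A Python sketch of that baseline (illustrative only, since Python lists are initialized on allocation anyway):

```python
class InitializableArray:
    """Folklore initializable array: O(1) read, write, and init, at the
    cost of ~2N extra words -- the baseline the 1-extra-bit structure
    improves upon."""
    def __init__(self, n, fill=0):
        self.fill = fill
        self.data = [0] * n   # raw values (may be stale after init)
        self.when = [0] * n   # when[i]: stack position that vouches for i
        self.stack = []       # indices written since the last init

    def init(self, fill):
        self.fill = fill
        self.stack = []       # O(1): invalidates every entry at once

    def _valid(self, i):
        j = self.when[i]
        return j < len(self.stack) and self.stack[j] == i

    def read(self, i):
        return self.data[i] if self._valid(i) else self.fill

    def write(self, i, v):
        if not self._valid(i):
            self.when[i] = len(self.stack)
            self.stack.append(i)
        self.data[i] = v
```

A read of an untouched cell returns the fill value without any per-cell initialization ever having happened; the two-way pointer check between when[] and the stack is what rules out stale garbage.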
- Published
- 2022
- Full Text
- View/download PDF
49. The impact of the Gabriel subgraph of the visibility graph on the gathering of mobile autonomous robots.
- Author
-
Li, Shouwei, Meyer auf der Heide, Friedhelm, and Podlipyan, Pavel
- Subjects
- *
AUTONOMOUS robots , *MOBILE robots , *ALGORITHMS , *SPEED limits , *VISIBILITY - Abstract
In this paper, we reconsider the well-known discrete, round-based Go-To-The-Center algorithm due to Ando, Suzuki, and Yamashita [2] for gathering n autonomous mobile robots with limited viewing range in the plane. Remarkably, this algorithm exploits the fact that many collisions of robots occur during its execution. Such collisions are interpreted as a success because collided robots are assumed to behave identically from then on. This is acceptable under the assumption that each robot is represented by a single point; otherwise, collisions should be avoided. In this paper, we consider a continuous Go-To-The-Center algorithm in which the robots continuously observe the positions of their neighbors and adapt their speed (assuming a speed limit) and direction. Our first results are time bounds of O(n²) for gathering in the two-dimensional Euclidean plane and Θ(n) in one dimension. Our main contribution is the introduction and evaluation of a continuous algorithm which performs Go-To-The-Center considering only the neighbors of a robot with respect to the Gabriel subgraph of the visibility graph, i.e., the Go-To-The-Gabriel-Center algorithm. We show that this modification still correctly executes gathering in one and two dimensions, with the same time bounds as above. Simulations exhibit a marked difference in the behavior of the two algorithms: whereas many collisions occur during a run of the Go-To-The-Center algorithm, typically only one, namely the final collision, occurs during a run of the Go-To-The-Gabriel-Center algorithm. We prove this "collisionless property" of the Go-To-The-Gabriel-Center algorithm in one dimension. In the two-dimensional Euclidean plane, we conjecture that the "collisionless property" holds for almost every initial configuration.
We support our conjecture with measurements obtained from the simulation where robots execute both continuous Go-To-The-Center and Go-To-The-Gabriel-Center algorithms. [ABSTRACT FROM AUTHOR]
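The Gabriel condition used to thin a robot's neighborhood is purely local: u and v remain neighbors only if no third robot lies strictly inside the disk with diameter uv. A sketch of the test over known positions (O(n³), and full visibility is assumed here, so the paper's limited-range restriction is omitted):

```python
def gabriel_edges(points):
    """Edges of the Gabriel graph of a point set: (u, v) is kept iff no
    third point lies strictly inside the disk with diameter uv."""
    n = len(points)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            (x1, y1), (x2, y2) = points[u], points[v]
            cx, cy = (x1 + x2) / 2, (y1 + y2) / 2          # disk center
            r2 = ((x1 - x2) ** 2 + (y1 - y2) ** 2) / 4     # squared radius
            if all((px - cx) ** 2 + (py - cy) ** 2 >= r2
                   for w, (px, py) in enumerate(points) if w not in (u, v)):
                edges.append((u, v))
    return edges
```

For three collinear robots at (0,0), (2,0), (1,0), the middle robot blocks the long edge, leaving only the two short edges, which is exactly the thinning effect exploited by Go-To-The-Gabriel-Center.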
- Published
- 2021
- Full Text
- View/download PDF
50. Approximation of set multi-cover via hypergraph matching.
- Author
-
Gorgi, Abbass, El Ouali, Mourad, Srivastav, Anand, and Hachimi, Mohamed
- Subjects
- *
HYPERGRAPHS , *APPROXIMATION algorithms , *ALGORITHMS , *LOGICAL prediction - Abstract
For a given b ∈ ℕⁿ and a hypergraph H = (V, E) with maximum degree Δ, the b-set multicover problem considered in this paper may be stated as follows: find a minimum-cardinality subset C ⊆ E such that no vertex v_i ∈ V is contained in fewer than b_i hyperedges of C. Peleg, Schechtman, and Wool (1997) conjectured that for any fixed Δ and b = min_i b_i, the problem cannot be approximated with a ratio strictly smaller than δ := Δ − b + 1, unless P = NP. In this paper, we show that the conjecture of Peleg et al. does not hold for ℓ-uniform hypergraphs by presenting a polynomial-time approximation algorithm based on a matching/covering duality for hypergraphs due to Ray-Chaudhuri (1960), which we convert into an approximative form. The algorithm yields a ratio smaller than (1 − b/((ℓ + 1)Δ)) · Δ/b. Moreover, we prove that the lower bound conjectured by Peleg et al. holds for regular hypergraphs under the unique games conjecture. [ABSTRACT FROM AUTHOR]
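For comparison with the matching-based algorithm, the classic greedy heuristic for set multicover repeatedly picks the hyperedge covering the most still-deficient vertices. A hedged sketch (names and representation are our own, and its guarantee is the usual logarithmic one, not the ratio above):

```python
def greedy_multicover(hyperedges, b):
    """Greedy b-set multicover: repeatedly take the unused hyperedge that
    covers the largest number of still-deficient vertices. Returns the
    indices of the chosen hyperedges."""
    demand = dict(b)                  # remaining coverage demand per vertex
    unused = list(range(len(hyperedges)))
    cover = []
    while any(d > 0 for d in demand.values()):
        def gain(e):                  # still-deficient vertices e covers
            return sum(1 for v in hyperedges[e] if demand.get(v, 0) > 0)
        best = max(unused, key=gain)  # ValueError if no hyperedge is left
        if gain(best) == 0:
            raise ValueError("instance is infeasible")
        unused.remove(best)
        cover.append(best)
        for v in hyperedges[best]:
            if demand.get(v, 0) > 0:
                demand[v] -= 1
    return cover
```

Each hyperedge is used at most once, matching the requirement that C be a subset of E rather than a multiset.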
- Published
- 2020
- Full Text
- View/download PDF