105 results
Search Results
2. On the probe problem for (r,ℓ)-well-coveredness: Algorithms and complexity.
- Author: Faria, Luerbio and Souza, Uéverton S.
- Subjects: *INDEPENDENT sets; *CHARTS, diagrams, etc.; *ALGORITHMS
- Abstract:
• C-probe graphs are graphs having a stable set N to which edges can be added to obtain a supergraph in C. • A graph G is an (r, ℓ)-graph if V(G) can be partitioned into r independent sets and ℓ cliques. • A graph G is well-covered if every maximal independent set is also maximum. • This paper studies the complexity of recognizing probe graphs for the class of (r, ℓ)-well-covered graphs. Let C be a class of graphs. A graph G = (V, E) is C-probe if V(G) can be partitioned into two sets, non-probes N and probes P, where N is an independent set and new edges may be added between some non-probe vertices such that the resulting graph is in the class C. In this case, we say that (N, P) is a C-probe partition of G. In the Unpartitioned Probe problem for a graph class C we are given a graph G and asked whether G has a C-probe partition; that is, the problem consists of recognizing the class of C-probe graphs. A graph G = (V, E) is an (r, ℓ)-graph when V can be partitioned into (S1, S2, ..., Sr, K1, K2, ..., Kℓ) such that S1, S2, ..., Sr are independent sets and K1, K2, ..., Kℓ are cliques. A graph G is well-covered if every maximal independent set is also maximum, and it is (r, ℓ)-well-covered if it is both well-covered and an (r, ℓ)-graph. In this paper, we study the complexity of the Unpartitioned Probe problem for the class of (r, ℓ)-well-covered graphs. We classify all but the (2,0) and (1,2) cases. [ABSTRACT FROM AUTHOR]
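To make the well-coveredness notion concrete, here is a minimal brute-force sketch (not the paper's recognition algorithm; the graphs and names are illustrative only) that checks whether every maximal independent set of a small graph is maximum:

```python
from itertools import combinations

def is_independent(adj, s):
    """True if no two vertices in s are adjacent."""
    return all(v not in adj[u] for u, v in combinations(s, 2))

def maximal_independent_sets(adj):
    """Enumerate all maximal independent sets by brute force (small graphs only)."""
    vertices = list(adj)
    ind = [set(s) for r in range(len(vertices) + 1)
           for s in combinations(vertices, r) if is_independent(adj, s)]
    # maximal = not strictly contained in any other independent set
    return [s for s in ind if not any(s < t for t in ind)]

def is_well_covered(adj):
    """A graph is well-covered iff all maximal independent sets have equal size."""
    sizes = {len(s) for s in maximal_independent_sets(adj)}
    return len(sizes) == 1

# C4 (4-cycle) is well-covered; P3 (path on 3 vertices) is not:
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
p3 = {0: {1}, 1: {0, 2}, 2: {1}}
```

The 4-cycle's two maximal independent sets both have size 2, while the path on three vertices has maximal independent sets of sizes 1 and 2, so only the former is well-covered.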
- Published: 2022
- Full Text: View/download PDF
3. New algorithms for fair k-center problem with outliers and capacity constraints.
- Author: Wu, Xiaoliang, Feng, Qilong, Xu, Jinhui, and Wang, Jianxin
- Subjects: *APPROXIMATION algorithms; *POLYNOMIAL approximation; *METRIC spaces; *POLYNOMIAL time algorithms; *ALGORITHMS
- Abstract:
The fair k-center problem has received considerable attention recently. In the fair k-center problem, we are given a set X of points in a metric space and a parameter k ∈ Z+, where the points in X are divided into several groups, and each point is assigned a color to denote which group it is in. The goal is to partition X into k clusters such that the number of cluster centers with each color is equal to a given value, and the k-center objective is minimized. In this paper, we consider the fair k-center problem with outliers and capacity constraints, denoted as the fair k-center with outliers (FkCO) problem and the capacitated fair k-center (CFkC) problem, respectively. The outliers constraint allows up to z outliers to be discarded when computing the objective function, while the capacity constraint requires that each cluster have size no more than L. In this paper, we design a fixed-parameter tractable (FPT) approximation algorithm and a polynomial-time approximation algorithm for the above two problems. In particular, our algorithms give (1 + ϵ)-approximations in FPT time for the FkCO and CFkC problems in doubling metric spaces. Moreover, we also propose a polynomial-time 3-approximation algorithm for the FkCO problem under some reasonable assumptions. [ABSTRACT FROM AUTHOR]
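The fair and capacitated variants build on the plain k-center objective; below is a sketch of the classic Gonzalez greedy 2-approximation for unconstrained k-center, a standard baseline rather than this paper's FPT or 3-approximation algorithms (the point set and distance function are illustrative):

```python
def gonzalez_k_center(points, k, dist):
    """Classic greedy 2-approximation for (unconstrained) k-center:
    repeatedly pick the point farthest from the current centers."""
    centers = [points[0]]
    while len(centers) < k:
        farthest = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(farthest)
    return centers

def k_center_cost(points, centers, dist):
    """Max distance from any point to its nearest center (the k-center objective)."""
    return max(min(dist(p, c) for c in centers) for p in points)

# 1-D toy example with Euclidean distance:
pts = [0.0, 1.0, 2.0, 10.0, 11.0]
d = lambda a, b: abs(a - b)
ctrs = gonzalez_k_center(pts, 2, d)  # picks 0.0, then the farthest point 11.0
```

The fairness constraint of the paper would additionally restrict which points may serve as centers of each color, which this greedy baseline ignores.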
- Published: 2024
- Full Text: View/download PDF
4. Formal verification of parallel prefix sum and stream compaction algorithms in CUDA.
- Author: Safari, Mohsen and Huisman, Marieke
- Subjects: *COMPACTING; *PARALLEL algorithms; *SUFFIXES & prefixes (Grammar); *HIGH performance computing; *ROCK glaciers; *ALGORITHMS
- Abstract:
• We add CUDA verification support to the VerCors verifier. • We verify data race-freedom and functional correctness of CUDA implementations of two parallel prefix sum algorithms. • We verify data race-freedom and functional correctness of a CUDA implementation of a stream compaction algorithm. GPUs are an important part of any High Performance Computing (HPC) architecture. To make optimal use of the specifics of a GPU architecture, we need programming models that naturally support the parallel execution model of a GPU. CUDA and OpenCL are two widely used examples of such programming models. Furthermore, we also need to redesign algorithms such that they adhere to this parallel programming model, and we need to be able to prove the correctness of these redesigned algorithms. In this paper we study two examples of such parallelized algorithms, and we discuss how to prove their correctness (data race freedom and (partial) functional correctness) using the VerCors program verifier. First, we prove the correctness of two parallel algorithms solving the prefix sum problem. Second, we show how such a prefix sum algorithm is used as a basic block in a stream compaction algorithm, and we prove correctness of this stream compaction algorithm, taking advantage of the earlier correctness proof for the prefix sum algorithm. The proofs as described in this paper are developed over the CUDA implementations of these algorithms. In earlier work, we had already shown correctness of a more high-level version of the algorithm. This paper discusses how we add support for reasoning about CUDA programs in VerCors, and it then shows how we can redo the verification at the level of the CUDA code. We also discuss some practical challenges that we had to address to carry out the actual CUDA-level verifications. [ABSTRACT FROM AUTHOR]
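The prefix sum computation being verified can be illustrated by a sequential Python simulation of the work-efficient (Blelloch) exclusive scan, the kind of algorithm typically mapped onto a CUDA thread block; this sketch is illustrative only and is not the verified CUDA code:

```python
def blelloch_exclusive_scan(a):
    """Sequential simulation of the work-efficient (Blelloch) exclusive
    prefix sum. Input length must be a power of two, as in a typical
    single-block GPU kernel."""
    x = list(a)
    n = len(x)
    # up-sweep (reduce) phase: build partial sums in a binary tree
    d = 1
    while d < n:
        for i in range(2 * d - 1, n, 2 * d):
            x[i] += x[i - d]
        d *= 2
    # down-sweep phase: push prefix sums back down the tree
    x[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(2 * d - 1, n, 2 * d):
            x[i - d], x[i] = x[i], x[i] + x[i - d]
        d //= 2
    return x

# exclusive scan of [3, 1, 7, 0] is [0, 3, 4, 11]
```

Stream compaction then uses such a scan of 0/1 keep-flags to compute the output index of every surviving element, which is why the paper can reuse the prefix sum proof.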
- Published: 2022
- Full Text: View/download PDF
5. Algorithms for diameters of unicycle graphs and diameter-optimally augmenting trees.
- Author: Wang, Haitao and Zhao, Yiming
- Subjects: *DIAMETER; *PROBLEM solving; *ALGORITHMS; *TREES; *SAWLOGS
- Abstract:
We consider the problem of computing the diameter of a unicycle graph (i.e., a graph with a unique cycle). We present an O(n) time algorithm for the problem, where n is the number of vertices of the graph. This improves the previous best O(n log n) time solution [Oh and Ahn, ISAAC 2016]. Using this algorithm as a subroutine, we solve the problem of adding a shortcut to a tree so that the diameter of the new graph (which is a unicycle graph) is minimized; our algorithm takes O(n^2 log n) time and O(n) space. The previous best algorithms solve the problem in O(n^2 log^3 n) time and O(n) space [Oh and Ahn, ISAAC 2016], or in O(n^2) time and O(n^2) space [Bilò, ISAAC 2018]. • The paper presents a linear time algorithm for computing the diameter of unicycle graphs. • The paper gives a new algorithm which finds a new edge to add to a tree so that the diameter of the new graph is minimized. • The algorithms rely on certain domination relations among edges of the graphs. [ABSTRACT FROM AUTHOR]
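For intuition, the tree case of the diameter computation is solved by the classic double-BFS trick, sketched below (this is not the paper's O(n) unicycle algorithm; the example graph is illustrative):

```python
from collections import deque

def bfs_farthest(adj, src):
    """Return (farthest vertex, its distance) from src via BFS."""
    dist = {src: 0}
    q = deque([src])
    far = src
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > dist[far]:
                    far = v
                q.append(v)
    return far, dist[far]

def tree_diameter(adj):
    """Classic double BFS: BFS from any vertex reaches one endpoint of a
    diameter; a second BFS from that endpoint yields the diameter."""
    u, _ = bfs_farthest(adj, next(iter(adj)))
    _, d = bfs_farthest(adj, u)
    return d

# path 0-1-2-3 has diameter 3
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

The double-BFS argument relies on the tree structure; on a unicycle graph the single cycle breaks it, which is what makes the paper's linear-time result nontrivial.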
- Published: 2021
- Full Text: View/download PDF
6. The Steiner k-eccentricity on trees.
- Author: Li, Xingfu, Yu, Guihai, Klavžar, Sandi, Hu, Jie, and Li, Bo
- Subjects: *TREES; *ALGORITHMS; *GRAPH algorithms
- Abstract:
We study the Steiner k-eccentricity on trees, which generalizes the Steiner 3-eccentricity studied in the paper [On the average Steiner 3-eccentricity of trees, arXiv:2005.10319]. We achieve much stronger properties for the Steiner k-ecc tree than those in the previous paper. Based on this, a linear time algorithm is devised to calculate the Steiner k-eccentricity of a vertex in a tree. On the other hand, lower and upper bounds on the average Steiner k-eccentricity index of a tree of order n are established based on a novel technique which is quite different from, and also much easier to follow than, the earlier one. [ABSTRACT FROM AUTHOR]
- Published: 2021
- Full Text: View/download PDF
7. How hard is safe bribery?
- Author: Karia, Neel, Mallick, Faraaz, and Dey, Palash
- Subjects: *BRIBERY; *SOCIAL choice; *SOCIAL problems
- Abstract:
Bribery in an election is one of the well-studied control problems in computational social choice. In this paper, we propose and study the safe bribery problem. Here the goal of the briber is to ask the bribed voters to vote in such a way that the briber never prefers the original winner (of the unbribed election) to the new winner, even if the bribed voters do not fully follow the briber's advice. Indeed, in many applications of bribery, campaigning for example, the briber often has limited control over whether the bribed voters eventually follow her recommendation, and thus it is conceivable that the bribed voters can either partially or fully ignore the briber's recommendation. We provide a comprehensive complexity-theoretic landscape of the safe bribery problem for many common voting rules. [ABSTRACT FROM AUTHOR]
- Published: 2023
- Full Text: View/download PDF
8. Order based algorithms for the core maintenance problem on edge-weighted graphs.
- Author: Zhang, Feiteng, Liu, Bin, Liu, Zhenming, and Fang, Qizhi
- Subjects: *ALGORITHMS; *INTEGERS; *WEIGHTED graphs
- Abstract:
The cohesive subgraph k-core is a maximal connected subgraph of a simple graph with minimum degree δ ≥ k, where k ≥ 0 is an integer. Define the core number of a vertex w as the maximum k such that w is contained in a k-core. Our main concerns are the core decomposition problem, which is computing the core numbers of all vertices in a static graph, and the core maintenance problem, which is updating the core numbers in a dynamic graph. Although core numbers can be updated by rerunning a core decomposition algorithm, only a small fraction of the vertices' core numbers change after a modification of the graph. Thus, it is desirable to update core numbers locally to reduce the cost. In this paper, we study the core maintenance problem on edge-weighted graphs by using the vertex sequence k-order, which is the order in which the core decomposition algorithm removes vertices. We design core maintenance algorithms for inserting one edge at a time, together with a method for updating the k-order, which noticeably reduce the search range and the time cost. For the removal case, we use the existing subcore algorithm for core maintenance and modify it with our k-order update method. Finally, we conduct extensive experiments to evaluate the effectiveness and efficiency of our algorithms, which show that the order-based algorithm outperforms existing methods for core maintenance on edge-weighted graphs. • We solve the core maintenance problem on edge-weighted graphs with a tool named weighted k-order. • The weighted k-order, a vertex sequence, is defined on edge-weighted graphs for the first time. • We give the properties of vertices whose core numbers increase in the k-order for the insertion case on edge-weighted graphs. • An order-based core maintenance algorithm for the insertion case on edge-weighted graphs is given, which outperforms existing methods. • We give methods to update the k-order after inserting or removing one edge, for the next round of core maintenance. [ABSTRACT FROM AUTHOR]
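For background, the static core decomposition that the maintenance algorithms avoid rerunning can be sketched as min-degree peeling on an unweighted graph (illustrative only; the paper's contribution is the order-based maintenance on edge-weighted graphs):

```python
def core_numbers(adj):
    """Core decomposition by peeling: repeatedly remove a vertex of minimum
    remaining degree; a vertex's core number is the largest minimum degree
    seen up to its removal."""
    deg = {v: len(adj[v]) for v in adj}
    removed = set()
    core = {}
    k = 0
    while len(removed) < len(adj):
        v = min((u for u in adj if u not in removed), key=deg.get)
        k = max(k, deg[v])
        core[v] = k
        removed.add(v)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
    return core

# triangle {0,1,2} plus a pendant vertex 3 attached to 0:
g = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
cores = core_numbers(g)  # triangle vertices lie in the 2-core, vertex 3 only in the 1-core
```

The removal order produced by this peeling is exactly the kind of "k-order" the paper maintains incrementally instead of recomputing after every edge update.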
- Published: 2023
- Full Text: View/download PDF
9. Compact suffix automata representations for searching long patterns.
- Author: Faro, Simone and Scafiti, Stefano
- Subjects: *SUFFIXES & prefixes (Grammar); *ROBOTS; *ALGORITHMS
- Abstract:
Automata-based algorithms have always dominated the history of string matching, thanks to their intrinsic nature as recognizers. Among them, those based on the suffix automaton deserve a special mention, because its use has always led to elegant and very efficient solutions in practice. In this paper, we present two novel string matching algorithms based on the suffix automaton which are particularly suitable for very long strings. The first algorithm simulates the suffix automaton through the well-known bit-parallelism approach, while the second makes use of an alternative non-standard simulation. We mainly focus on the second solution, which turns out to be more flexible in practice, and present some extensions of it. Experimental results suggest that both our solutions are very efficient in practical cases, scaling much better as the length of the pattern increases. Moreover, our second approach is also competitive with the most effective solutions available for the exact string matching problem. • We present two novel suffix automaton based algorithms specifically designed for very long strings. • We describe a generic non-standard approach for simulating the suffix automaton of a string. • Our first solution is a bit-parallel extension of the BNDM algorithm which can be encoded using few bits. • Our second solution is related to the occurrence of unique factors in the pattern. • Experimental results suggest that both our solutions are very efficient in practical cases. [ABSTRACT FROM AUTHOR]
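As background for the first solution, here is a compact sketch of the classic BNDM algorithm, which simulates the suffix (factor) automaton via bit-parallelism (a standard textbook formulation, not the authors' extended variants; patterns longer than a machine word are exactly the case their techniques target):

```python
def bndm(p, t):
    """Backward Nondeterministic DAWG Matching: scan each text window
    right-to-left, tracking with bitmask D which pattern factors match
    the suffix read so far; bit m-1 of D flags a recognized pattern prefix."""
    m, n = len(p), len(t)
    B = {}
    for i in range(m):                      # bit (m-1-i) stands for p[i]
        B[p[i]] = B.get(p[i], 0) | (1 << (m - 1 - i))
    occurrences = []
    pos = 0
    while pos <= n - m:
        j, last = m - 1, m
        D = (1 << m) - 1
        while D:
            D &= B.get(t[pos + j], 0)
            if D & (1 << (m - 1)):          # a pattern prefix was recognized
                if j > 0:
                    last = j                # remember shift candidate
                else:
                    occurrences.append(pos) # full match at this window
            D = (D << 1) & ((1 << m) - 1)
            j -= 1
        pos += last                         # safe shift past mismatched factors
    return occurrences
```

In Python, big integers make the mask length unbounded; in C one would be limited to word-size patterns, which motivates the compact representations the paper proposes.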
- Published: 2023
- Full Text: View/download PDF
10. Faster parameterized algorithms for two vertex deletion problems.
- Author: Tsur, Dekel
- Subjects: *ALGORITHMS; *GRAPH algorithms; *INTEGERS
- Abstract:
In the l-Path Vertex Cover problem (resp., the l-Component Order Connectivity problem) the input is an undirected graph G and an integer k. The goal is to decide whether there is a set of vertices of size at most k whose deletion from G results in a graph that does not contain a path with l vertices (resp., does not contain a connected component with at least l vertices). In this paper we give a parameterized algorithm for l-Path Vertex Cover when l = 5, 6, 7, whose running times are O*(3.945^k), O*(4.947^k), and O*(5.951^k), respectively. We also give an algorithm for l-Component Order Connectivity whose running time is O*((l − 1 − ϵ_l)^k) for every l ≥ 4, where ϵ_l > 0 for every l. [ABSTRACT FROM AUTHOR]
- Published: 2023
- Full Text: View/download PDF
11. How majority-vote crossover and estimation-of-distribution algorithms cope with fitness valleys.
- Author: Witt, Carsten
- Subjects: *MATHEMATICAL analysis; *ALGORITHMS; *EVOLUTIONARY computation; *VALLEYS; *EVOLUTIONARY algorithms
- Abstract:
The benefits of using crossover in crossing fitness gaps have been studied extensively in evolutionary computation. Recent runtime results show that majority-vote crossover is particularly efficient at optimizing the well-known Jump benchmark function, which includes a fitness gap next to the global optimum. Also estimation-of-distribution algorithms (EDAs), which use an implicit crossover, are much more efficient on Jump than typical mutation-based algorithms. However, the allowed gap size for polynomial runtimes with EDAs is at most logarithmic in the problem dimension n. In this paper, we investigate variants of the Jump function where the gap is shifted and appears in the middle of the typical search trajectory. Such gaps can still be overcome efficiently in time O(n log n) by majority-vote crossover and an estimation-of-distribution algorithm, even for gap sizes almost n. However, if the global optimum is located in the gap instead of the usual all-ones string, majority-vote crossover would nevertheless approach the all-ones string and be highly inefficient. In sharp contrast, an EDA can still find such a shifted optimum efficiently. Thanks to a general property called fair sampling, the EDA will with high probability sample from almost every fitness level of the function, including levels in the gap, and sample the global optimum even though the overall search trajectory points towards the all-ones string. Finally, we derive limits on the gap size allowing efficient runtimes for the EDA. • We study two algorithms using majority-vote crossover and an estimation-of-distribution algorithm on modified jump functions. • We derive theorems on the algorithms' runtimes using rigorous mathematical analyses. • All three algorithms can overcome the fitness gap of the jump functions efficiently for moderate gap sizes. • All but the estimation-of-distribution algorithm usually fail to find an optimum located within the gap. • The estimation-of-distribution algorithm is efficient since it samples fairly on all fitness levels towards the optimum. [ABSTRACT FROM AUTHOR]
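For reference, the standard Jump benchmark mentioned in this abstract can be written down directly (a common textbook definition; the paper's shifted variants differ in where the gap is placed):

```python
def jump(x, k):
    """Standard Jump_k benchmark on a bit string x of length n:
    fitness climbs with the number of ones, except in the 'gap' of the
    k - 1 highest non-optimal levels, where fitness drops sharply."""
    n = len(x)
    ones = sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones  # inside the gap next to the all-ones optimum

# n = 5, k = 2: the all-ones string scores 7, while a string with four
# ones falls into the gap and scores only 1.
```

A mutation-based hill climber must flip k bits simultaneously to jump the gap, which is the effect the crossover and EDA analyses in the paper circumvent.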
- Published: 2023
- Full Text: View/download PDF
12. Execution trace sets for real computation.
- Author: Thompson, Declan
- Subjects: *ALGORITHMS; *MACHINERY
- Abstract:
• Analyses general algorithms as sets of sequences (tasks) over a domain. • Defines an equivalence relation on sets of sequences to recover algorithmic tasks. • Characterises the execution trace sets of Blum-Shub-Smale machines. • Compares trace-based approaches to algorithms with iterator-based accounts. Traditional models of computation and algorithms take a constructive approach, essentially considering a transition function over a given state space. Yet algorithms may also be considered in terms of the observed behaviour they generate. This paper marries these perspectives with a domain-independent, trace-based account of computational and algorithmic processes. We focus on the model of real computation due to Blum, Shub and Smale [3] and provide a precise characterisation of the execution trace sets generated by their machines. [ABSTRACT FROM AUTHOR]
- Published: 2023
- Full Text: View/download PDF
13. Tight competitive analyses of online car-sharing problems.
- Author: Liang, Ya-Chun, Lai, Kuan-Yun, Chen, Ho-Lin, Iwama, Kazuo, and Liao, Chung-Shou
- Subjects: *CAR sharing; *ALGORITHMS; *AUTOMOBILES
- Abstract:
The online car-sharing problem finds many real-world applications. The problem, proposed by Luo, Erlebach and Xu in 2018, mainly focuses on an online model in which there are two locations, 0 and 1, and k total cars. Each request, which specifies its pick-up time and pick-up location (among 0 and 1; the other is the drop-off location), is released in each stage a fixed amount of time before its specified start (i.e., pick-up) time. The time between the booking (i.e., release) time and the start time is enough to move empty cars between 0 and 1 for relocation if they are not used in that stage. The model, called kS2L-F, assumes that requests in each stage arrive sequentially regardless of the same booking time, and the decision (accept or reject) must be made immediately. The goal is to accept as many requests as possible. In spite of there being only two locations, the analysis does not seem easy, and the (tight) competitive ratio (CR) was previously known only to be 2 for k = 2 and 1.5 for a restricted value of k, i.e., a multiple of three. In this paper, we remove all the holes of unknown CRs; namely, we prove that the CR is 2k/(k + ⌊k/3⌋) for all k ≥ 2. Furthermore, if the algorithm can delay its decision until all requests have come in each stage, the CR is improved to roughly 4/3. We can take this advantage even further; precisely, we can achieve a CR of (2 + R)/3 if the number of requests in each stage is at most Rk, 1 ≤ R ≤ 2, where we do not have to know the value of R in advance. Finally, we demonstrate that randomization also helps to get (slightly) better CRs, and prove some lower bounds to show the tightness. [ABSTRACT FROM AUTHOR]
- Published: 2022
- Full Text: View/download PDF
14. Streaming algorithms for monotone non-submodular function maximization under a knapsack constraint on the integer lattice.
- Author: Tan, Jingjing, Wang, Fengmin, Ye, Weina, Zhang, Xiaoqing, and Zhou, Yang
- Subjects: *INTEGERS; *MAXIMAL functions; *BACKPACKS; *ALGORITHMS; *EXPECTATION-maximization algorithms
- Abstract:
The study of non-submodular maximization on the integer lattice is an important extension of submodular optimization. In this paper, streaming algorithms for maximizing non-negative monotone non-submodular functions under a knapsack constraint on the integer lattice are considered. We first design a two-pass StreamingKnapsack algorithm, combined with BinarySearch as a subroutine, for this problem. By introducing the DR ratio γ and the weak DR ratio γ_w of the non-submodular objective function, we obtain an approximation ratio of min{γ^2(1 − ε)/(2γ + 1), 1 − 1/(γ_w·2^γ) − ε}, with total memory complexity O(K log K/ε) and total query complexity O(log K · log(K/ε^2)/ε) for each element. Then, we design a one-pass streaming algorithm by dynamically updating the maximal function value among unit vectors along with the currently arriving element. Finally, in order to decrease the memory complexity, we design an improved StreamingKnapsack algorithm and reduce the memory complexity to O(K/ε^2). • A two-pass streaming algorithm is proposed for maximizing monotone non-submodular functions under a knapsack constraint on the integer lattice. • By introducing the DR ratio γ (and the weak DR ratio γ_w), we obtain the approximation ratio min{γ^2(1 − ε)/(2γ + 1), 1 − 1/(γ_w·2^γ) − ε} with memory O(K log K/ε). • A one-pass streaming algorithm is designed by dynamically updating the maximal function value among unit vectors. • We design an improved StreamingKnapsack algorithm and reduce the memory complexity to O(K/ε^2). [ABSTRACT FROM AUTHOR]
- Published: 2022
- Full Text: View/download PDF
15. MUL-tree pruning for consistency and optimal reconciliation - complexity and algorithms.
- Author: Gascon, Mathieu, Dondi, Riccardo, and El-Mabrouk, Nadia
- Subjects: *RECONCILIATION; *TREE pruning; *GENETIC techniques; *ALGORITHMS; *GENE families; *PHYLOGENY
- Abstract:
A multi-labeled tree (or MUL-tree) is a rooted tree in which every leaf is labeled by an element from some set X, but in which more than one leaf may be labeled by the same element of X. MUL-trees have applications in many fields. In phylogenetics, they can represent the evolution of gene families, where genes are represented by the species they belong to, the non-uniqueness of leaf labels coming from the fact that a given genome may contain many paralogous genes (genes related through duplication). In this paper, we consider two problems related to the leaf pruning (leaf removal) of MUL-trees leading to single-labeled trees. First, given a set of MUL-trees, the MUL-tree Set Pruning for Consistency (MULSETPC) problem asks for a pruning of each tree leading to a set of consistent trees, i.e., a collection of label-isomorphic single-labeled trees. Second, processing one tree at a time, the MUL-tree Pruning for Reconciliation (MULPR) problem asks for a pruning that minimizes a "reconciliation" cost, i.e., an evolutionary cost of the gene family through duplication, given a known species tree. We show that MULSETPC is NP-hard, that MULPR is W[2]-hard when parameterized by the duplication cost and NP-hard when the number of occurrences of each label in a MUL-tree is bounded by two, and that the optimization version of MULPR is hard to approximate within factor (1 − o(1)) ln |X|. We then develop a polynomial-time heuristic for MULPR and show its accuracy by comparing it to a brute-force exact method on a set of gene trees from the Ensembl Genome Browser. [ABSTRACT FROM AUTHOR]
- Published: 2022
- Full Text: View/download PDF
16. Loosely-stabilizing maximal independent set algorithms with unreliable communications.
- Author: Dong, Rongcheng, Sudo, Yuichi, Izumi, Taisuke, and Masuzawa, Toshimitsu
- Subjects: *INDEPENDENT sets; *ALGORITHMS
- Abstract:
Self-stabilization is a promising paradigm for designing highly adaptive distributed systems. However, it cannot be realized when the communication between processes is unreliable with some constant probability. To circumvent this impossibility, this paper adopts the concept of loose-stabilization, a practical alternative to self-stabilization, for the first time in this setting, and proposes three systematic approaches to realize loose-stabilization in the atomic-state model with probabilistically erroneous communications, namely the redundant-state approach, the step-up approach, and the repetition approach. Further, we apply these approaches to design three corresponding loosely-stabilizing algorithms for the maximal independent set problem. • Self-stabilizing solutions are precluded for most non-trivial problems in probabilistic error models. • Loose-stabilization is a relaxed variant of self-stabilization and practically equivalent to it. • Loose-stabilization can circumvent the impossibilities of self-stabilization in probabilistic error models. • We adopt the concept of loose-stabilization in the atomic-state model for the first time. • We give three loosely-stabilizing algorithms for the maximal independent set problem. [ABSTRACT FROM AUTHOR]
- Published: 2022
- Full Text: View/download PDF
17. Smallest number of vertices in a 2-arc-strong digraph without good pairs.
- Author: Gu, Ran, Gutin, Gregory, Li, Shasha, Shi, Yongtang, and Taoqiu, Zhenyu
- Subjects: *PROBLEM solving; *ALGORITHMS
- Abstract:
Branchings play an important role in digraph theory and algorithms. In particular, a chapter in the monograph of Bang-Jensen and Gutin, Digraphs: Theory, Algorithms and Applications, 2nd ed., 2009, is wholly devoted to branchings. The well-known Edmonds Branching Theorem provides a characterization for the existence of k arc-disjoint out-branchings rooted at the same vertex. A short proof of the theorem by Lovász (1976) leads to a polynomial-time algorithm for finding such out-branchings. A natural related problem is to characterize digraphs having an out-branching and an in-branching which are arc-disjoint. Such a pair of branchings is called a good pair. Bang-Jensen, Bessy, Havet and Yeo (2022) pointed out that it is NP-complete to decide if a given digraph has a good pair. They also showed that every digraph of independence number at most 2 and arc-connectivity at least 2 has a good pair, which settled a conjecture of Thomassen for digraphs of independence number 2. They then asked for the smallest number n_ngp of vertices in a 2-arc-strong digraph which has no good pair, and proved that 7 ≤ n_ngp ≤ 10. In this paper, we prove that n_ngp = 10, which solves the open problem. • The study of good pairs has relations to Tutte's Packing Theorem and the Edmonds Branching Theorem. • We answer a problem by Bang-Jensen, Bessy, Havet and Yeo about the minimum order of 2-arc-strong digraphs without good pairs. • We study the relationship between Hamiltonian dipaths and good pairs. • We generalize some results by Bang-Jensen, Bessy, Havet and Yeo on good pairs. [ABSTRACT FROM AUTHOR]
- Published: 2022
- Full Text: View/download PDF
18. Fine-grained polynomial functional encryption.
- Author: Zhu, Ziqi, Gong, Junqing, Wang, Yuyu, and Qian, Haifeng
- Subjects: *POLYNOMIALS; *CRYPTOGRAPHY; *ALGORITHMS
- Abstract:
In this work, we present fine-grained secure polynomial functional encryption (PFE) for degree d ≥ 1 over a field F: a ciphertext encrypts x ∈ F^n, a key is associated with a degree-d polynomial P, and decryption recovers P(x) ∈ F. Fine-grained cryptographic primitives are secure against a resource-bounded class of adversaries and can be computed by honest users with fewer resources than the adversaries. In this paper, we construct fine-grained PFE in two settings: (1) NC^1 PFE: based on the worst-case assumption NC^1 ⊊ ⊕L/poly, we construct a public-key polynomial functional encryption scheme achieving (i) selective simulation-based security and (ii) static function-hiding against adversaries in NC^1, where all honest algorithms are computable in AC^0[2] and ciphertext sizes are O(n). (2) AC^0 PFE: we construct a private-key polynomial functional encryption scheme that achieves unconditionally selective simulation-based security against adversaries in AC^0, where all honest algorithms are computable in AC^0 and ciphertext sizes are O(n). [ABSTRACT FROM AUTHOR]
- Published: 2024
- Full Text: View/download PDF
19. Decision algorithms for reversibility of 1D cellular automata under reflective boundary conditions.
- Author: Ma, Junchi, Wang, Chen, Chen, Weilin, Lin, Defu, and Wang, Chao
- Subjects: *CELLULAR automata; *ALGORITHMS; *DECISION making
- Abstract:
Reversibility is one of the most significant properties of cellular automata (CA). In this paper, we focus on the reversibility of one-dimensional finite CA under reflective boundary conditions (RBC). We present two algorithms for deciding the reversibility of one-dimensional CA under RBC. Both algorithms work not only for linear rules but also for non-linear rules. The first algorithm determines what we call the "strict reversibility" of a CA. The second algorithm computes what we call the "reversibility function" of a CA. Reversibility functions are proved to be periodic. Based on the algorithms, we list some experimental results for one-dimensional CA under RBC and analyse some features of this family of CA. [ABSTRACT FROM AUTHOR]
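To illustrate what reflective boundary conditions mean operationally, here is a one-step simulator for a radius-1 finite CA (illustrative only; the mirror convention shown duplicates the boundary cells, and the paper's exact convention and its decision algorithms may differ):

```python
def ca_step_rbc(cells, rule):
    """One synchronous step of a radius-1 CA on a finite array under
    reflective boundary conditions: the missing neighbors at the two ends
    are mirror copies of the boundary cells themselves.
    `rule` maps a (left, center, right) triple to the new cell state."""
    n = len(cells)
    padded = [cells[0]] + list(cells) + [cells[-1]]  # reflect at both ends
    return [rule((padded[i], padded[i + 1], padded[i + 2])) for i in range(n)]

# elementary rule 90: new state is the XOR of the two neighbors
rule90 = lambda t: t[0] ^ t[2]
```

Deciding reversibility then amounts to asking whether the global map induced by `ca_step_rbc` is injective on all configurations of a given length, which is what the paper's algorithms decide without brute-force enumeration.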
- Published: 2024
- Full Text: View/download PDF
20. Optimal algorithms for preemptive two-agent scheduling on uniform parallel machines.
- Author: Gu, Manzhan, Liu, Peihai, and Lu, Xiwen
- Subjects: *PRODUCTION scheduling; *INDEPENDENT sets; *SCHEDULING; *MACHINERY; *ALGORITHMS
- Abstract:
This paper studies two scheduling problems involving two agents A and B, where each agent has an independent set of jobs and all jobs are to be processed preemptively on uniform parallel machines. The aim in both problems is to minimize the objective function value of agent A subject to the makespan of agent B not exceeding a threshold T. In the first problem, there are m machines and the objective function of agent A is the makespan. In the second problem, there are two machines and the objective function of agent A is the total completion time. For each of the two problems, we propose an optimal algorithm and obtain some extended results. [ABSTRACT FROM AUTHOR]
- Published: 2024
- Full Text: View/download PDF
21. Delaunay property and proximity results of the L-algorithm for digital plane probing.
- Author: Lu, Jui-Ting, Roussillon, Tristan, Lachaud, Jacques-Olivier, and Coeurjolly, David
- Subjects: *GEOMETRIC surfaces; *POINT set theory; *ALGORITHMS
- Abstract:
When processing the geometry of digital surfaces (boundaries of voxel sets), linear local structures such as pieces of digital planes play an important role. To capture such geometrical features, plane-probing algorithms have demonstrated their strength: starting from an initial triangle, the digital structure is locally probed to expand the triangle, approximating the plane parameters more and more precisely (and converging to the exact parameters for infinite digital planes). Among the different plane-probing algorithms, the L-algorithm is a variant that takes into account a generally larger neighborhood of points in its update process. We show in this paper that this algorithm has the advantage of guaranteeing the so-called Delaunay property of the set of probing points, which has interesting consequences: it provides a minimal basis of the plane and guarantees an as-local-as-possible computation. [ABSTRACT FROM AUTHOR]
- Published: 2024
- Full Text: View/download PDF
22. An algorithm for the secure total domination problem in proper interval graphs.
- Author: Araki, Toru and Aita, Yasufumi
- Subjects: *POLYNOMIAL time algorithms; *DOMINATING set; *ALGORITHMS
- Abstract:
A subset S of vertices of G is a total dominating set if, for any vertex v, there is a vertex in S adjacent to v. A total dominating set S is a secure total dominating set if, for any vertex v ∉ S, there is a vertex u ∈ S such that uv is an edge and (S ∖ {u}) ∪ {v} is also a total dominating set. In this paper, we design an O(m)-time algorithm for computing a minimum secure total dominating set in a proper interval graph, where m is the number of edges. [ABSTRACT FROM AUTHOR]
- Published: 2024
- Full Text: View/download PDF
23. Optimal analysis for bandit learning in matching markets with serial dictatorship.
- Author: Wang, Zilong and Li, Shuai
- Subjects: *TIME perspective; *COMPUTER science; *ROBBERS; *REGRET; *ALGORITHMS
- Abstract:
The problem of two-sided matching markets is well-studied in computer science and economics, owing to its diverse applications across numerous domains. Since market participants are usually uncertain about their preferences in various online matching platforms, an emerging line of research is dedicated to the online setting where participants on one side (players) learn their unknown preferences through multiple rounds of interactions with the other side (arms). Sankararaman et al. [23] provide an Ω(N log(T)/Δ² + K log(T)/Δ) regret lower bound for this problem under the serial dictatorship assumption, where N is the number of players, K (≥ N) is the number of arms, Δ is the minimum reward gap across players and arms, and T is the time horizon. Serial dictatorship assumes that all arms have the same preferences, which is common in reality when participants on one side have a unified evaluation standard. Recently, the work of Kong and Li [10] proposes the ET-GS algorithm and achieves an O(K log(T)/Δ²) regret upper bound, the best upper bound attained so far. Nonetheless, a gap between the lower and upper bounds, ranging from N to K, persists. It remains unclear whether the lower bound or the upper bound needs to be improved. In this paper, we propose a multi-level successive selection algorithm that obtains an O(N log(T)/Δ² + K log(T)/Δ) regret bound when the market satisfies serial dictatorship. To the best of our knowledge, we are the first to propose an algorithm that matches the lower bound in the problem of matching markets with bandits. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. A deterministic approximation algorithm for metric triangle packing.
- Author
-
Zhao, Jingyang and Xiao, Mingyu
- Subjects
- *
APPROXIMATION algorithms , *DETERMINISTIC algorithms , *COMPLETE graphs , *ALGORITHMS , *TRIANGLES - Abstract
Given an edge-weighted metric complete graph with n vertices, the maximum weight metric triangle packing problem is to find a set of n / 3 vertex-disjoint triangles with the total weight of all triangles in the packing maximized. Several simple methods can lead to a 2/3-approximation ratio. However, this barrier is not easy to break. Chen et al. proposed a randomized approximation algorithm with an expected ratio of (0.66768 − ε) for any constant ε > 0. In this paper, we improve the approximation ratio to (0.66835 − ε). Furthermore, we can derandomize our algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. New approximation algorithms for the heterogeneous weighted delivery problem.
- Author
-
Bilò, Davide, Gualà, Luciano, Leucci, Stefano, Proietti, Guido, and Rossi, Mirko
- Subjects
- *
APPROXIMATION algorithms , *WEIGHTED graphs , *POLYNOMIAL time algorithms , *ENERGY consumption , *ALGORITHMS - Abstract
• We study the heterogeneous weighted delivery (HWD) problem, which was introduced in [Bärtschi et al., STACS'17]. • We efficiently 8-approximate HWD with k = O(log n) agents, closing an open problem in [Bärtschi et al., ATMOS'17]. • We design an O(k)-approximation algorithm that runs in polynomial time regardless of the value of k. • We design a polynomial-time Õ(log³ n)-approximation algorithm. • We show that the HWD problem is 36-approximable in polynomial time when each agent has one of two possible consumption rates. We study the heterogeneous weighted delivery (HWD) problem introduced in [Bärtschi et al., STACS'17] where k heterogeneous mobile agents (e.g., robots, vehicles, etc.), initially positioned on vertices of an n-vertex edge-weighted graph G, have to deliver m messages. Each message is initially placed on a source vertex of G and needs to be delivered to a target vertex of G. Each agent can move along the edges of G and carry at most one message at any time. Each agent has a rate of energy consumption per unit of traveled distance, and the goal is that of delivering all messages using the minimum overall amount of energy. This problem has been shown to be NP-hard even when k = 1, and is 4ρ-approximable where ρ is the ratio between the maximum and minimum energy consumption rates of the agents. In this paper, we provide approximation algorithms with approximation ratios independent of the energy consumption rates. First, we design a polynomial-time 8-approximation algorithm for k = O(log n), closing a problem left open in [Bärtschi et al., ATMOS'17]. This algorithm can be turned into an O(k)-approximation algorithm that always runs in polynomial time, regardless of the value of k. Then, we show that the HWD problem is 36-approximable in polynomial time when each agent has one of two possible consumption rates. Finally, we design a polynomial-time Õ(log³ n)-approximation algorithm for the general case. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. APX-hardness and approximation for the k-burning number problem.
- Author
-
Mondal, Debajyoti, Rajasingh, Angelin Jemima, Parthiban, N., and Rajasingh, Indra
- Subjects
- *
CONTAGION (Social psychology) , *NP-hard problems , *APPROXIMATION algorithms , *DIFFUSION processes , *INFORMATION processing , *GENERALIZATION , *ALGORITHMS - Abstract
Consider an information diffusion process on a graph G that starts with k > 0 burnt vertices, and at each subsequent step, burns the neighbors of the currently burnt vertices, as well as k other unburnt vertices. The k-burning number of G is the minimum number of steps b_k(G) such that all the vertices can be burned within b_k(G) steps. Note that the last step may have fewer than k unburnt vertices available, in which case all of them are burned. The 1-burning number coincides with the well-known burning number, which was proposed to model the spread of social contagion. The generalization to the k-burning number allows us to examine different worst-case contagion scenarios by varying the spread factor k. In this paper we prove that computing the k-burning number is APX-hard, for any fixed constant k. We then give an O((n + m) log n)-time 3-approximation algorithm for computing the k-burning number, for any k ≥ 1, where n and m are the number of vertices and edges, respectively. Finally, we show that even if the burning sources are given as an input, computing a burning sequence itself is an NP-hard problem. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
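As a minimal illustration of the burning process defined in the abstract above, the sketch below simulates a burning schedule and brute-forces the burning number for k = 1 on a tiny path. It is exponential and purely illustrative (helper names are ours), not the paper's 3-approximation algorithm:

```python
from itertools import permutations

def steps_to_burn(adj, schedule):
    # schedule[i] = set of vertices newly ignited at step i + 1.
    burnt = set()
    for step, new in enumerate(schedule, start=1):
        burnt |= {u for v in burnt for u in adj[v]}  # fire spreads to neighbors
        burnt |= new                                  # then k new sources ignite
        if len(burnt) == len(adj):
            return step
    return None

def burning_number(adj, k=1):
    # Brute force over orderings of distinct sources; tiny graphs only.
    n = len(adj)
    for b in range(1, n + 1):
        for order in permutations(adj, min(n, b * k)):
            schedule = [set(order[i * k:(i + 1) * k]) for i in range(b)]
            if steps_to_burn(adj, schedule) == b:
                return b

path4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(burning_number(path4))
```

For the 4-vertex path, igniting vertex 1 and then vertex 3 burns everything in 2 steps, matching the known ⌈√n⌉ burning number of paths.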
27. Almost optimal algorithms for diameter-optimally augmenting trees.
- Author
-
Bilò, Davide
- Subjects
- *
METRIC spaces , *TREE graphs , *TREES , *ALGORITHMS , *PROBLEM solving - Abstract
We consider the problem of augmenting an n-vertex tree with one shortcut in order to minimize the diameter of the resulting graph. The tree is embedded in an unknown space and we have access to an oracle that, when queried on a pair of vertices u and v, reports the weight of the shortcut (u, v) in constant time. Previously, the problem was solved in O(n² log³ n) time for general weights [Oh and Ahn, ISAAC 2016], in O(n² log n) time for trees embedded in a metric space [Große et al., Int. J. FCS], and in O(n log n) time for paths embedded in a metric space [Wang, Comput. Geom.]. Very recently, an algorithm for general weights requiring O(n² log n) time and O(n) space was developed [Wang and Zhao, Theor. Comp. Science]. Finally, a (1 + ε)-approximation algorithm running in O(n + 1/ε³) time has been designed for paths embedded in R^d, for constant values of d [Große et al., Int. J. FCS]. In this paper we design an asymptotically time-optimal algorithm for trees and general edge weights that requires O(n²) time and O(n log n) space. Moreover, for trees embedded in a metric space, we design (i) an exact O(n log n)-time algorithm and (ii) a (1 + ε)-approximation algorithm that runs in O(n + ε⁻¹ log ε⁻¹) time. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
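A naive baseline for the problem above (not the paper's algorithm; all names are illustrative) queries the oracle for every candidate shortcut and recomputes the diameter, using the fact that a shortest path in the augmented graph uses the single shortcut at most once:

```python
from itertools import combinations

INF = float("inf")

def diameter_with_best_shortcut(n, tree_edges, weight):
    # weight(u, v): the oracle giving the shortcut weight for any pair.
    base = [[INF] * n for _ in range(n)]
    for v in range(n):
        base[v][v] = 0.0
    for u, v, w in tree_edges:
        base[u][v] = base[v][u] = w
    for k in range(n):                     # all-pairs distances in the tree
        for i in range(n):
            for j in range(n):
                if base[i][k] + base[k][j] < base[i][j]:
                    base[i][j] = base[i][k] + base[k][j]
    best = (INF, None)
    for u, v in combinations(range(n), 2):
        w = weight(u, v)
        # With shortcut (u, v) added, a shortest path uses it at most once.
        diam = max(min(base[i][j],
                       base[i][u] + w + base[v][j],
                       base[i][v] + w + base[u][j])
                   for i in range(n) for j in range(n))
        if diam < best[0]:
            best = (diam, (u, v))
    return best

# Path 0-1-2-3-4 with unit weights; oracle returns 1.0 for every shortcut.
tree = [(i, i + 1, 1.0) for i in range(4)]
print(diameter_with_best_shortcut(5, tree, lambda u, v: 1.0))
```

On this 5-vertex path the search finds shortcut (0, 4), closing the path into a 5-cycle with diameter 2.0; the brute force costs O(n³ + n⁴) against the paper's O(n²).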
28. Computing the longest common almost-increasing subsequence.
- Author
-
Bhuiyan, Mohammad Tawhidul Hasan, Alam, Muhammad Rashed, and Rahman, M. Sohel
- Subjects
- *
PROBLEM solving - Abstract
In this paper, we revisit the problem of computing a longest common almost-increasing subsequence (LCAIS) where, given two input sequences, the goal is to compute a common subsequence that is 'almost' increasing. Here the concept of an almost increasing subsequence offers an interesting relaxation of the increasing condition. This problem has been studied in the literature, albeit with some constraints. Here, we present a number of algorithmic approaches to solve the problem more generally and efficiently. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
29. A simple linear time algorithm to solve the MIST problem on interval graphs.
- Author
-
Li, Peng, Shang, Jianhui, and Shi, Yi
- Subjects
- *
GRAPH connectivity , *PROBLEM solving , *ALGORITHMS , *GRAPH algorithms , *TELECOMMUNICATION systems , *POLYNOMIAL time algorithms - Abstract
Motivated by the design of cost-efficient communication networks, the Maximum Internal Spanning Tree (MIST) problem asks for a spanning tree of a connected graph with the maximum number of internal vertices. In 2018, Xingfu Li et al. presented a polynomial algorithm to find a maximum internal spanning tree of a connected interval graph. Based on the structure of normal orderings on interval graphs, we present a simple linear time algorithm that solves the problem when restricted to connected interval graphs. The proof provides additional insight into the linear time algorithm on interval graphs. • The MIST problem is solvable by a simple linear time algorithm when restricted to connected interval graphs. • The structure of normal orderings on interval graphs is used to solve the Maximum Internal Spanning Tree problem. • The proof of the main results is easier to understand, with proper cases to prove. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
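For intuition on the objective above: internal vertices are exactly the vertices of degree at least two in the spanning tree. The brute-force sketch below (illustrative only, nothing like the paper's linear time algorithm) enumerates spanning trees of a tiny interval graph and maximizes the internal-vertex count:

```python
from itertools import combinations

def is_spanning_tree(n, edges):
    # Union-find acyclicity check on an edge set of size n - 1.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False        # cycle
        parent[ru] = rv
    return len(edges) == n - 1

def max_internal_vertices(n, edges):
    best = 0
    for T in combinations(edges, n - 1):
        if is_spanning_tree(n, T):
            deg = [0] * n
            for u, v in T:
                deg[u] += 1
                deg[v] += 1
            best = max(best, sum(d >= 2 for d in deg))
    return best

# Interval graph of intervals [0,2], [1,3], [2,4], [3,5].
edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
print(max_internal_vertices(4, edges))
```

The graph has a Hamiltonian path 0-1-2-3, so the optimum is 2 internal vertices, the best possible for any tree on four vertices.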
30. The work function algorithm for the paging problem.
- Author
-
Zhang, Wenming, Hu, Huan, Cheng, Yongxi, Li, Hongmei, and Wang, Haizhen
- Subjects
- *
ALGORITHMS , *ONLINE algorithms , *DECISION making - Abstract
The paging problem is that of deciding which pages to keep in a memory of k pages in order to minimize the number of page faults in a two-level store. It is a special case of the famous k-server problem. In this paper, we first show that WFA is kM/m-competitive for the k-server problem where the distances between two points are all in [m, M] (0 < m ≤ M), and thus WFA is k-competitive for the paging problem. We further show that the well-known FIFO algorithm for the paging problem is just a special case of WFA. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
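As a concrete illustration of the paging model above, the following sketch simulates the FIFO rule, which the abstract relates to WFA, and counts page faults. It is a plain simulator (names are ours), not an implementation of WFA itself:

```python
from collections import deque

def fifo_faults(requests, k):
    # Simulate FIFO paging with a cache of k pages; count page faults.
    cache, queue, faults = set(), deque(), 0
    for page in requests:
        if page not in cache:
            faults += 1
            if len(cache) == k:          # evict the oldest resident page
                cache.discard(queue.popleft())
            cache.add(page)
            queue.append(page)
    return faults

print(fifo_faults([1, 2, 3, 1, 4, 1, 2, 3], 3))  # 7 faults, 1 hit
```

k-competitiveness means FIFO never incurs more than k times the faults of an optimal offline strategy, up to an additive constant.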
31. Efficient algorithms for ride-hitching in UAV travelling.
- Author
-
Li, Songhua, Li, Minming, Duan, Lingjie, and Lee, Victor C.S.
- Subjects
- *
ONLINE algorithms , *TRAVEL time (Traffic engineering) , *GREEDY algorithms , *DRONE aircraft , *ALGORITHMS - Abstract
The unmanned aerial vehicle (UAV) has emerged as a promising solution for providing delivery and other mobile services to customers rapidly, yet it drains its stored energy quickly when travelling and (even if solar-powered) it takes time to recharge on the way before reaching the destination. To address this issue, existing works focus more on the UAV's offline path planning with designated system vehicles providing charging service. Nevertheless, in some emergency cases and rural areas where system vehicles are not available, public vehicles can provide more cost-saving and feasible service in UAV travelling. In this paper, we explore, from the perspective of online algorithms, how a single UAV can save flying distance by exploiting public vehicles in order to minimize its overall travel time. For the offline setting where the information of future vehicles is known far ahead of time, we present an O(n²)-time shortest-path-like optimal solution by delicately transforming the problem into a graph capturing both time and energy constraints. For the online setting where public vehicles appear in real time and only inform the UAV of their trip information a certain time Δt beforehand, we first construct lower bounds on the competitive ratio for different Δt. Then, we propose two online algorithms: a greedy algorithm MyopicHitching that greedily hitches truck rides and an improved algorithm Δt-Adaptive that further tolerates a waiting time in hitching a ride. Our theoretical analysis shows that Δt-Adaptive is asymptotically optimal in the sense that its ratio approaches the proposed lower bounds as Δt increases. • Mathematical formulation of ride-hitching in UAV travelling. • An O(n²)-time shortest-path-like optimal offline algorithm. • Lower bounds on the competitive ratio of the problem. • Identification of a defect of a greedy online algorithm. • A near-optimal time-adaptive online algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
32. Approximation algorithm for prize-collecting sweep cover with base stations.
- Author
-
Liang, Wei and Zhang, Zhao
- Subjects
- *
APPROXIMATION algorithms , *ALGORITHMS - Abstract
• Propose a new problem: the prize-collecting sweep cover problem. • Give a 5-LMP algorithm for prize-collecting sweep cover assuming a constant number of base stations. • Give a 2-LMP algorithm for the rooted prize-collecting forest with k components problem. In a sweep cover problem, positions of interest (PoIs) are required to be visited periodically by mobile sensors. In this paper, we propose a new sweep cover problem: the prize-collecting sweep cover problem (PCSC), in which a penalty is incurred by those PoIs which are not sweep-covered, and the goal is to minimize the covering cost plus the penalty. Assuming that every mobile sensor has to be linked to some base station, and that the number of base stations is upper bounded by a constant, we present a 5-LMP (Lagrangian Multiplier Preserving) algorithm. As a stepping stone, we propose the prize-collecting forest with k components problem (PCF k), which might be interesting in its own right, and present a 2-LMP algorithm for rooted PCF k. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. One-visibility cops and robber on trees: Optimal cop-win strategies.
- Author
-
Yang, Boting
- Subjects
- *
ROBBERS , *TREES , *ALGORITHMS - Abstract
• For trees, we introduce a key structure, called road, for characterising optimal cop-win strategies. • We give an O (n log n) time algorithm to compute an optimal cop-win strategy for a tree with n vertices. • We establish relations between zero-visibility and one-visibility copnumbers on trees. In the one-visibility cops and robber game on a graph, the robber is visible to the cops only when the robber is in the closed neighbourhood of the vertices occupied by the cops. The one-visibility copnumber of a graph is the minimum number of cops required to capture the robber on the graph. In this paper, we investigate the one-visibility cops and robber game on trees. For trees, we introduce a key structure, called road, for characterising optimal cop-win strategies. We give an O (n log n) time algorithm to compute an optimal cop-win strategy for a tree with n vertices. We also establish relations between zero-visibility and one-visibility copnumbers on trees. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
34. Efficient computation of arbitrary control dependencies.
- Author
-
Léchenet, Jean-Christophe, Kosmatov, Nikolai, and Le Gall, Pascale
- Subjects
- *
DIRECTED graphs , *GRAPH theory , *ALGORITHMS - Abstract
In 2011, Danicic et al. introduced an elegant generalization of the notion of control dependence for any directed graph. They also proposed an algorithm computing the weak control-closure of a subset of graph vertices and gave a paper-and-pencil proof of its correctness. We have performed a machine-checked proof of this algorithm in the Coq proof assistant. This paper also presents a novel, more efficient algorithm called lDFS to compute weak control-closure, which benefits from the intermediate propagation results of previous iterations in order to accelerate the following ones. This optimization makes the design of the algorithm more complex and requires subtle loop invariants for its proof. lDFS has been formalized and mechanically proven in the Why3 verification tool. To investigate the impact of several possible optimizations and compare the performances of different versions of the algorithm, we perform experiments on arbitrarily generated graphs with up to hundreds of thousands of vertices. They demonstrate that the proposed algorithm remains practical for real-life programs and significantly outperforms all considered versions of Danicic's initial technique. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
35. Better guarantees for k-median with service installation costs.
- Author
-
Zhang, Zhen, Zhou, Yipeng, and Yu, Shaoqian
- Subjects
- *
APPROXIMATION algorithms , *COST , *ALGORITHMS - Abstract
k-median with service installation costs is a frequently encountered problem in applications involving multiple service requirements, which generalizes the standard k -median problem in that each client is associated with a specific service and can only be connected to the facilities where the service is installed. In this paper, we give a (13.349 + ϵ) -approximation algorithm for the k -median with service installation costs problem, improving upon the previous best approximation ratio of 18. We propose a new deterministic rounding approach to deal with the challenges caused by the service requirements, which is the crucial step in getting the improved ratio. • We give a (13.349 + ϵ) -approximation algorithm for the k -median with service installation costs problem. • We obtain a well-behaved fractional solution to the problem based on the framework of Lagrangian relaxation. • We give a new deterministic rounding approach to deal with the challenges caused by the service requirements. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
36. A linear time algorithm for connected p-centdian problem on block graphs.
- Author
-
Nguyen, Kien Trung, Teh, Wen Chean, Hung, Nguyen Thanh, and Nguyen-Thu, Huong
- Subjects
- *
NP-complete problems , *CHARTS, diagrams, etc. , *ALGORITHMS - Abstract
We study the problem of finding a set of p connected vertices in a block graph that minimizes the convex combination of the median and the center objective functions. This problem is called the connected p-centdian problem on block graphs. Throughout this paper, we focus on unweighted block graphs. For block graphs with two different edge lengths, the problem is NP-complete. For block graphs with uniform edge lengths, we focus on a special case of the connected p-centdian problem, namely the one where the median and the center functions receive equal weight. For this specific problem, we consider a lexicographic order based on certain labels of the vertices and prove that there exists a connected p-centdian that contains a predetermined 1-centdian of the underlying graph. Applying this result, we develop a linear time algorithm for the problem. Finally, the problem under a general convex combination of the median and the center functions is also discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
37. A method to calculate the number of spanning connected unicyclic(bicyclic) subgraphs in 2-separable networks.
- Author
-
Liang, Jing, Zhao, Haixing, and Yin, Jun
- Subjects
- *
ENTROPY , *ALGORITHMS - Abstract
The numbers of spanning connected unicyclic subgraphs and spanning connected bicyclic subgraphs are essential parameters in studying network reliability, yet these two parameters have hardly been studied. Only in 1983 did Liu study the enumeration of spanning connected unicyclic (bicyclic) subgraphs, mainly via the determinant of the cycle adjacency matrix. For large networks, however, the calculation of this determinant is extremely complex. In this paper, we present a reduction method to compute the number of spanning connected unicyclic (bicyclic) subgraphs of 2-separable networks. As an application, we enumerate the spanning connected unicyclic (bicyclic) subgraphs of fractal scale-free networks by a linear algorithm. The exact expression for the number of spanning connected unicyclic (bicyclic) subgraphs is obtained. On this basis, the entropy of the number of spanning connected unicyclic (bicyclic) subgraphs is determined. This research provides helpful guidance for computing the number of spanning connected unicyclic (bicyclic) subgraphs of general 2-connected networks. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
38. An efficient algorithm for the longest common palindromic subsequence problem.
- Author
-
Liang, Ting-Wei, Yang, Chang-Biau, and Huang, Kuo-Si
- Subjects
- *
ALGORITHMS , *DYNAMIC programming , *PROBLEM solving , *PALINDROMES , *POLYNOMIAL time algorithms - Abstract
The longest common palindromic subsequence (LCPS) problem is a variant of the longest common subsequence (LCS) problem. Given two input sequences A and B, the LCPS problem is to find a common subsequence of A and B that is a palindrome of maximal length. The LCPS problem was first proposed by Chowdhury et al. (2014) [9], who gave a dynamic programming algorithm with O(m²n²) time, where m and n denote the lengths of A and B, respectively. This paper proposes a diagonal-based algorithm for solving the LCPS problem with O(L(m − L)R log n) time and O(RL) space, where R denotes the number of match pairs between A and B, and L denotes the LCPS length. In our algorithm, the 3-dimensional minima finding algorithm is invoked to overcome the domination problem. As experimental results show, our algorithm is practically efficient compared with some previously published algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
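For very small inputs, the LCPS definition above can be checked by brute force: enumerate palindromic subsequences of the shorter string, longest first, and test them against the other string. This exponential sketch (our own names) is only a correctness oracle, not the paper's diagonal-based algorithm:

```python
from itertools import combinations

def is_subsequence(s, t):
    # Membership on an iterator consumes it, enforcing left-to-right order.
    it = iter(t)
    return all(c in it for c in s)

def lcps_brute(a, b):
    # Enumerate subsequences of the shorter string; tiny inputs only.
    if len(a) > len(b):
        a, b = b, a
    for r in range(len(a), 0, -1):
        for idx in combinations(range(len(a)), r):
            s = "".join(a[i] for i in idx)
            if s == s[::-1] and is_subsequence(s, b):
                return s
    return ""

print(lcps_brute("abcba", "abacb"))
```

For "abcba" and "abacb" the answer is "aba": no common palindromic subsequence of length 4 or 5 exists.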
39. Connectivity and constructive algorithms of disjoint paths in dragonfly networks.
- Author
-
Wu, Suying, Fan, Jianxi, Cheng, Baolei, Yu, Jia, and Wang, Yan
- Subjects
- *
DRAGONFLIES , *HIGH performance computing , *ALGORITHMS - Abstract
Dragonfly networks have been widely used in current High Performance Computing (HPC) systems due to their low global network diameter and other communication-performance advantages such as modularity and cost-effectiveness. The original definition of the dragonfly network was very loose on account of its uncertain and diversified global link arrangements. In this paper, we study the logical structure of the dragonfly network, which can be treated as a compound graph of complete graphs. Firstly, we give the general definition of the dragonfly network, named DF(n, h, g), and the specific definition of the dragonfly network under the relative global link arrangement, named D(n, h). Then, we prove that the connectivity of D(n, h) is n − 1 + h. Finally, we propose an O(n) algorithm to construct disjoint paths between any two distinct vertices in D(n, h) and show that the maximum length of these disjoint paths is no more than 7. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
40. On the average-case complexity of pattern matching with wildcards.
- Author
-
Barton, Carl
- Subjects
- *
HUFFMAN codes , *ALGORITHMS - Abstract
Pattern matching with wildcards is a string matching problem with the goal of finding all factors of a text t of length n that match a pattern x of length m, where wildcards (characters that match everything) may be present. In this paper we present a number of complexity results and fast average-case algorithms for pattern matching where wildcards are allowed in the pattern; the results are easily adapted to the case where wildcards are allowed in the text as well. We analyse the average-case complexity of these algorithms and derive non-trivial time bounds. These are the first results on the average-case complexity of pattern matching with wildcards which provide a provable separation in time complexity between exact pattern matching and pattern matching with wildcards. We introduce the wc-period of a string, which is the period of the binary mask x_b where x_b[i] = a iff x[i] ≠ ϕ and b otherwise. We denote the length of the wc-period of a string x by ▪. We show the following results for constant 0 < ϵ < 1 and a pattern x of length m and g wildcards with ▪, where the prefix of length p contains g_p wildcards: • If lim_{m→∞} g_p/p = 0, there is an optimal algorithm running in O(n log_σ(m)/m) time on average. • If lim_{m→∞} g_p/p = 1 − ϵ, there is an algorithm running in O(n log_σ(m) log²(p)/m) time on average. • If lim_{m→∞} g/m = lim_{m→∞} 1 − f(m) = 1, any algorithm takes at least Ω(n log_σ(m)/f(m)) time on average. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
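Under the assumption that the wc-period is the smallest period of the binary mask x_b defined above, it can be computed with a standard KMP border (failure) array. A hedged sketch, using '?' to stand in for the wildcard symbol ϕ:

```python
def wc_period(pattern, wildcard="?"):
    # Binary mask: 'a' where the symbol is fixed, 'b' where it is a wildcard.
    mask = "".join("b" if c == wildcard else "a" for c in pattern)
    # Smallest period = len(mask) - (longest proper border of mask),
    # computed with the classic KMP failure function.
    fail = [0] * len(mask)
    k = 0
    for i in range(1, len(mask)):
        while k and mask[i] != mask[k]:
            k = fail[k - 1]
        if mask[i] == mask[k]:
            k += 1
        fail[i] = k
    return len(mask) - fail[-1]

print(wc_period("ab?ab?ab"))
```

The mask of "ab?ab?ab" is "aabaabaa", whose longest border has length 5, giving wc-period 3, matching the repeating unit "ab?".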
41. Faster algorithm for pathwidth one vertex deletion.
- Author
-
Tsur, Dekel
- Subjects
- *
ALGORITHMS , *GRAPH algorithms - Abstract
In the Pathwidth One Vertex Deletion (POVD) problem the input is a graph G and an integer k, and the goal is to decide whether there is a set of at most k vertices whose removal from G results in a graph with pathwidth at most 1. In this paper we give an algorithm for POVD whose running time is O*(3.888^k). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. Fragile complexity of adaptive algorithms.
- Author
-
Bose, Prosenjit, Cano, Pilar, Fagerberg, Rolf, Iacono, John, Jacob, Riko, and Langerman, Stefan
- Subjects
- *
ALGORITHMS - Abstract
The fragile complexity of a comparison-based algorithm is f (n) if each input element participates in O (f (n)) comparisons. In this paper, we explore the fragile complexity of algorithms adaptive to various restrictions on the input, i.e., algorithms with a fragile complexity parameterized by a quantity other than the input size n. We show that searching for the predecessor in a sorted array has fragile complexity Θ (log k) , where k is the rank of the query element, both in a randomized and a deterministic setting. For predecessor searches, we also show how to optimally reduce the amortized fragile complexity of the elements in the array. We also prove the following results: Selecting the k th smallest element has expected fragile complexity O (log log k) for the element selected. Deterministically finding the minimum element has fragile complexity Θ (log (Inv)) and Θ (log (Runs)) , where Inv is the number of inversions in a sequence and Runs is the number of increasing runs in a sequence. Deterministically finding the median has fragile complexity O (log (Runs) + log log n) and Θ (log (Inv)). Deterministic sorting has fragile complexity Θ (log (Inv)) but it has fragile complexity Θ (log n) regardless of the number of runs. • Fragile complexity parameterized by other quantity than the input size. • Predecessor has fragile complexity Θ (log k). • Randomized selection has expected fragile complexity O (log log k). • Minimum has fragile complexity Θ (log (Inv)) and Θ (log (Runs)). • Median has fragile complexity Θ (log (Inv)) and O (log (Runs) + log log n). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
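The parameters Inv and Runs used in the fragile-complexity bounds above are easy to compute directly. A small sketch for clarity (quadratic inversion counting; names are ours):

```python
def inversions(seq):
    # Number of pairs (i, j), i < j, with seq[i] > seq[j]; O(n^2) for clarity.
    return sum(seq[i] > seq[j]
               for i in range(len(seq)) for j in range(i + 1, len(seq)))

def runs(seq):
    # Number of maximal weakly increasing runs = 1 + number of descents.
    return 1 + sum(seq[i] > seq[i + 1] for i in range(len(seq) - 1))

seq = [3, 1, 4, 1, 5, 9, 2, 6]
print(inversions(seq), runs(seq))
```

A sorted sequence has Inv = 0 and Runs = 1, so the adaptive Θ(log(Inv)) and Θ(log(Runs)) bounds degrade gracefully toward easy inputs.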
43. In-place initializable arrays.
- Author
-
Katoh, Takashi and Goto, Keisuke
- Subjects
- *
DATA structures , *ALGORITHMS - Abstract
An initializable array is an array that supports read and write operations on any element and initialization of the entire array. This paper proposes a simple in-place algorithm to implement an initializable array of length N containing entries of ℓ ∈ O(w) bits in Nℓ + 1 bits on the word RAM model with word size w bits, i.e., the proposed array requires only 1 extra bit on top of a normal array of length N with ℓ-bit entries. Our algorithm supports all three operations in constant worst-case time; that is, it runs in-place using at most a constant number of O(w)-bit words during each operation. The time and space complexities are optimal, since it was already proven that there is no implementation of an initializable array with no extra bit supporting all the operations in constant worst-case time (Hagerup and Kammer, 2017 [12]). Our algorithm significantly improves upon the best algorithm presented in earlier studies (Navarro, 2014 [5]), which uses N + o(N) extra bits to support all the operations in constant worst-case time. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
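For contrast with the paper's 1-extra-bit in-place scheme, the classic folklore initializable array spends O(N) extra words (a stack of certified positions) to get constant-time read, write, and init. This sketch implements only that folklore trick, not the paper's algorithm:

```python
class InitializableArray:
    """Folklore constant-time-initializable array with O(N) extra words."""

    def __init__(self, n, default=0):
        self.data = [None] * n    # raw values, possibly stale
        self.when = [0] * n       # when[i]: stack slot certifying data[i]
        self.stack = []           # indices written since the last init
        self.default = default

    def _valid(self, i):
        # data[i] is trustworthy iff its certificate points back to i.
        w = self.when[i]
        return w < len(self.stack) and self.stack[w] == i

    def read(self, i):
        return self.data[i] if self._valid(i) else self.default

    def write(self, i, value):
        if not self._valid(i):
            self.when[i] = len(self.stack)
            self.stack.append(i)
        self.data[i] = value

    def init(self, default):
        # O(1): dropping the stack invalidates every certificate at once.
        self.stack = []
        self.default = default

a = InitializableArray(5)
a.write(2, 42)
print(a.read(2), a.read(0))
a.init(7)
print(a.read(2))
```

The mutual pointers between `when` and `stack` are what let `init` run in constant time; the paper's contribution is to shrink this bookkeeping to a single extra bit.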
44. A new algorithm for computing the nearest polynomial to multiple given polynomials via weighted ℓ2,q-norm minimization and its complex extension.
- Author
-
Hu, Wenyu, Huang, Huiying, Zhang, Rong, Huang, Jinhong, and Yi, Yun
- Subjects
- *
POLYNOMIALS , *ALGORITHMS , *PROBLEM solving , *CONCAVE functions - Abstract
A new algorithm is proposed in this paper for computing the nearest polynomial to multiple given polynomials with a given zero in the real case, where the distance between polynomials is defined by the weighted ℓ_{2,q} norm (0 < q ≤ 2). First, the problem is formulated as a univariate constrained non-convex minimization problem, where prior information on the coefficients of the polynomial can be embedded by selecting proper weights. Then, an iteratively reweighted algorithm is designed to solve the obtained problem, and its convergence and rate of convergence are demonstrated uniformly for all q in (0, 2]. Since all existing methods for computing the nearest polynomial to multiple given polynomials with a given zero are limited to the real case, we extend the results to the complex case. Finally, two representative examples that separately compute the nearest real and complex polynomials are presented to show the effectiveness of the proposed algorithm. • A weighted ℓ_{2,q}-norm based minimization problem is constructed with prior information on the polynomial embedded. • After converting the problem into a univariate problem, an iteratively reweighted algorithm is proposed to solve it. • Using properties of a concave function, the convergence and rate of convergence of the proposed algorithm are established. • This is the first study of computing the nearest polynomial to multiple given polynomials with a given zero in the complex case. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Self-stabilizing synchronous unison in directed networks.
- Author
-
Altisen, Karine, Cournier, Alain, Defalque, Geoffrey, and Devismes, Stéphane
- Subjects
- *
DIRECTED graphs , *UNDIRECTED graphs , *TOPOLOGY , *SYNCHRONIZATION , *ALGORITHMS - Abstract
Self-stabilization is a general paradigm that characterizes the ability of a distributed system to recover from transient faults. Since its introduction by Dijkstra in 1974, self-stabilization has been successfully applied to efficiently solve many networking tasks. However, most of the literature focuses on bidirectional networks, whereas in today's networks, such as WSNs, some communication channels may be one-way only. Considering such network topologies, a.k.a. directed graphs, makes self-stabilization more complicated, and sometimes even impossible. In this paper, we investigate the gap, in terms of requirements and efficiency, between considering a directed graph instead of an undirected one as the network topology for a self-stabilizing algorithm. Our case study is a variant of a synchronous unison algorithm proposed by Arora et al.; synchronous unison is a clock synchronization problem. • We address self-stabilization of dynamic problems in anonymous directed networks. • We propose a topology-related necessary condition for solving synchronous unison in directed networks. • We present a simple self-stabilizing synchronous unison algorithm for directed networks. • We study conditions under which the proposed algorithm is self-stabilizing, and at what cost. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
46. A hybrid modified PSO algorithm for the inverse p-median location problem in fuzzy random environment.
- Author
-
Taghikhani, Sepideh, Baroughi, Fahimeh, and Alizadeh, Behrooz
- Subjects
- *
PARTICLE swarm optimization , *VALUE at risk , *ALGORITHMS , *RANDOM variables , *NP-hard problems - Abstract
This paper considers the inverse p -median location problem with variable edge lengths and variable vertex weights on general graphs, in which the modification costs are fuzzy random variables. We present a model for the problem in a fuzzy random environment in which the objective value is computed by the conditional value-at-risk criterion. Then, we show that the problem is NP-hard under this criterion. Therefore, a new hybrid modified particle swarm optimization algorithm is proposed to obtain an approximate optimal solution of the proposed model. Finally, computational experiments are given to illustrate the high efficiency of the proposed algorithm. [ABSTRACT FROM AUTHOR]
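For readers unfamiliar with the metaheuristic named here, a bare-bones particle swarm optimization loop looks as follows. This is a generic 1-D sketch under standard parameter choices; the paper's hybrid modified PSO adds problem-specific operators for the fuzzy random inverse p-median model that are not reflected here:

```python
import random

# Bare-bones particle swarm optimization minimizing f over [lo, hi].
# Each particle tracks a position x[i], a velocity v[i], and its personal
# best; the swarm shares a global best that attracts all particles.

def pso(f, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n)]
    v = [0.0] * n
    pbest = x[:]                                  # personal best positions
    gbest = min(x, key=f)                         # global best position
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            v[i] = (w * v[i] + c1 * r1 * (pbest[i] - x[i])
                    + c2 * r2 * (gbest - x[i]))   # inertia + two attractions
            x[i] = min(max(x[i] + v[i], lo), hi)  # clamp to the search box
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
        gbest = min(pbest, key=f)
    return gbest
```

"Hybrid modified" variants typically alter the velocity update or combine PSO with a local search; the skeleton of particles, personal bests, and a global best remains the same.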
- Published
- 2024
- Full Text
- View/download PDF
47. Complexity of a root clustering algorithm for holomorphic functions.
- Author
-
Batra, Prashant and Sharma, Vikram
- Subjects
- *
ALGORITHMS , *EXPONENTIAL functions , *TRIGONOMETRIC functions , *HOLOMORPHIC functions , *TREE size , *COMPUTER workstation clusters , *TRANSCENDENTAL functions , *SUBDIVISION surfaces (Geometry) - Abstract
Approximating the roots of a holomorphic function in an input box is a fundamental problem in many domains. Most algorithms in the literature for solving this problem are conditional, i.e., they make simplifying assumptions, such as that all the roots are simple, that there are no roots on the boundary of the input box, or that the underlying machine model is a Real RAM. Root clustering is a generalization of the root approximation problem that allows for errors in the computation and makes no assumption on the multiplicity of the roots. An unconditional algorithm for computing a root clustering of a holomorphic function was given by Yap, Sagraloff and Sharma in 2013 [36]. They proposed a subdivision-based algorithm using effective predicates based on Pellet's test while avoiding any comparison with zeros (using soft zero comparisons instead). In this paper, we analyze the running time of their algorithm. We use the continuous amortization framework to derive an upper bound on the size of the subdivision tree. We specialize this bound to the case of polynomials and some simple transcendental functions such as the exponential and the trigonometric sine. We show that, unlike the case of polynomials, the algorithm takes exponential time even for these simple functions. We also derive a bound on the bit-precision used by the algorithm. To the best of our knowledge, this is the first such result for holomorphic functions. We introduce new geometric parameters, such as the relative growth of the function on the input box, for analyzing the algorithm. Thus, our estimates naturally generalize the known results for the case of polynomials. [ABSTRACT FROM AUTHOR]
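The core subroutine of such subdivision methods is deciding how many zeros a box contains. One classical way to do this, shown below purely for illustration, is the argument principle: the winding number of f around the box boundary equals the number of zeros inside, counted with multiplicity. The paper's algorithm instead uses effective Pellet-type predicates with soft comparisons, so this numeric test (and its sampling parameter) is only an assumption-laden stand-in:

```python
import cmath

# Count zeros of f (with multiplicity) inside the axis-aligned box
# [x0, x1] x [y0, y1] by numerically accumulating the change of arg f(z)
# along the boundary and dividing by 2*pi (the argument principle).
# Assumes no zero lies on or very near the boundary.

def winding(f, x0, x1, y0, y1, samples=2000):
    corners = [complex(x0, y0), complex(x1, y0),
               complex(x1, y1), complex(x0, y1)]
    total, prev = 0.0, cmath.phase(f(corners[0]))
    for k in range(4):                            # walk the four edges
        a, b = corners[k], corners[(k + 1) % 4]
        for j in range(1, samples + 1):
            ph = cmath.phase(f(a + (b - a) * j / samples))
            d = ph - prev
            while d > cmath.pi:                   # unwrap the phase jump
                d -= 2 * cmath.pi
            while d < -cmath.pi:
                d += 2 * cmath.pi
            total += d
            prev = ph
    return round(total / (2 * cmath.pi))
```

A subdivision algorithm would call such a test on a box, and recurse on the four quadrants whenever the count is positive but the box is still too large, which is exactly the tree whose size the continuous amortization framework bounds.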
- Published
- 2024
- Full Text
- View/download PDF
48. Algorithm and hardness results in double Roman domination of graphs.
- Author
-
Poureidi, Abolfazl
- Subjects
- *
DOMINATING set , *CHARTS, diagrams, etc. , *ALGORITHMS , *STATISTICAL decision making , *HARDNESS , *ROMANS , *COMPUTATIONAL complexity - Abstract
Let G = (V , E) be a graph. A double Roman dominating function (DRDF) of G is a function f : V → { 0 , 1 , 2 , 3 } such that (i) each vertex v with f (v) = 0 is adjacent to either a vertex u with f (u) = 3 or two vertices u 1 and u 2 with f (u 1) = f (u 2) = 2 , and (ii) each vertex v with f (v) = 1 is adjacent to a vertex u with f (u) > 1. The double Roman domination number of G is the minimum weight over all DRDFs on G , where the weight of a DRDF f on G is f (V) = ∑ v ∈ V f (v). In this paper, we first propose an algorithm to compute the double Roman domination number of an interval graph G = (V , E) in O (| V | + | E |) time, answering a problem posed in Banerjee et al. (2020) [2]. Next, we show that the decision problem associated with double Roman domination is NP-complete for split graphs. Finally, we show that the computational complexities of the Roman domination problem and the double Roman domination problem are different. [ABSTRACT FROM AUTHOR]
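The definition above translates directly into code. This is a brute-force check on a tiny graph, written straight from the abstract's conditions (i) and (ii); it is exponential in |V| and meant only to illustrate the definition, not the paper's linear-time interval-graph algorithm:

```python
from itertools import product

# adj[v] is the list of neighbors of vertex v; f[v] in {0, 1, 2, 3}.

def is_drdf(f, adj):
    for v, fv in enumerate(f):
        nbr = [f[u] for u in adj[v]]
        # (i) a 0-vertex needs a 3-neighbor or two 2-neighbors.
        if fv == 0 and not (3 in nbr or nbr.count(2) >= 2):
            return False
        # (ii) a 1-vertex needs a neighbor with label > 1.
        if fv == 1 and not any(x > 1 for x in nbr):
            return False
    return True

def double_roman_domination_number(adj):
    n = len(adj)
    # Minimum weight f(V) over all 4**n labelings that are DRDFs.
    return min(sum(f) for f in product(range(4), repeat=n)
               if is_drdf(f, adj))
```

On the path P3 (adjacency [[1], [0, 2], [1]]) the optimum assigns 3 to the middle vertex and 0 to both ends, for weight 3, matching condition (i) for the two end vertices.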
- Published
- 2022
- Full Text
- View/download PDF
49. Revisiting orthogonal lattice attacks on approximate common divisor problems.
- Author
-
Xu, Jun, Sarkar, Santanu, and Hu, Lei
- Subjects
- *
ALGORITHMS , *COST - Abstract
In this paper, we revisit three existing types of orthogonal lattice (OL) attacks and propose optimized cases to solve approximate common divisor (ACD) problems. In order to reduce both space and time costs, we also construct an improved lattice using the rounding technique. Further, we present asymptotic formulas for the time complexities of our optimizations as well as of the three known OL attacks. In addition, we give specific conditions under which the optimized OL attacks work and show how the attack power depends on the blocksize β in the BKZ- β algorithm. [ABSTRACT FROM AUTHOR]
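To make the ACD setting concrete: samples have the form x i = p·q i + r i for a secret common divisor p and small noise r i. The sketch below, with deliberately tiny illustrative numbers far below cryptographic sizes, shows why the noise-free case is trivial (a plain gcd) while any noise destroys the gcd structure, which is what motivates lattice techniques such as the OL attacks studied here:

```python
import math
from functools import reduce

# Toy approximate common divisor (ACD) instance: x_i = p*q_i + r_i.
# All values below are illustrative, not cryptographic parameters.

p = 1000003                      # secret common divisor
qs = [2, 3, 5, 7, 11]            # pairwise-coprime multipliers
rs = [1, -2, 3, 0, -1]           # small additive noise

exact = [p * q for q in qs]                 # noise-free samples
noisy = [x + r for x, r in zip(exact, rs)]  # ACD samples

g_exact = reduce(math.gcd, exact)   # recovers p when there is no noise
g_noisy = reduce(math.gcd, noisy)   # collapses once noise is present
```

In the noisy case the gcd carries no information about p, so an attacker must instead find short vectors orthogonal to the sample vector in a suitable lattice, the structure the OL attacks exploit.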
- Published
- 2022
- Full Text
- View/download PDF
50. Contention-related crash failures: Definitions, agreement algorithms, and impossibility results.
- Author
-
Durand, Anaïs, Raynal, Michel, and Taubenfeld, Gadi
- Subjects
- *
ALGORITHMS , *DEFINITIONS , *PROBLEM solving , *TRAFFIC safety , *CLINICAL trial registries - Abstract
This article explores an interplay between process crash failures and concurrency. Namely, it aims at answering the question, "Is it possible to cope with more crash failures when some number of crashes occur before some predefined contention point is reached?". These crashes are named λ-constrained crashes , where λ is the predefined contention point (known by the processes). Hence, this article considers two types of process crashes: λ -constrained crashes and classical crashes (which can occur at any time and are consequently called any-time crashes). Considering a system made up of n asynchronous processes communicating through atomic read/write registers, the article focuses on the design of two agreement-related algorithms. Assuming λ = n − 1 and no any-time failures, the first algorithm solves the consensus problem in the presence of one λ -constrained crash failure, thereby circumventing the well-known FLP impossibility result. The second algorithm considers k -set agreement for k ≥ 2. It is a k -set agreement algorithm such that λ = n − ℓ and ℓ ≥ k = m + f , and it works in the presence of up to (2 m + ℓ − k) λ -constrained crashes and (f − 1) any-time crashes, i.e., up to t = (2 m + ℓ − k) + (f − 1) process crashes. It follows that considering the timing of failures with respect to a predefined contention point enlarges the space of executions in which k -set agreement can be solved despite the combined effect of asynchrony, concurrency, and process crashes. The article also presents agreement-related impossibility results for consensus and k -set agreement in the context of λ -constrained process crashes (with or without any-time crashes). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF