849 results
Search Results
2. Special Issue on Selected Papers from the 11th International Conference and Workshops on Algorithms and Computation (WALCOM 2017).
- Author
- Rahman, Md. Saidur, Poon, Sheung-Hung, and Yen, Hsu-Chun
- Subjects
- *CONFERENCES & conventions, *ALGORITHMS, *COMPUTATIONAL geometry, *POLYGONS
- Published
- 2019
3. Process-commutative distributed objects: From cryptocurrencies to Byzantine-Fault-Tolerant CRDTs.
- Author
- Frey, Davide, Guillou, Lucie, Raynal, Michel, and Taïani, François
- Subjects
- *PETRI nets, *FAULT tolerance (Engineering), *FIRST in, first out (Queuing theory), *POLYCYSTIC ovary syndrome, *ALGORITHMS
- Abstract
This paper explores the territory that lies between best-effort Byzantine-Fault-Tolerant Conflict-free Replicated Data Types (BFT CRDTs) and totally ordered distributed ledgers, such as those implemented by Blockchains. It formally characterizes a novel class of distributed objects that only requires a First In First Out (FIFO) order on the object operations from each process (taken individually). The formalization leverages Mazurkiewicz traces to define legal sequences of operations and ensure both Strong Eventual Consistency (SEC) and Pipeline Consistency (PC). The paper presents a generic algorithm that implements this novel class of distributed objects both in a crash and Byzantine setting. It also illustrates the practical interest of the proposed approach using four instances of this class of objects, namely money transfer, Petri nets, multi-sets, and concurrent work stealing deques. • We introduce PCOs, a novel class of distributed objects combining Strong Eventual Consistency and Pipeline Consistency. • We present a generic algorithm that implements any PCO, and works both in a crash and Byzantine fault model. • Finally, we present four concrete examples of PCOs: multisets, Petri Nets, money transfer objects, and work stealing deques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
4. On the probe problem for (r,ℓ)-well-coveredness: Algorithms and complexity.
- Author
- Faria, Luerbio and Souza, Uéverton S.
- Subjects
- *INDEPENDENT sets, *CHARTS, diagrams, etc., *ALGORITHMS
- Abstract
• C-probe graphs are graphs having stable sets N for which edges can be added to obtain a supergraph in C. • A graph G is an (r, ℓ)-graph if V(G) can be partitioned into r independent sets and ℓ cliques. • A graph G is well-covered if every maximal independent set is also maximum. • This paper studies the complexity of recognizing probe graphs for the class of (r, ℓ)-well-covered graphs. Let C be a class of graphs. A graph G = (V, E) is C-probe if V(G) can be partitioned into two sets: non-probes N and probes P, where N is an independent set and new edges may be added between some non-probe vertices such that the resulting graph is in the class C. In this case, we say that (N, P) is a C-probe partition of G. In the Unpartitioned Probe problem for a graph class C we are given a graph G and asked whether G has a C-probe partition, i.e., such a problem consists of recognizing the class of C-probe graphs. A graph G = (V, E) is an (r, ℓ)-graph when V can be partitioned into (S1, S2, ..., Sr, K1, K2, ..., Kℓ) such that S1, S2, ..., Sr are independent sets, and K1, K2, ..., Kℓ are cliques. A graph G is well-covered if every maximal independent set is also maximum, and it is (r, ℓ)-well-covered if it is well-covered as well as an (r, ℓ)-graph. In this paper, we study the complexity of the Unpartitioned Probe problem for the class of (r, ℓ)-well-covered graphs. We classify all but the (2, 0) and (1, 2) cases. [ABSTRACT FROM AUTHOR]
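The well-covered definition above is easy to test by brute force on small graphs; the following sketch (illustrative only, exponential time, not the paper's recognition algorithm) enumerates maximal independent sets and checks that they all have the same size.

```python
from itertools import combinations

def is_independent(adj, s):
    """True if no two vertices of s are adjacent in the graph adj."""
    return all(v not in adj[u] for u, v in combinations(s, 2))

def maximal_independent_sets(adj):
    """All maximal independent sets by brute force (exponential; fine for
    the tiny illustrative graphs below)."""
    vs = list(adj)
    for r in range(len(vs) + 1):
        for s in combinations(vs, r):
            if is_independent(adj, s) and \
               all(any(w in adj[v] for w in s) for v in vs if v not in s):
                yield set(s)

def is_well_covered(adj):
    """Well-covered: every maximal independent set has the same (maximum) size."""
    return len({len(s) for s in maximal_independent_sets(adj)}) == 1

c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # 4-cycle: well-covered
p3 = {0: {1}, 1: {0, 2}, 2: {1}}                    # path: {1} and {0, 2} both maximal
print(is_well_covered(c4), is_well_covered(p3))  # → True False
```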
- Published
- 2022
5. Restriction on cut rule in cyclic-proof system for symbolic heaps.
- Author
- Saotome, Kenji, Nakazawa, Koji, and Kimura, Daisuke
- Subjects
- *MATHEMATICAL logic, *CALCULUS, *LOGIC, *ALGORITHMS, *DEFINITIONS
- Abstract
Symbolic heaps, which are a restricted class of separation logic formulas, with inductive definitions are a suitable language in automated verification systems for memory-manipulating programs. In this context, some related problems, e.g., the entailment problem, have been studied theoretically. In addition, several solvers for the entailment problem based on the proof-search algorithm in cyclic-proof systems, which are proof systems in sequent calculus style with cyclic structure, have been proposed. However, the cut-elimination property generally does not hold for cyclic-proof systems of symbolic heaps with inductive definitions, which means that searching for a cut-free proof is insufficient. In other words, we must either find a reasonable proof-search algorithm that takes the cut rule into account or give up on obtaining a complete proof-search procedure. This paper investigates this issue and demonstrates a limitation of restricting the cut rule in a cyclic-proof system for symbolic heaps. We propose a restricted cut rule, referred to as the presumable cut, which is a relaxed variant of the analytic cut, in which the cut formula must be a subformula of the bottom sequent. This paper demonstrates that the provability of the cyclic-proof system for symbolic heaps becomes strictly weaker when the cut rule is restricted to the presumable cut. [ABSTRACT FROM AUTHOR]
- Published
- 2024
6. Formal verification of parallel prefix sum and stream compaction algorithms in CUDA.
- Author
- Safari, Mohsen and Huisman, Marieke
- Subjects
- *COMPACTING, *PARALLEL algorithms, *SUFFIXES & prefixes (Grammar), *HIGH performance computing, *ROCK glaciers, *ALGORITHMS
- Abstract
• We add CUDA verification support to the VerCors verifier. • We verify data race-freedom and functional correctness of CUDA implementations of two parallel prefix sum algorithms. • We verify data race-freedom and functional correctness of a CUDA implementation of a stream compaction algorithm. GPUs are an important part of any High Performance Computing (HPC) architecture. To make optimal use of the specifics of a GPU architecture, we need programming models that naturally support the parallel execution model of a GPU. CUDA and OpenCL are two widely used examples of such programming models. Furthermore, we also need to redesign algorithms such that they adhere to this parallel programming model, and we need to be able to prove the correctness of these redesigned algorithms. In this paper we study two examples of such parallelized algorithms, and we discuss how to prove their correctness (data race freedom and (partial) functional correctness) using the VerCors program verifier. First, we prove the correctness of two parallel algorithms solving the prefix sum problem. Second, we show how such a prefix sum algorithm is used as a basic block in a stream compaction algorithm, and we prove correctness of this stream compaction algorithm, taking advantage of the earlier correctness proof for the prefix sum algorithm. The proofs described in this paper are developed over the CUDA implementations of these algorithms. In earlier work, we had already shown correctness of a higher-level version of the algorithm. This paper discusses how we add support to reason about CUDA programs in VerCors, and it then shows how we can redo the verification at the level of the CUDA code. We also discuss some practical challenges that we had to address to complete the actual CUDA-level verifications. [ABSTRACT FROM AUTHOR]
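As background for the algorithms being verified, the work-efficient parallel prefix sum can be sketched sequentially: each inner loop below corresponds to one parallel step of the Blelloch-style up-sweep/down-sweep, and stream compaction then scatters kept elements to their scanned positions. This is an illustrative Python model (assuming a power-of-two input length), not the verified CUDA code.

```python
def exclusive_scan(a):
    """Work-efficient exclusive prefix sum: an up-sweep builds partial sums
    in a binary tree, a down-sweep distributes them. On a GPU each inner
    loop runs as one parallel step over independent index pairs."""
    n = len(a)                         # assumed a power of two
    t = list(a)
    d = 1
    while d < n:                       # up-sweep (reduce) phase
        for i in range(0, n, 2 * d):
            t[i + 2 * d - 1] += t[i + d - 1]
        d *= 2
    t[n - 1] = 0                       # identity element for exclusive scan
    d = n // 2
    while d >= 1:                      # down-sweep phase
        for i in range(0, n, 2 * d):
            left = t[i + d - 1]
            t[i + d - 1] = t[i + 2 * d - 1]
            t[i + 2 * d - 1] += left
        d //= 2
    return t

def compact(a, keep):
    """Stream compaction via scan: scan the keep-flags, then scatter each
    kept item to its scanned index."""
    flags = [1 if keep(x) else 0 for x in a]
    idx = exclusive_scan(flags)
    out = [None] * sum(flags)
    for i, x in enumerate(a):
        if flags[i]:
            out[idx[i]] = x
    return out

print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # → [0, 3, 4, 11, 11, 15, 16, 22]
print(compact([3, 1, 7, 0, 4, 1, 6, 3], lambda x: x > 2))  # → [3, 7, 4, 6, 3]
```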
- Published
- 2022
7. Algorithms for diameters of unicycle graphs and diameter-optimally augmenting trees.
- Author
- Wang, Haitao and Zhao, Yiming
- Subjects
- *DIAMETER, *PROBLEM solving, *ALGORITHMS, *TREES, *SAWLOGS
- Abstract
We consider the problem of computing the diameter of a unicycle graph (i.e., a graph with a unique cycle). We present an O (n) time algorithm for the problem, where n is the number of vertices of the graph. This improves the previous best O (n log n) time solution [Oh and Ahn, ISAAC 2016]. Using this algorithm as a subroutine, we solve the problem of adding a shortcut to a tree so that the diameter of the new graph (which is a unicycle graph) is minimized; our algorithm takes O (n 2 log n) time and O (n) space. The previous best algorithms solve the problem in O (n 2 log 3 n) time and O (n) space [Oh and Ahn, ISAAC 2016], or in O (n 2) time and O (n 2) space [Bilò, ISAAC 2018]. • The paper presents a linear time algorithm for computing the diameter of unicycle graphs. • The paper gives a new algorithm which finds a new edge to add to a tree so that the diameter of the new graph is minimized. • The algorithms rely on certain domination relations among edges of the graphs. [ABSTRACT FROM AUTHOR]
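For contrast with the paper's O(n) algorithm, a naive diameter computation runs BFS from every vertex; the sketch below (illustrative brute force, unweighted) only fixes the objective on a small unicycle graph.

```python
from collections import deque

def diameter(adj):
    """Brute-force diameter: BFS from every vertex, O(n·(n+m)) overall;
    the paper achieves O(n) for unicycle graphs, which this only
    illustrates, not implements."""
    def ecc(s):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return max(ecc(v) for v in adj)

# unicycle graph: 4-cycle 0-1-2-3 plus the pendant path 3-4-5
uni = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2, 4], 4: [3, 5], 5: [4]}
print(diameter(uni))  # → 4
```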
- Published
- 2021
8. The Steiner k-eccentricity on trees.
- Author
- Li, Xingfu, Yu, Guihai, Klavžar, Sandi, Hu, Jie, and Li, Bo
- Subjects
- *TREES, *ALGORITHMS, *GRAPH algorithms
- Abstract
We study the Steiner k -eccentricity on trees, which generalizes the previous one in the paper [On the average Steiner 3-eccentricity of trees, arXiv:2005.10319 ]. We achieve much stronger properties for the Steiner k -ecc tree than that in the previous paper. Based on this, a linear time algorithm is devised to calculate the Steiner k -eccentricity of a vertex in a tree. On the other hand, lower and upper bounds of the average Steiner k -eccentricity index of a tree on order n are established based on a novel technique which is quite different and also much easier to follow than the earlier one. [ABSTRACT FROM AUTHOR]
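The quantities involved can be made concrete with an exponential brute force (not the paper's linear-time algorithm): the Steiner distance of a vertex set in a tree is the number of edges of the minimal subtree spanning it, obtained here by pruning leaves outside the set.

```python
from itertools import combinations

def steiner_distance(adj, S):
    """Edge count of the minimal subtree of a tree spanning vertex set S,
    computed by repeatedly pruning leaves that are not in S."""
    nbrs = {v: set(ns) for v, ns in adj.items()}
    changed = True
    while changed:
        changed = False
        for v in list(nbrs):
            if v not in S and len(nbrs[v]) <= 1:
                for u in nbrs[v]:
                    nbrs[u].discard(v)
                del nbrs[v]
                changed = True
    return sum(len(ns) for ns in nbrs.values()) // 2

def steiner_k_ecc(adj, v, k):
    """Brute-force Steiner k-eccentricity of v: the largest Steiner
    distance over all k-sets containing v (exponential; the paper gives a
    linear-time algorithm instead of this check)."""
    others = [u for u in adj if u != v]
    return max(steiner_distance(adj, {v, *S}) for S in combinations(others, k - 1))

# path 1-2-3-4-5
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(steiner_k_ecc(path, 3, 2), steiner_k_ecc(path, 3, 3))  # → 2 4
```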
- Published
- 2021
9. Set-constrained delivery broadcast: A communication abstraction for read/write implementable distributed objects.
- Author
- Imbs, Damien, Mostéfaoui, Achour, Perrin, Matthieu, and Raynal, Michel
- Subjects
- *ALGORITHMS, *BROADCASTING industry
- Abstract
• New fault-tolerant communication abstraction. • Simple implementations of read/write implementable objects. • Efficient implementations of snapshots and lattice agreement. This paper introduces a new communication abstraction, called Set-Constrained Delivery Broadcast (SCD-broadcast), whose aim is to provide its users with an appropriate abstraction level when they have to implement objects or distributed tasks in an asynchronous message-passing system prone to process crash failures. This abstraction allows each process to broadcast messages and deliver a sequence of sets of messages in such a way that, if a process delivers a set of messages including a message m and later delivers a set of messages including a message m′, no process delivers first a set of messages including m′ and later a set of messages including m. After having presented an algorithm implementing SCD-broadcast, the paper investigates its programming power and its computability limits. On the "power" side it presents SCD-broadcast-based algorithms, which are both simple and efficient, building objects (such as snapshot and conflict-free replicated data types), and distributed tasks. On the "computability limits" side it shows that SCD-broadcast and read/write registers are computationally equivalent. [ABSTRACT FROM AUTHOR]
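The delivery constraint can be phrased as a checkable property of executions; the sketch below (an illustrative checker, not the paper's algorithm) verifies that no two processes deliver two messages in opposite set orders.

```python
def scd_consistent(deliveries):
    """Check the SCD ordering property on per-process sequences of
    delivered message sets: if some process delivers a set containing m
    strictly before a set containing m2, no process may deliver m2
    strictly before m."""
    order = set()  # pairs (m, m2): m delivered strictly before m2 somewhere
    for seq in deliveries:
        for i, batch in enumerate(seq):
            for later in seq[i + 1:]:
                for m in batch:
                    for m2 in later:
                        order.add((m, m2))
    return all((b, a) not in order for (a, b) in order)

ok = scd_consistent([[{1}, {2}], [{1, 2}]])      # same set elsewhere: allowed
bad = scd_consistent([[{1}, {2}], [{2}, {1}]])   # opposite strict orders: violation
print(ok, bad)  # → True False
```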
- Published
- 2021
10. Approximation algorithms for fuzzy C-means problem based on seeding method.
- Author
- Liu, Qian, Liu, Jianxin, Li, Min, and Zhou, Yang
- Subjects
- *APPROXIMATION algorithms, *ALGORITHMS, *FUZZY algorithms, *SEEDS
- Abstract
• A new seeding algorithm is proposed for the fuzzy C-means problem. • Theoretical guarantees for both the new seeding algorithm and the fuzzy C-means++ algorithm are given. • Numerical experiments show the effectiveness of the new algorithm. As an important soft clustering model, the fuzzy C-means method is widely applied in many fields. In this method, in contrast with the strict assignments of the classical k-means method, all sample points are endowed with degrees of membership to each center to depict the fuzzy clustering. In this paper, we show that the fuzzy C-means++ algorithm, which introduces the k-means++ algorithm as a seeding strategy, gives a solution for which the approximation guarantee is O(k² ln k). A novel seeding algorithm is then designed based on the contribution of the fuzzy potential function, which improves the approximation ratio to O(k ln k). Preliminary numerical experiments are presented to support the theoretical results of this paper. [ABSTRACT FROM AUTHOR]
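The seeding idea can be sketched as a distance-power weighting: each next center is sampled with probability proportional to its distance to the nearest chosen center raised to a power (p = 2 recovers classical k-means++ / D² seeding). The function name and parameters here are illustrative, not the paper's exact procedure.

```python
import random

def dpow_seeding(points, k, p=2, rng=random):
    """k-means++-style seeding sketch (hypothetical helper): draw each
    next center with probability proportional to
    d(x, nearest chosen center)^p."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        weights = [min(sum((a - b) ** 2 for a, b in zip(x, c))
                       for c in centers) ** (p / 2) for x in points]
        total = sum(weights)
        r = rng.random() * total
        acc = 0.0
        for x, w in zip(points, weights):
            acc += w
            if acc >= r:          # inverse-CDF sampling over the weights
                centers.append(x)
                break
    return centers

rng = random.Random(0)
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers = dpow_seeding(pts, 2, p=2, rng=rng)
print(centers)
```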
- Published
- 2021
11. The fewest clues problem.
- Author
- Demaine, Erik D., Ma, Fermi, Schvartzman, Ariel, Waingarten, Erik, and Aaronson, Scott
- Subjects
- *PUZZLES, *COMPUTATIONAL complexity, *ALGORITHMS, *THEORY of distributions (Functional analysis), *TRIANGLES
- Abstract
When analyzing the computational complexity of well-known puzzles, most papers consider the algorithmic challenge of solving a given instance of (a generalized form of) the puzzle. We take a different approach by analyzing the computational complexity of designing a "good" puzzle. We assume a puzzle maker designs part of an instance, but before publishing it, wants to ensure that the puzzle has a unique solution. Given a puzzle, we introduce the FCP (fewest clues problem) version of the problem: Given an instance to a puzzle, what is the minimum number of clues we must add in order to make the instance uniquely solvable? We analyze this question for the Nikoli puzzles Sudoku, Shakashaka, and Akari. Solving these puzzles is NP-complete, and we show their FCP versions are Σ2P-complete. Along the way, we show that the FCP versions of Triangle Partition, Planar 1-in-3 SAT, and Latin Square are all Σ2P-complete. We show that even problems in P have difficult FCP versions, sometimes even Σ2P-complete, though "closed under cluing" problems are in the (presumably) smaller class NP; for example, FCP 2SAT is NP-complete. [ABSTRACT FROM AUTHOR]
- Published
- 2018
12. Incidence coloring of Mycielskians with fast algorithm.
- Author
- Bi, Huimin and Zhang, Xin
- Subjects
- *ALGORITHMS, *GRAPH coloring, *ASSIGNMENT problems (Programming), *COLORING matter, *COLORS
- Abstract
An incidence of a graph G is a vertex-edge pair (v , e) such that the vertex v is incident with the edge e. A proper incidence k-coloring of a graph is a coloring of its incidences involving k colors so that two incidences (u , e) and (w , f) receive distinct colors if and only if u = w , or e = f , or u w ∈ { e , f }. In this paper, we present some idea of using the incidence coloring to model a kind of multi-frequency assignment problem , in which each transceiver can be simultaneously in both sending and receiving modes, and then establish some theoretical and algorithmic aspects of the incidence coloring. Specifically, we conjecture that if G is the Mycielskian of some graph then it has a proper incidence (Δ (G) + 2) -coloring. Actually, our conjecture is motivated by the " (Δ + 2) conjecture" of Brualdi and Quinn Massey in 1993, which states that every graph G has a proper incidence (Δ (G) + 2) -coloring, and was disproved in 1997 by Guiduli, who pointed out that the Paley graphs with large maximum degree are counterexamples (yet they are all known counterexamples to the " (Δ + 2) conjecture", and are not Mycielskians of any graph). To support our conjecture, we prove in this paper that if G is the Mycielskian of a graph H with | H | ≥ 3 Δ (H) + 2 , then we can construct a proper incidence (Δ (G) + 1) -coloring of G in cubic time, and if G is the Mycielskian of an incidence (Δ (H) + 1) -colorable graph H with | H | ≤ 2 Δ (H) , or the Mycielskian of an incidence (Δ (H) + 2) -colorable graph H with | H | ≥ 2 Δ (H) + 1 , then G has a proper incidence (Δ (G) + 2) -coloring. The minimum positive integer k such that the Mycielskian of a cycle or a path has a proper incidence k -coloring is also determined. [ABSTRACT FROM AUTHOR]
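The incidence adjacency relation defined above can be checked directly; the sketch below builds all incidences of a small star and colors them with a generic greedy pass (illustrative only, not the paper's cubic-time construction).

```python
def adjacent(i1, i2):
    """Incidences (u, e) and (w, f) need distinct colors when u == w,
    e == f, or the edge {u, w} is one of e, f."""
    (u, e), (w, f) = i1, i2
    return u == w or e == f or frozenset((u, w)) in (e, f)

def greedy_incidence_coloring(edges):
    """Greedy pass over all incidences: give each the smallest color not
    used by an already-colored adjacent incidence."""
    inc = [(v, e) for e in edges for v in e]
    color = {}
    for i in inc:
        used = {color[j] for j in color if adjacent(i, j)}
        c = 0
        while c in used:
            c += 1
        color[i] = c
    return color

# star K_{1,3}, maximum degree Δ = 3
edges = [frozenset(p) for p in [(0, 1), (0, 2), (0, 3)]]
col = greedy_incidence_coloring(edges)
ok = all(col[a] != col[b] for a in col for b in col if a != b and adjacent(a, b))
print(ok, max(col.values()) + 1)  # → True 4   (Δ + 1 colors on this instance)
```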
- Published
- 2021
13. A self-stabilizing algorithm for constructing a minimal reachable directed acyclic graph with two senders and two targets.
- Author
- Kim, Yonghwan, Shibata, Masahiro, Sudo, Yuichi, Nakamura, Junya, Katayama, Yoshiaki, and Masuzawa, Toshimitsu
- Subjects
- *DIRECTED acyclic graphs, *ALGORITHMS, *UNDIRECTED graphs, *DIRECTED graphs
- Abstract
In this paper, we introduce a new graph structure named an ST-reachable directed acyclic graph, which is a directed acyclic graph (DAG) that guarantees reachability (i.e., a directed path exists) from every sender to every target. When an arbitrarily connected undirected graph G = (V, E) and two sets of vertices, senders S (⊂ V) and targets T (⊂ V), are given, we consider the construction of a minimal ST-reachable DAG by changing some undirected edges to arcs and removing the remaining edges. In this paper, we present the necessary and sufficient condition under which a minimal ST-reachable DAG can be constructed when |S| ≤ 2 and |T| ≤ 2, and propose a self-stabilizing algorithm for constructing a minimal ST-reachable DAG (if it exists) when an arbitrarily connected undirected graph, S (|S| ≤ 2), and T (|T| ≤ 2) are given. Moreover, our proposed algorithm can detect that no ST-reachable DAG exists for the given graph and vertex sets S and T when this is the case. [ABSTRACT FROM AUTHOR]
- Published
- 2021
14. New algorithms for fair k-center problem with outliers and capacity constraints.
- Author
- Wu, Xiaoliang, Feng, Qilong, Xu, Jinhui, and Wang, Jianxin
- Subjects
- *APPROXIMATION algorithms, *POLYNOMIAL approximation, *METRIC spaces, *POLYNOMIAL time algorithms, *ALGORITHMS
- Abstract
The fair k-center problem has received much attention recently. In the fair k-center problem, we are given a set X of points in a metric space and a parameter k ∈ Z+, where the points in X are divided into several groups, and each point is assigned a color to denote which group it is in. The goal is to partition X into k clusters such that the number of cluster centers with each color is equal to a given value, and the k-center problem objective is minimized. In this paper, we consider the fair k-center problem with outliers and capacity constraints, denoted as the fair k-center with outliers (FkCO) problem and the capacitated fair k-center (CFkC) problem, respectively. The outliers constraint allows up to z outliers to be discarded when computing the objective function, while the capacity constraints require that each cluster has size no more than L. In this paper, we design a fixed-parameter tractable (FPT) approximation algorithm and a polynomial-time approximation algorithm for the above two problems. In particular, our algorithms give (1 + ϵ)-approximations with FPT time for the FkCO and CFkC problems in doubling metric space. Moreover, we also propose a 3-approximation algorithm in polynomial time for the FkCO problem under some reasonable assumptions. [ABSTRACT FROM AUTHOR]
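For orientation, the plain (unfair, uncapacitated) k-center objective admits a classical greedy 2-approximation; the sketch below illustrates that base objective only, not the paper's FPT or 3-approximation algorithms.

```python
def gonzalez_k_center(points, k):
    """Farthest-point greedy for plain k-center (Gonzalez's classical
    2-approximation): repeatedly open the point farthest from the centers
    chosen so far, then report the covering radius."""
    def d(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(d(p, c) for c in centers)))
    radius = max(min(d(p, c) for c in centers) for p in points)
    return centers, radius

pts = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 5)]
centers, r = gonzalez_k_center(pts, 3)
print(r)  # → 1.0
```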
- Published
- 2024
15. A linear time algorithm for the r-gathering problem on the line.
- Author
- Sarker, Anik, Sung, Wing-Kin, and Sohel Rahman, M.
- Subjects
- *PROBLEM solving, *ALGORITHMS
- Abstract
In this paper, we revisit the r-gathering problem. Given sets C and F of points on the plane and a distance d(c, f) for each c ∈ C and f ∈ F, an r-gathering of C to F is an assignment A of C to open facilities F′ ⊆ F such that r or more members of C are assigned to each open facility. The cost of an r-gathering is max_{c ∈ C} d(c, A(c)). The r-gathering problem computes the r-gathering minimizing the cost. In this paper we study the r-gathering problem when C and F are on a line and present an O(|C| + |F|)-time algorithm to solve the problem. Our solution is optimal since any algorithm needs to read C and F at least once. [ABSTRACT FROM AUTHOR]
- Published
- 2021
16. Computing the multi-string BWT and LCP array in external memory.
- Author
- Bonizzoni, Paola, Della Vedova, Gianluca, Pirola, Yuri, Previtali, Marco, and Rizzi, Raffaella
- Subjects
- *MEMORY, *SUFFIXES & prefixes (Grammar), *GENERALIZATION, *ALGORITHMS, *GENOMES
- Abstract
Indexing very large collections of strings, such as those produced by the widespread next generation sequencing technologies, heavily relies on multi-string generalization of the Burrows–Wheeler Transform (BWT): large requirements of in-memory approaches have stimulated recent developments on external memory algorithms. The related problem of computing the Longest Common Prefix (LCP) array of a set of strings is instrumental to compute the suffix-prefix overlaps among strings, which is an essential step for many genome assembly algorithms. In a previous paper, we presented an in-memory divide-and-conquer method for building the BWT and LCP where we merge partial BWTs with a forward approach to sort suffixes. In this paper, we propose an alternative backward strategy to develop an external memory method to simultaneously build the BWT and the LCP array on a collection of m strings of different lengths. The algorithm over a set of strings having constant length k has O (m k l) time and I/O volume, using O (k + m) main memory, where l is the maximum value in the LCP array. [ABSTRACT FROM AUTHOR]
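The object being computed can be pinned down with a naive in-memory construction: rank every suffix of every string (each terminated by its own sentinel) and emit the cyclically preceding characters. This fixes the semantics only; the paper's contribution is computing it in external memory.

```python
def multi_string_bwt(strings):
    """Naive in-memory multi-string BWT. Each string gets its own
    end-of-string sentinel, chr(0), chr(1), ..., which sort below ordinary
    letters and in string order (this assumes only a handful of strings);
    the BWT lists, for the suffixes in sorted order, the character that
    cyclically precedes each suffix within its string."""
    suffixes = []
    for j, s in enumerate(strings):
        t = s + chr(j)                       # per-string sentinel
        for i in range(len(t)):
            prev = t[i - 1] if i > 0 else t[-1]
            suffixes.append((t[i:], prev))
    suffixes.sort()
    return "".join(prev for _, prev in suffixes)

bwt = multi_string_bwt(["ab", "aa"])
# the transform is a permutation of all characters, sentinels included
print(sorted(bwt) == sorted("ab" + chr(0) + "aa" + chr(1)))  # → True
```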
- Published
- 2021
17. Range partitioning within sublinear time: Algorithms and lower bounds.
- Author
- Ning, Baoling, Li, Jianzhong, and Jiang, Shouxu
- Subjects
- *PARALLEL algorithms, *LINEAR orderings, *COMPUTING platforms, *ALGORITHMS, *DATA distribution
- Abstract
• A lower bound for the sampling cost of range partitioning algorithms in the RAM model is proved. • Two lower bounds for sublinear external range partitioning algorithms are proved. • A sublinear external range partitioning algorithm for a novel data distribution is designed. Range partitioning is a typical and widely used data partitioning method and has become a core operation in most big data computing platforms. Given an input L of N data items admitting a total order, the goal of range partitioning is to divide the whole input into k ranges containing the same number of data items. There is a trivial lower bound Ω(N) for exact partitioning algorithms, since they need to at least make a full scan of the whole data. In the context of big data computing, even algorithms with O(N) time are not always considered efficient enough; the ultimate goal of designing algorithms on big data is usually to solve problems within sublinear time. Therefore, it is well motivated and important to study sublinear algorithms for the range partitioning problem. The paper aims to answer three questions. For the internal memory (RAM) model, since a sophisticated sampling-based (ϵ, δ)-approximation partitioning algorithm with O(k log(N/δ)/ϵ²) time cost has been proposed, the first question is what lower bound we can obtain for sublinear partitioning algorithms. For the external memory (I/O) model, as far as we know, no previous work gives external partitioning algorithms with a performance guarantee within sublinear time; therefore the two questions are what upper bound and what lower bound we can achieve for sublinear external partitioning algorithms. To answer the above questions, based on the RAM and I/O models, the paper studies the lower and upper bounds for the range partitioning problem. For the RAM model, a lower bound Ω(k(1 − δ)/ϵ²) on the cost of sampling-based partitioning algorithms is proved.
For the I/O model, two lower bounds on the sampling cost required by sublinear external range partitioning algorithms are proved, which indicate that at least a full scan of the whole input is needed in the worst case and that a general sublinear external partitioning algorithm does not exist. Motivated by the hard instances utilized in the proofs of the lower bounds, a model for describing the input distributions of the range partitioning problem in practical applications is proposed. Finally, for the special cases described by the model, a sublinear external partitioning algorithm with O(k log(N/δ) w/(B ϵ²)) I/O cost is designed. [ABSTRACT FROM AUTHOR]
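The sampling-based approach discussed for the RAM model can be sketched as follows: draw a random sample, sort it, and use evenly spaced sample elements as range splitters. The sample size and constants below are illustrative, not the paper's bounds.

```python
import bisect
import random

def sample_splitters(data, k, m, rng):
    """Sampling-based range partitioning sketch: sort m random samples and
    take every (m/k)-th one as a splitter; m plays the role of the
    O(k log(N/δ)/ϵ²) sample bound discussed for the RAM model."""
    sample = sorted(rng.choice(data) for _ in range(m))
    return [sample[(i * m) // k] for i in range(1, k)]

def partition_counts(data, splitters):
    """Histogram of items per range induced by the splitters."""
    counts = [0] * (len(splitters) + 1)
    for x in data:
        counts[bisect.bisect_right(splitters, x)] += 1
    return counts

rng = random.Random(42)
data = list(range(100000))
counts = partition_counts(data, sample_splitters(data, 4, 4000, rng))
print(counts)  # four ranges of roughly 25000 items each
```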
- Published
- 2021
18. A sub-linear time algorithm for approximating k-nearest-neighbor with full quality guarantee.
- Author
- Ma, Hengzhao and Li, Jianzhong
- Subjects
- *ALGORITHMS, *SURETYSHIP & guaranty, *TIME, *DISTANCES
- Abstract
• We propose an algorithm that unifies the distance and recall criteria for the approximate k-Nearest-Neighbor problem. • The algorithm is the first for the studied problem to provide theoretical guarantees on both approximation criteria. • The algorithm is the first to achieve sub-linear query time while providing such theoretical guarantees. In this paper we propose an algorithm for the approximate k-Nearest-Neighbors problem. According to existing research, there are two kinds of approximation criteria: one is the distance criterion, and the other is the recall criterion. Previous algorithms provide no theoretical guarantee for both approximation criteria at once. The algorithm proposed in this paper unifies the two kinds of approximation criteria and has full theoretical guarantees. Furthermore, the query time of the algorithm is sub-linear. As far as we know, it is the first algorithm that achieves both sub-linear query time and full theoretical approximation guarantees. [ABSTRACT FROM AUTHOR]
- Published
- 2021
19. The impact of the Gabriel subgraph of the visibility graph on the gathering of mobile autonomous robots.
- Author
- Li, Shouwei, Meyer auf der Heide, Friedhelm, and Podlipyan, Pavel
- Subjects
- *AUTONOMOUS robots, *MOBILE robots, *ALGORITHMS, *SPEED limits, *VISIBILITY
- Abstract
In this paper, we reconsider the well-known discrete, round-based Go-To-The-Center algorithm due to Ando, Suzuki, and Yamashita [2] for gathering n autonomous mobile robots with limited viewing range in the plane. Remarkably, this algorithm exploits the fact that during its execution, many collisions of robots occur. Such collisions are interpreted as a success because it is assumed that collided robots behave the same from then on. This is acceptable under the assumption that each robot is represented by a single point; otherwise, collisions should be avoided. In this paper, we consider a continuous Go-To-The-Center algorithm in which the robots continuously observe the positions of their neighbors and adapt their speed (assuming a speed limit) and direction. Our first results are time bounds of O(n²) for gathering in two-dimensional Euclidean space, and Θ(n) in one dimension. Our main contribution is the introduction and evaluation of a continuous algorithm which performs Go-To-The-Center considering only the neighbors of a robot with respect to the Gabriel subgraph of the visibility graph, i.e., the Go-To-The-Gabriel-Center algorithm. We show that this modification still correctly executes gathering in one and two dimensions, with the same time bounds as above. Simulations exhibit a severe difference between the behaviors of the Go-To-The-Center and the Go-To-The-Gabriel-Center algorithms: whereas lots of collisions occur during a run of the Go-To-The-Center algorithm, typically only one, namely the final collision, occurs during a run of the Go-To-The-Gabriel-Center algorithm. We can prove this "collisionless property" of the Go-To-The-Gabriel-Center algorithm for one dimension. In two-dimensional Euclidean space, we conjecture that the "collisionless property" holds for almost every initial configuration.
We support our conjecture with measurements obtained from the simulation where robots execute both continuous Go-To-The-Center and Go-To-The-Gabriel-Center algorithms. [ABSTRACT FROM AUTHOR]
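A one-dimensional round-based variant is easy to simulate: each robot simultaneously moves toward the midpoint of its extreme visible neighbors (the 1-D center of the smallest enclosing circle), capped by a speed limit. The radius, step, and round count below are illustrative parameters, not values from the paper.

```python
def gather_1d(positions, radius=1.0, step=0.25, rounds=1000):
    """Round-based 1-D Go-To-The-Center sketch: in every round all robots
    simultaneously move toward the midpoint of their leftmost and
    rightmost visible neighbors, each move capped by the speed limit."""
    pos = list(positions)
    for _ in range(rounds):
        nxt = []
        for p in pos:
            vis = [q for q in pos if abs(q - p) <= radius]
            target = (min(vis) + max(vis)) / 2
            nxt.append(p + max(-step, min(step, target - p)))
        pos = nxt
        if max(pos) - min(pos) < 1e-9:   # gathered (up to rounding)
            break
    return pos

# a visibility-connected chain of four robots gathers to a single point
final = gather_1d([0.0, 0.8, 1.6, 2.4])
print(max(final) - min(final) < 1e-6)  # → True
```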
- Published
- 2021
20. Approximation of set multi-cover via hypergraph matching.
- Author
- Gorgi, Abbass, El Ouali, Mourad, Srivastav, Anand, and Hachimi, Mohamed
- Subjects
- *HYPERGRAPHS, *APPROXIMATION algorithms, *ALGORITHMS, *LOGICAL prediction
- Abstract
For a given b ∈ N n and a hypergraph H = (V , E) with maximum degree Δ, the b -set multicover problem which we are concerned with in this paper may be stated as follows: find a minimum cardinality subset C ⊆ E such that no vertex v i ∈ V is contained in less than b i hyperedges of C. Peleg, Schechtman, and Wool (1997) conjectured that for any fixed Δ and b = min i b i , the problem cannot be approximated with a ratio strictly smaller than δ : = Δ − b + 1 , unless P = NP. In this paper, we show that the conjecture of Peleg et al. is not true on ℓ -uniform hypergraphs by presenting a polynomial-time approximation algorithm based on a matching/covering duality for hypergraphs due to Ray-Chaudhuri (1960), which we convert into an approximative form. The given algorithm yields a ratio smaller than (1 − b (ℓ + 1) Δ) Δ b. Moreover, we prove that the lower bound conjectured by Peleg et al. holds for regular hypergraphs under the unique games conjecture. [ABSTRACT FROM AUTHOR]
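For contrast with the matching-based algorithm of the paper, the classical greedy heuristic for set multicover repeatedly picks the hyperedge covering the most still-deficient vertices; the sketch below (a generic logarithmic-factor greedy, not the paper's algorithm) assumes a feasible instance.

```python
def greedy_set_multicover(hyperedges, demand):
    """Greedy sketch for b-set multicover: take the unused hyperedge
    covering the most still-deficient vertices until every vertex v lies
    in at least demand[v] chosen hyperedges (each hyperedge usable once,
    as in the problem statement; the instance is assumed feasible)."""
    need = dict(demand)
    pool = list(hyperedges)
    chosen = []
    while any(n > 0 for n in need.values()):
        best = max(pool, key=lambda e: sum(1 for v in e if need.get(v, 0) > 0))
        pool.remove(best)
        chosen.append(best)
        for v in best:
            if need.get(v, 0) > 0:
                need[v] -= 1
    return chosen

edges = [{1, 2}, {2, 3}, {1, 3}, {1, 2, 3}]
cover = greedy_set_multicover(edges, {1: 2, 2: 2, 3: 2})
print(len(cover))  # → 3
```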
- Published
- 2020
21. Vertex deletion on split graphs: Beyond 4-hitting set.
- Author
- Choudhary, Pratibha, Jain, Pallavi, Krithika, R., and Sahlot, Vibha
- Subjects
- *ALGORITHMS, *STATISTICAL decision making, *KERNEL functions, *APPROXIMATION algorithms
- Abstract
Vertex deletion problems on graphs involve finding a minimum-size set of vertices whose deletion results in a graph with some specific property. In a recent study on vertex deletion problems on split graphs, it was shown that transforming a split graph into a block graph or a threshold graph using a minimum number of vertex deletions is NP-hard. We call the decision versions of these problems Split to Block Vertex Deletion (SBVD) and Split to Threshold Vertex Deletion (STVD), respectively. In this paper, we study the parameterized complexity of these problems with the number of vertex deletions, k, as the parameter. These problems are implicit 4-Hitting Set and thus admit an algorithm with running time O⋆(3.0755^k), a kernel with O(k³) vertices, and a 4-approximation algorithm. In this paper, we exploit the structural properties of the concerned graph classes and obtain a kernel for SBVD with O(k²) vertices and FPT algorithms for SBVD and STVD with running times O⋆(2.3028^k) and O⋆(2.7913^k), respectively. We further give improved approximation algorithms for SBVD and STVD. [ABSTRACT FROM AUTHOR]
- Published
- 2020
22. Quantum algorithm for the multicollision problem.
- Author
- Hosoyamada, Akinori, Sasaki, Yu, Tani, Seiichiro, and Xagawa, Keita
- Subjects
- *ALGORITHMS, *RANDOM functions (Mathematics), *SET functions, *CRYPTOGRAPHY
- Abstract
The current paper presents a new quantum algorithm for finding multicollisions, often denoted by ℓ-collisions, where an ℓ-collision for a function is a set of ℓ distinct inputs that are mapped by the function to the same value. In cryptology, it is important to study how many queries are required to find an ℓ-collision for a random function whose domain is larger than its range. However, the problem of finding ℓ-collisions for random functions has not received much attention in the quantum setting. The tight bound of quantum query complexity for finding a 2-collision of a random function has been revealed to be Θ(N^{1/3}), where N is the size of the range of the function, but neither lower nor upper bounds are known for general ℓ-collisions. The paper first integrates the results from existing research to derive several new observations, e.g., ℓ-collisions can be generated only with O(N^{1/2}) quantum queries for any integer constant ℓ. It then provides a quantum algorithm that finds an ℓ-collision for a random function with average quantum query complexity O(N^{(2^{ℓ−1}−1)/(2^ℓ−1)}), which matches the tight bound of Θ(N^{1/3}) for ℓ = 2 and improves upon the known bounds, including the above simple bound of O(N^{1/2}). More generally, the algorithm achieves average quantum query complexity O(c_N · N^{(2^{ℓ−1}−1)/(2^ℓ−1)}), and runs over Õ(c_N · N^{(2^{ℓ−1}−1)/(2^ℓ−1)}) qubits in Õ(c_N · N^{(2^{ℓ−1}−1)/(2^ℓ−1)}) expected time for a random function F : X → Y such that |X| ≥ ℓ · |Y|/c_N for any 1 ≤ c_N ∈ o(N^{1/(2^ℓ−1)}), where it is assumed that QRAM is available. With the same query complexity, it is actually able to find a multiclaw for random functions, which is harder to find than a multicollision. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
23. Deterministic normal position transformation and its applications.
- Author
-
M.-Alizadeh, Benyamin and Hashemi, Amir
- Subjects
- *
DETERMINISTIC algorithms , *ALGORITHMS , *LINEAR algebra , *GROBNER bases , *POLYNOMIALS - Abstract
In this paper, we address the problem of transforming an ideal into normal position. We present a deterministic algorithm (based on linear algebra techniques) that finds a suitable linear change of variables to transform a given zero-dimensional (not necessarily radical) ideal into normal position. The main idea of this algorithm is to achieve this position via a sequence of elementary linear changes; i.e., each change involves only two variables. Furthermore, we give a complete complexity analysis of all the algorithms described in this paper, based on a sharp upper bound for the maximal number of required elementary linear changes in the special case of bivariate polynomial ideals. As an application of this investigation, we conclude the paper by giving a sharper complexity bound for computing a primary decomposition of a zero-dimensional ideal. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
24. Hardness results for three kinds of colored connections of graphs.
- Author
-
Huang, Zhong and Li, Xueliang
- Subjects
- *
GRAPH connectivity , *HARDNESS , *COMPUTATIONAL complexity , *CHROMATIC polynomial , *ALGORITHMS - Abstract
The concept of rainbow connection number of a graph was introduced by Chartrand et al. in 2008. Inspired by this concept, other colored versions of connectivity in graphs were introduced, such as the monochromatic connection number by Caro and Yuster in 2011, the proper connection number by Borozan et al. in 2012, and the conflict-free connection number by Czap et al. in 2018, as well as some other variants of connection numbers later on. Chakraborty et al. proved that computing the rainbow connection number of a graph is NP-hard. For a long time, attempts have been made to determine the computational complexity of the monochromatic connection number, the proper connection number and the conflict-free connection number of a graph, but the question remained open. Only the strong versions of these parameters, i.e., the strong proper connection number and the strong conflict-free connection number, were known to be NP-hard to compute. In this paper, we prove that computing each of the monochromatic connection number, the proper connection number and the conflict-free connection number of a graph is NP-hard. This solves a long-standing problem in this field, raised in many workshop talks and papers. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
25. A fast algorithm for maximizing a non-monotone DR-submodular integer lattice function.
- Author
-
Nong, Qingqin, Fang, Jiazhu, Gong, Suning, Feng, Yan, and Qu, Xiaoying
- Subjects
- *
ALGORITHMS , *INTEGERS , *APPROXIMATION algorithms - Abstract
In this paper we consider the problem of maximizing a non-monotone and non-negative DR-submodular function on a bounded integer lattice [B⃗] = {(x_1, …, x_n) ∈ Z_+^n : 0 ≤ x_k ≤ B_k, ∀ 1 ≤ k ≤ n} without any constraint, where B⃗ = (B_1, …, B_n) ∈ Z_+^n. We design an algorithm for the problem and measure its performance by its approximation ratio and the number of value oracle queries it needs, the latter being the dominating term in the running time of an algorithm. It has been shown that, for the problem considered, any algorithm achieving an approximation ratio greater than 1/2 requires an exponential number of value oracle queries. In the literature there are two algorithms that reach the 1/2 approximation guarantee. The first algorithm needs O(n‖B⃗‖_∞) oracle queries. The second one reduces its number of oracle queries to O(n max{1, log ‖B⃗‖_∞}) but needs large storage. In this paper we present a randomized approximation algorithm that has an approximation guarantee of 1/2, makes O(n max{1, log ‖B⃗‖_∞}) oracle queries and does not need large storage, improving on the results in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
26. Exact algorithms for the repetition-bounded longest common subsequence problem.
- Author
-
Asahiro, Yuichi, Jansson, Jesper, Lin, Guohui, Miyano, Eiji, Ono, Hirotaka, and Utashima, Tadatoshi
- Subjects
- *
ALGORITHMS , *DYNAMIC programming - Abstract
• The Repetition-Bounded Longest Common Subsequence problem is introduced. • Exact, exponential-time algorithms are presented. • NP-hardness and APX-hardness results for the problem on restricted instances are shown. In this paper, we study exact, exponential-time algorithms for a variant of the classic Longest Common Subsequence problem called the Repetition-Bounded Longest Common Subsequence problem (RBLCS, for short): Let an alphabet S be a finite set of symbols and an occurrence constraint C_occ be a function C_occ : S → N, assigning an upper bound on the number of occurrences of each symbol in S. Given two sequences X and Y over the alphabet S and an occurrence constraint C_occ, the goal of RBLCS is to find a longest common subsequence of X and Y such that each symbol s ∈ S appears at most C_occ(s) times in the obtained subsequence. The special case where C_occ(s) = 1 for every symbol s ∈ S is known as the Repetition-Free Longest Common Subsequence problem (RFLCS) and has been studied previously; e.g., in [1], Adi et al. presented a simple (exponential-time) exact algorithm for RFLCS. However, they did not analyze its time complexity in detail, and to the best of our knowledge, there are no previous results on the running times of any exact algorithms for this problem. Without loss of generality, we assume that |X| ≤ |Y| and |X| = n. In this paper, we first propose a simpler algorithm for RFLCS based on the strategy used in [1] and show explicitly that its running time is O(1.44225^n). Next, we provide a dynamic programming (DP) based algorithm for RBLCS and prove that its running time is O(1.44225^n) for any occurrence constraint C_occ, and even less in certain special cases. In particular, for RFLCS, our DP-based algorithm runs in O(1.41422^n) time, which is faster than the previous one. Furthermore, we prove NP-hardness and APX-hardness results for RBLCS on restricted instances. [ABSTRACT FROM AUTHOR]
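To make the problem definition concrete, here is a brute-force memoized recursion that enforces the occurrence constraint; this is only an illustration of the RBLCS objective for small inputs, not the paper's DP algorithm:

```python
from functools import lru_cache

def rblcs(X, Y, c_occ):
    """Length of a longest common subsequence of X and Y in which each
    symbol s occurs at most c_occ[s] times. Exponential state space over
    the usage vector; for illustration only."""
    symbols = sorted(c_occ)

    @lru_cache(maxsize=None)
    def go(i, j, used):
        if i == len(X) or j == len(Y):
            return 0
        best = max(go(i + 1, j, used), go(i, j + 1, used))
        if X[i] == Y[j]:
            k = symbols.index(X[i])
            if used[k] < c_occ[X[i]]:          # respect the occurrence bound
                u = list(used); u[k] += 1
                best = max(best, 1 + go(i + 1, j + 1, tuple(u)))
        return best

    return go(0, 0, (0,) * len(symbols))

# With all bounds equal to 1 this is exactly RFLCS.
print(rblcs("abab", "baba", {"a": 1, "b": 1}))  # 2
```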
- Published
- 2020
- Full Text
- View/download PDF
27. A semantic relatedness preserved subset extraction method for language corpora based on pseudo-Boolean optimization.
- Author
-
Dong, Luobing, Guo, Qiumin, Wu, Weili, and Satpute, Meghana N.
- Subjects
- *
CORPORA , *ALGORITHMS , *CHARACTERISTIC functions , *BOOLEAN functions , *ARTIFICIAL intelligence , *LANGUAGE & languages - Abstract
As language corpora play an increasingly important role in Artificial Intelligence (AI) research, many extremely large corpora have been created. However, a larger corpus not only increases power and accuracy but also brings redundancy. Therefore, researchers have begun to emphasize the study of appropriate subset extraction methods. Due to the trade-off between data sufficiency and redundancy, a group of interesting and challenging problems has emerged, which we study in this paper: (1) How can the resulting subset include as much data as possible under some necessary constraints? (2) How can the potentially useful semantic relatedness in the original corpora be preserved while reducing the size of the corpora? For these two problems, existing work mainly focuses on methods that construct particular subsets for special usage, and is therefore limited in its focus. In this paper, we address the problems listed above. First, considering the cubic and binary semantic relatedness among tokens, we construct a general system model and formulate the combined problem as a cubic pseudo-Boolean optimization problem. Then, by analyzing the characteristics of the objective function, we transform the problem into the maximum flow problem on a corresponding graph. Third, we propose a new algorithm by introducing a discrete Lagrangian iteration method. We prove that the objective function is supermodular, which allows us to use fast minimum cut algorithms in each iteration step to obtain another fast algorithm. Finally, we experimentally validate our new algorithms on several randomly created corpora. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
28. Leader-based de-anonymization of an anonymous read/write memory.
- Author
-
Godard, Emmanuel, Imbs, Damien, Raynal, Michel, and Taubenfeld, Gadi
- Subjects
- *
MEMORY , *ALGORITHMS , *WORK values , *SYMMETRY breaking , *INTEGER programming - Abstract
A new notion of anonymity was recently introduced at PODC 2017, namely, anonymity on the names of the registers that define the shared memory. As an example, a shared register named A by a process p and a shared register named B by another process q may correspond to the very same shared register X, while the same name C may correspond to different shared registers for p and q. Considering an asynchronous n-process anonymous shared memory system, this paper is concerned with the de-anonymization of the memory, i.e., at the end of the execution of a de-anonymization algorithm, the processes must agree on the same name for each shared register, and different shared registers must have different names. To this end, the paper first addresses leader election in an anonymous memory system. Let n be the number of processes and m the size of the anonymous memory (total number of anonymous registers). It is first shown that there is no election algorithm when the number of anonymous registers is a multiple of n. Then, assuming m = αn + β, where α is a positive integer, three election algorithms are presented, which consider the cases β = 1, β = n − 1, and β ∈ M(n), where the set M(n) characterizes the values for which mutual exclusion can be solved despite memory anonymity. Once election is solved, a general (and simple) de-anonymization algorithm is presented, which takes as a subroutine any anonymous-memory leader election algorithm. Hence, any instance of this algorithm works for the values of m required by the selected underlying election algorithm. Like the underlying election algorithms, the de-anonymization algorithm is symmetric, in the sense that process identities can only be compared for equality. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
29. Approximation algorithm for (connected) bounded-degree deletion problem on unit disk graphs.
- Author
-
Liu, Pengcheng, Zhang, Zhao, and Huang, Xiaohui
- Subjects
- *
ALGORITHMS , *GRAPH connectivity , *ARBITRARY constants , *APPROXIMATION algorithms - Abstract
In this paper, we study the minimum (connected) k-bounded-degree node deletion problem (Min(C)kBDND). For a connected graph G, a constant k and a weight function w : V → R_+, a vertex set C ⊆ V(G) is a kBDND-set if the maximum degree of the graph G − C is at most k. If, furthermore, the subgraph of G induced by C is connected, then C is a CkBDND-set. The goal of MinWkBDND (resp. MinWCkBDND) is to find a kBDND-set (resp. CkBDND-set) of minimum weight. In this paper, we focus on their cardinality versions with w(v) ≡ 1 for all v ∈ V, denoted MinkBDND and MinCkBDND. This paper presents a (1 + ε)-approximation algorithm for MinkBDND and a 3.76-approximation algorithm for MinCkBDND on unit disk graphs, where 0 < ε < 1 is an arbitrary constant. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
30. Two-dimensional maximal repetitions.
- Author
-
Amir, Amihood, Landau, Gad M., Marcus, Shoshana, and Sokol, Dina
- Subjects
- *
ALGORITHMS - Abstract
Maximal repetitions or runs in strings have a wide array of applications and thus have been extensively studied. In this paper, we extend this notion to two dimensions, precisely defining a maximal 2D repetition. We provide initial bounds on the number of maximal 2D repetitions that can occur in an n × n array. The main contribution of this paper is the presentation of the first algorithm for locating all maximal 2D repetitions. The algorithm is efficient and straightforward, with runtime O(n^2 log n + ρ), where n^2 is the size of the input array and ρ is the number of maximal 2D repetitions in the output. [ABSTRACT FROM AUTHOR]
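For background, the 1D notion being generalized: a run is a maximal periodic substring x[i..j] whose smallest period p satisfies j − i + 1 ≥ 2p. A naive cubic-time enumerator, for illustration only (linear-time algorithms for 1D runs exist):

```python
def runs(s):
    """Return all maximal repetitions (runs) in s as (start, end, period),
    end exclusive, by brute force over start positions and periods."""
    n = len(s)
    found = set()
    for i in range(n):
        for p in range(1, (n - i) // 2 + 1):
            # extend the periodicity s[k] == s[k-p] as far right as possible
            j = i + p
            while j < n and s[j] == s[j - p]:
                j += 1
            if j - i >= 2 * p:                       # at least two full periods
                # left-maximality: the run cannot be extended to i-1
                if i == 0 or s[i - 1] != s[i - 1 + p]:
                    # record only the smallest period for this interval
                    if all((i, j, q) not in found for q in range(1, p)):
                        found.add((i, j, p))
    return sorted(found)

print(runs("aabaabaa"))  # [(0, 2, 1), (0, 8, 3), (3, 5, 1), (6, 8, 1)]
```

The classic example "aabaabaa" has exactly four runs: three squares "aa" and the whole string with period 3.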
- Published
- 2020
- Full Text
- View/download PDF
31. An O(log n) query time algorithm for reducing ϵ-NN to (c,r)-NN.
- Author
-
Ma, Hengzhao and Li, Jianzhong
- Subjects
- *
SPACETIME , *ALGORITHMS , *COMPUTER science - Abstract
The problem of reducing ϵ-NN to (c,r)-NN is very important in theoretical computer science and has attracted many research efforts. In this paper, a new algorithm for this problem is proposed which achieves O(log n) query time. Compared with the previous algorithms for the same problem, this query time is the lowest, which is the main contribution of this paper. The low query time complexity is achieved by raising the preprocessing time and space complexity. How to reduce the cost added to these two complexities is also discussed in this paper. [ABSTRACT FROM AUTHOR]
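The classic reduction this line of work improves on answers an approximate nearest-neighbor query by probing a (c,r)-near-neighbor decision oracle over a geometric ladder of radii. A sketch under simplifying assumptions (1D points, an exact brute-force oracle standing in for the real data structure, whose answers are monotone in r):

```python
import math

def make_cr_oracle(points, c):
    """Brute-force stand-in for a (c,r)-NN structure: given r, return some
    point within distance c*r of q if one exists within r, else None."""
    def oracle(q, r):
        best = min(points, key=lambda p: abs(p - q))
        return best if abs(best - q) <= r else None   # exact, hence monotone
    return oracle

def approx_nn(q, oracle, r_min, r_max, eps):
    """Find an approximate near neighbor by binary search for the smallest
    radius r_min*(1+eps)^i at which the oracle succeeds."""
    levels = int(math.ceil(math.log(r_max / r_min, 1 + eps)))
    lo, hi, answer = 0, levels, None
    while lo <= hi:
        mid = (lo + hi) // 2
        p = oracle(q, r_min * (1 + eps) ** mid)
        if p is not None:
            answer, hi = p, mid - 1
        else:
            lo = mid + 1
    return answer

pts = [1.0, 4.0, 9.0]
oracle = make_cr_oracle(pts, c=1.0)
print(approx_nn(3.0, oracle, r_min=0.1, r_max=16.0, eps=0.5))  # 4.0
```

This uses O(log(number of radius levels)) oracle probes per query; the paper's contribution concerns how such a reduction's query, preprocessing and space costs trade off.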
- Published
- 2020
- Full Text
- View/download PDF
32. Order based algorithms for the core maintenance problem on edge-weighted graphs.
- Author
-
Zhang, Feiteng, Liu, Bin, Liu, Zhenming, and Fang, Qizhi
- Subjects
- *
ALGORITHMS , *INTEGERS , *WEIGHTED graphs - Abstract
The cohesive subgraph k-core of a simple graph is a maximal connected subgraph with minimum degree δ ≥ k, where k ≥ 0 is an integer. Define the core number of a vertex w as the maximum k such that w is contained in a k-core. Our main concerns are the core decomposition problem, which is computing the core numbers of all vertices in a static graph, and the core maintenance problem, which is updating the core numbers in a dynamic graph. Although core numbers can be updated by rerunning a core decomposition algorithm, only a small fraction of the vertices' core numbers change after a single modification of the graph. Thus, it is desirable to update core numbers locally to reduce the cost. In this paper, we study the core maintenance problem on edge-weighted graphs using the vertex sequence called the k-order, ordered by the order in which the core decomposition algorithm removes vertices. We design core maintenance algorithms for inserting one edge at a time, together with a method for updating the k-order, which markedly reduce the search range and the time cost. For the removal case, we use the existing subcore algorithm for core maintenance and modify it with our k-order update method. Finally, we conduct extensive experiments to evaluate the effectiveness and efficiency of our algorithms, which show that the order-based algorithm outperforms existing algorithms for core maintenance on edge-weighted graphs. • We solve the core maintenance problem on edge-weighted graphs with a tool named the weighted k-order. • The weighted k-order, a vertex sequence, is defined on edge-weighted graphs for the first time. • We give the properties of vertices whose core numbers increase in the k-order for the insertion case on edge-weighted graphs. • An order-based core maintenance algorithm for the insertion case on edge-weighted graphs is given, which outperforms existing approaches. • We give methods to update the k-order after inserting or removing one edge, for the next round of maintenance.
[ABSTRACT FROM AUTHOR]
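The core numbers referred to above are computed on a static graph by the classic peeling (core decomposition) algorithm: repeatedly remove a minimum-degree vertex, and record the running maximum of removal degrees. A minimal unweighted sketch (the paper's contribution, maintaining these numbers incrementally on edge-weighted graphs, is not shown):

```python
import heapq

def core_numbers(adj):
    """Peeling algorithm for core decomposition. adj maps vertex -> set of
    neighbours. Uses lazy deletion: stale heap entries are skipped."""
    deg = {v: len(ns) for v, ns in adj.items()}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    removed, core, k = set(), {}, 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue                      # stale entry
        k = max(k, d)                     # core number never decreases
        core[v] = k
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    return core

# Triangle plus a pendant vertex: the triangle forms the 2-core.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(core_numbers(adj))  # {4: 1, 1: 2, 2: 2, 3: 2}
```

The removal order produced by this peeling is exactly the k-order the abstract builds on.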
- Published
- 2023
- Full Text
- View/download PDF
33. Compact suffix automata representations for searching long patterns.
- Author
-
Faro, Simone and Scafiti, Stefano
- Subjects
- *
SUFFIXES & prefixes (Grammar) , *ROBOTS , *ALGORITHMS - Abstract
Automata-based algorithms have always dominated the history of string matching, thanks to their intrinsic nature as recognizers. Among them, those based on the suffix automaton deserve special mention, because its use has always led to elegant and very efficient solutions in practice. In this paper, we present two novel string matching algorithms based on the suffix automaton which are particularly suitable for very long strings. The first algorithm simulates the suffix automaton through the well-known bit-parallelism approach, while the second makes use of an alternative non-standard simulation. We mainly focus on the second solution, which turns out to be more flexible in practice, and present some extensions of it. Experimental results suggest that both our solutions are very efficient in practical cases, scaling much better as the length of the pattern increases. Moreover, our second approach is also competitive with the most effective solutions available for the exact string matching problem. • We present two novel suffix-automaton-based algorithms specifically designed for very long strings. • We describe a generic non-standard approach for simulating the suffix automaton of a string. • Our first solution is a bit-parallel extension of the BNDM algorithm which can be encoded using few bits. • Our second solution is related to the occurrence of unique factors in the pattern. • Experimental results suggest that both our solutions are very efficient in practical cases. [ABSTRACT FROM AUTHOR]
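The bit-parallel simulation mentioned above is the idea behind the classical BNDM algorithm, which the first solution extends. A compact sketch of plain BNDM as background (not the paper's new algorithms), with one bit per pattern position:

```python
def bndm(text, pattern):
    """Backward Nondeterministic DAWG Matching: scan each window right to
    left, simulating the nondeterministic suffix automaton of the reversed
    pattern with bitwise operations."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    B = {}
    for i, ch in enumerate(reversed(pattern)):   # bit i stands for pattern[m-1-i]
        B[ch] = B.get(ch, 0) | (1 << i)
    hi, occ, pos = 1 << (m - 1), [], 0
    while pos <= n - m:
        j, last, D = m, m, (1 << m) - 1
        while D and j > 0:
            D &= B.get(text[pos + j - 1], 0)
            j -= 1
            if D & hi:              # a pattern prefix ends at this position
                if j > 0:
                    last = j        # remember longest proper prefix seen
                else:
                    occ.append(pos) # whole pattern matched
            D <<= 1
        pos += last                 # safe shift based on the prefix found
    return occ

print(bndm("xabcabc", "abc"))  # [1, 4]
```

The sublinear average behavior comes from the shift `pos += last`, which skips text characters whenever no pattern factor survives the backward scan.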
- Published
- 2023
- Full Text
- View/download PDF
34. Faster parameterized algorithms for two vertex deletion problems.
- Author
-
Tsur, Dekel
- Subjects
- *
ALGORITHMS , *GRAPH algorithms , *INTEGERS - Abstract
In the l-Path Vertex Cover problem (resp., the l-Component Order Connectivity problem) the input is an undirected graph G and an integer k. The goal is to decide whether there is a set of vertices of size at most k whose deletion from G results in a graph that does not contain a path with l vertices (resp., does not contain a connected component with at least l vertices). In this paper we give parameterized algorithms for l-Path Vertex Cover for l = 5, 6, 7, whose running times are O*(3.945^k), O*(4.947^k), and O*(5.951^k), respectively. We also give an algorithm for l-Component Order Connectivity whose running time is O*((l − 1 − ϵ_l)^k) for every l ≥ 4, where ϵ_l > 0 for every l. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
35. How majority-vote crossover and estimation-of-distribution algorithms cope with fitness valleys.
- Author
-
Witt, Carsten
- Subjects
- *
MATHEMATICAL analysis , *ALGORITHMS , *EVOLUTIONARY computation , *VALLEYS , *EVOLUTIONARY algorithms - Abstract
The benefits of using crossover in crossing fitness gaps have been studied extensively in evolutionary computation. Recent runtime results show that majority-vote crossover is particularly efficient at optimizing the well-known Jump benchmark function, which includes a fitness gap next to the global optimum. Estimation-of-distribution algorithms (EDAs), which use an implicit crossover, are also much more efficient on Jump than typical mutation-based algorithms. However, the allowed gap size for polynomial runtimes with EDAs is at most logarithmic in the problem dimension n. In this paper, we investigate variants of the Jump function where the gap is shifted and appears in the middle of the typical search trajectory. Such gaps can still be overcome efficiently in time O(n log n) by majority-vote crossover and an estimation-of-distribution algorithm, even for gap sizes almost n. However, if the global optimum is located in the gap instead of the usual all-ones string, majority-vote crossover would nevertheless approach the all-ones string and be highly inefficient. In sharp contrast, an EDA can still find such a shifted optimum efficiently. Thanks to a general property called fair sampling, the EDA will with high probability sample from almost every fitness level of the function, including levels in the gap, and sample the global optimum even though the overall search trajectory points towards the all-ones string. Finally, we derive limits on the gap size allowing efficient runtimes for the EDA. • We study two algorithms using majority-vote crossover and an estimation-of-distribution algorithm on modified Jump functions. • We derive theorems on the algorithms' runtimes using rigorous mathematical analyses. • All three algorithms can overcome the fitness gap of the Jump functions efficiently for moderate gap sizes. • All but the estimation-of-distribution algorithm usually fail to find an optimum located within the gap.
• The estimation-of-distribution algorithm is efficient since it samples fairly on all fitness levels towards the optimum. [ABSTRACT FROM AUTHOR]
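The Jump_k benchmark discussed above can be stated concisely. A minimal sketch using the common textbook definition (with n the bitstring length and k the gap size; the paper's shifted variants modify where the gap sits):

```python
def jump(x, k):
    """Textbook Jump_k fitness on a 0/1 list: rewards more ones, except in
    a gap of width k just below the all-ones optimum, where fitness drops
    and decreases with the number of ones (pointing away from the optimum)."""
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones

print(jump([1] * 8, 3))           # 11: the all-ones global optimum (k + n)
print(jump([1] * 7 + [0], 3))     # 1:  inside the gap (n - ones)
print(jump([1] * 5 + [0] * 3, 3)) # 8:  highest fitness level before the gap
```

Mutation-based algorithms must jump over the k-wide gap in one step, which is where crossover and fair sampling by EDAs pay off.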
- Published
- 2023
- Full Text
- View/download PDF
36. Execution trace sets for real computation.
- Author
-
Thompson, Declan
- Subjects
- *
ALGORITHMS , *MACHINERY - Abstract
• Analyses general algorithms as sets of sequences (tasks) over a domain. • Defines an equivalence relation on sets of sequences to recover algorithmic tasks. • Characterises the execution trace sets of Blum-Shub-Smale machines. • Compares trace-based approaches to algorithms with iterator-based accounts. Traditional models of computation and algorithms take a constructive approach, essentially considering a transition function over a given state space. Yet algorithms may also be considered in terms of the observed behaviour they generate. This paper marries these perspectives with a domain-independent, trace-based account of computational and algorithmic processes. We focus on the model of real computation due to Blum, Shub and Smale [3] and provide a precise characterisation of the execution trace sets generated by their machines. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
37. An algorithmic construction of union-intersection-bounded families.
- Author
-
Fernández, Marcel, Livieratos, John, and Martín, Sebastià
- Subjects
- *
FAMILY size , *FAMILIES , *POLYNOMIALS , *ALGORITHMS - Abstract
In this paper, we present lower bounds and algorithmic constructions of union-intersection-bounded families of sets. The lower bound is established using the Lovász Local Lemma and matches the best known bound for the size of union-intersection-bounded families of sets. We then use the variable framework for the Lovász Local Lemma to describe an algorithm that outputs explicit constructions attaining the lower bound. The algorithm has polynomial complexity in the number of points in the family. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Stand-up indulgent gathering on lines.
- Author
-
Bramas, Quentin, Kamei, Sayaka, Lamani, Anissa, and Tixeuil, Sébastien
- Subjects
- *
ROBOTS , *MULTIPLICITY (Mathematics) , *ALGORITHMS - Abstract
We consider a variant of the crash-fault gathering problem called stand-up indulgent gathering (SUIG). In this problem, a group of mobile robots must eventually gather at a single location, which is not known in advance. If no robots crash, they must all meet at the same location. However, if one or more robots crash at a single location, all non-crashed robots must eventually gather at that location. The SUIG problem was first introduced for robots operating in a two-dimensional continuous Euclidean space, with most solutions relying on the ability of robots to move a prescribed (real) distance at each time instant. In this paper, we investigate the SUIG problem for robots operating in a discrete universe (i.e., a graph) where they can only move one unit of distance (i.e., to an adjacent node) at each time instant. Specifically, we focus on line-shaped networks and characterize the solvability of the SUIG problem for oblivious robots without multiplicity detection. • Stand-up indulgent gathering is not solvable with semi-synchronous robots on line graphs. • Stand-up indulgent gathering is not solvable with synchronous robots starting from an edge-symmetric configuration. • We provide an algorithm solving stand-up indulgent gathering whenever it is solvable. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Linear time online algorithms for constructing linear-size suffix trie.
- Author
-
Hendrian, Diptarama, Takagi, Takuya, Inenaga, Shunsuke, Goto, Keisuke, and Funakoshi, Mitsuru
- Subjects
- *
ONLINE algorithms , *DATA structures , *SUFFIXES & prefixes (Grammar) , *ALGORITHMS , *SIGNS & symbols - Abstract
Suffix trees are fundamental data structures for various kinds of string processing. The suffix tree of a text string T of length n has O(n) nodes and edges, and the string label of each edge is encoded by a pair of positions in T. Thus, even after the tree is built, the input string T needs to be kept stored, and random access to T is still needed. The linear-size suffix tries (LSTs), proposed by Crochemore et al. [Linear-size suffix tries, TCS 638:171-178, 2016], are a "stand-alone" alternative to the suffix tree. Namely, the LST of an input text string T of length n occupies O(n) total space, and supports pattern matching and other tasks with the same efficiency as the suffix tree, without the need to store the input text string T. Crochemore et al. proposed an offline algorithm which transforms the suffix tree of T into the LST of T in O(n log σ) time and O(n) space, where σ is the alphabet size. In this paper, we present two types of online algorithms which "directly" construct the LST, from right to left and from left to right, without constructing the suffix tree as an intermediate structure. Both algorithms construct the LST incrementally as each new symbol is read, and do not access the previously read symbols. Both the right-to-left and the left-to-right construction algorithms work in O(n log σ) time and O(n) space. The main feature of our algorithms is that the input text string does not need to be stored. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Fine-grained polynomial functional encryption.
- Author
-
Zhu, Ziqi, Gong, Junqing, Wang, Yuyu, and Qian, Haifeng
- Subjects
- *
POLYNOMIALS , *CRYPTOGRAPHY , *ALGORITHMS - Abstract
In this work, we present fine-grained secure polynomial functional encryption (PFE) for degree d ≥ 1 over a field F: a ciphertext encrypts x ∈ F^n, a key is associated with a degree-d polynomial P, and decryption recovers P(x) ∈ F. Fine-grained cryptographic primitives are secure against a resource-bounded class of adversaries and can be computed by honest users with fewer resources than the adversaries. In this paper, we construct fine-grained PFE in two settings: (1) NC^1 PFE: Based on the worst-case assumption NC^1 ⊊ ⊕L/poly, we construct public-key polynomial functional encryption achieving (i) selective simulation-based security and (ii) static function-hiding against adversaries in NC^1, where all honest algorithms are computable in AC^0[2] and ciphertext sizes are O(n). (2) AC^0 PFE: We construct a private-key polynomial functional encryption scheme achieving unconditionally selective simulation-based security against adversaries in AC^0, where all honest algorithms are computable in AC^0 and ciphertext sizes are O(n). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Decision algorithms for reversibility of 1D cellular automata under reflective boundary conditions.
- Author
-
Ma, Junchi, Wang, Chen, Chen, Weilin, Lin, Defu, and Wang, Chao
- Subjects
- *
CELLULAR automata , *ALGORITHMS , *DECISION making - Abstract
Reversibility is one of the most significant properties of cellular automata (CA). In this paper, we focus on the reversibility of one-dimensional finite CA under reflective boundary conditions (RBC). We present two algorithms for deciding the reversibility of one-dimensional CA under RBC. Both algorithms work not only for linear rules but also for non-linear rules. The first algorithm determines what we call the "strict reversibility" of a CA. The second algorithm computes what we call the "reversibility function" of a CA; reversibility functions are proved to be periodic. Based on the algorithms, we list some experimental results on one-dimensional CA under RBC and analyse some features of this family of CA. [ABSTRACT FROM AUTHOR]
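For a finite CA, reversibility of a fixed-size configuration space is just injectivity of the global map, which can be checked by brute force on small instances. A sketch for elementary (radius-1, binary) rules; note that conventions for the reflective boundary vary across papers, and the mirror convention x[-1] = x[0], x[n] = x[n-1] used here is an illustrative choice, not necessarily the paper's:

```python
from itertools import product

def step(config, rule, n):
    """One step of an elementary CA (Wolfram rule numbering) under a
    reflective boundary taken as mirroring the edge cells."""
    ext = (config[0],) + config + (config[-1],)
    return tuple((rule >> (ext[i] * 4 + ext[i + 1] * 2 + ext[i + 2])) & 1
                 for i in range(n))

def is_reversible(rule, n):
    """Brute-force test: the global map on n cells is reversible iff it is
    injective on all 2^n configurations (exponential; illustration only)."""
    images = {step(c, rule, n) for c in product((0, 1), repeat=n)}
    return len(images) == 2 ** n

# Rule 204 is the identity rule (each cell copies itself), hence reversible.
print(is_reversible(204, 4))  # True
print(is_reversible(0, 4))    # False: everything maps to all-zeros
```

The paper's algorithms avoid this exponential enumeration; the reversibility function records, for each lattice size n, whether the global map is injective, and is shown to be periodic in n.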
- Published
- 2024
- Full Text
- View/download PDF
42. Optimal algorithms for preemptive two-agent scheduling on uniform parallel machines.
- Author
-
Gu, Manzhan, Liu, Peihai, and Lu, Xiwen
- Subjects
- *
PRODUCTION scheduling , *INDEPENDENT sets , *SCHEDULING , *MACHINERY , *ALGORITHMS - Abstract
This paper studies two scheduling problems involving two agents A and B , where each agent has an independent set of jobs and all jobs are to be processed preemptively on uniform parallel machines. The aims of the two problems are to minimize the objective function value of agent A with the makespan of agent B not exceeding a threshold T. In the first problem, there are m machines and the objective function of agent A is the makespan. In the second problem, there are two machines and the objective function of agent A is the total completion time. For each of the two problems, we propose an optimal algorithm and obtain some extended results. [ABSTRACT FROM AUTHOR]
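For background on the single-agent building block: on uniform machines, the minimum preemptive makespan has a classical closed form (due to Gonzalez and Sahni), the larger of the total-work-to-total-speed ratio and the worst ratio of the k largest jobs to the k fastest machines. A sketch of that textbook bound (not the paper's two-agent algorithms):

```python
def preemptive_makespan(jobs, speeds):
    """Optimal preemptive makespan on uniform machines:
    max( sum(jobs)/sum(speeds),
         max over k < m of (sum of k largest jobs)/(sum of k fastest speeds) )."""
    jobs = sorted(jobs, reverse=True)
    speeds = sorted(speeds, reverse=True)
    best = sum(jobs) / sum(speeds)
    for k in range(1, min(len(speeds), len(jobs))):
        best = max(best, sum(jobs[:k]) / sum(speeds[:k]))
    return best

# One long job forces the fastest machine to run it alone for 10/2 = 5 units.
print(preemptive_makespan([10, 2, 2], [2, 1]))  # 5.0
```

The two-agent problems studied in the paper constrain agent B's makespan by the threshold T while optimizing such objectives for agent A.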
- Published
- 2024
- Full Text
- View/download PDF
43. Delaunay property and proximity results of the L-algorithm for digital plane probing.
- Author
-
Lu, Jui-Ting, Roussillon, Tristan, Lachaud, Jacques-Olivier, and Coeurjolly, David
- Subjects
- *
GEOMETRIC surfaces , *POINT set theory , *ALGORITHMS - Abstract
When processing the geometry of digital surfaces (boundaries of voxel sets), linear local structures such as pieces of digital planes play an important role. To capture such geometrical features, plane-probing algorithms have demonstrated their strength: starting from an initial triangle, the digital structure is locally probed to expand the triangle, approximating the plane parameters more and more precisely (and converging to the exact parameters for infinite digital planes). Among the different plane-probing algorithms, the L-algorithm is a variant whose update process takes into account a generally larger neighborhood of points. We show in this paper that this algorithm has the advantage of guaranteeing the so-called Delaunay property of the set of probing points, which has interesting consequences: it provides a minimal basis of the plane and guarantees an as-local-as-possible computation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. An algorithm for the secure total domination problem in proper interval graphs.
- Author
-
Araki, Toru and Aita, Yasufumi
- Subjects
- *
POLYNOMIAL time algorithms , *DOMINATING set , *ALGORITHMS - Abstract
A subset S of vertices of G is a total dominating set if, for any vertex v, there is a vertex in S adjacent to v. A total dominating set S is a secure total dominating set if, for any vertex v ∉ S, there is a vertex u ∈ S such that uv is an edge and (S ∖ {u}) ∪ {v} is also a total dominating set. In this paper, we design an O(m)-time algorithm for computing a minimum secure total dominating set in a proper interval graph, where m is the number of edges. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Optimal analysis for bandit learning in matching markets with serial dictatorship.
- Author
-
Wang, Zilong and Li, Shuai
- Subjects
- *
TIME perspective , *COMPUTER science , *ROBBERS , *REGRET , *ALGORITHMS - Abstract
The problem of two-sided matching markets is well-studied in computer science and economics, owing to its diverse applications across numerous domains. Since market participants are usually uncertain about their preferences on various online matching platforms, an emerging line of research is dedicated to the online setting, where the participants on one side (players) learn their unknown preferences through multiple rounds of interaction with the other side (arms). Sankararaman et al. [23] provide an Ω(N log(T)/Δ² + K log(T)/Δ) regret lower bound for this problem under the serial dictatorship assumption, where N is the number of players, K (≥ N) is the number of arms, Δ is the minimum reward gap across players and arms, and T is the time horizon. Serial dictatorship assumes that all arms have the same preferences, which is common in reality when the participants on one side share a unified evaluation standard. Recently, the work of Kong and Li [10] proposed the ET-GS algorithm and achieved an O(K log(T)/Δ²) regret upper bound, the best upper bound attained so far. Nonetheless, a gap between the lower and upper bounds, ranging from N to K, persists. It remains unclear whether the lower bound or the upper bound needs to be improved. In this paper, we propose a multi-level successive selection algorithm that obtains an O(N log(T)/Δ² + K log(T)/Δ) regret bound when the market satisfies serial dictatorship. To the best of our knowledge, we are the first to propose an algorithm that matches the lower bound in the problem of matching markets with bandits. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
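Under serial dictatorship, all arms rank the players identically, so with *known* preferences the stable matching is computed by letting players pick their favorite remaining arm in the arms' common ranking order; the bandit problem is to recover this matching when player preferences must be learned from rewards. A small sketch of the offline matching step, with hypothetical names and list-based preference encoding of my own choosing:

```python
def serial_dictatorship_match(player_prefs, arm_ranking):
    """player_prefs[p]: arms ordered from most to least preferred by player p.
    arm_ranking: players ordered by the single ranking all arms share.
    Players choose in that order; each takes its favorite remaining arm."""
    available = set(range(len(player_prefs[0])))  # arm ids still unclaimed
    match = {}
    for p in arm_ranking:
        for a in player_prefs[p]:
            if a in available:
                match[p] = a
                available.remove(a)
                break
    return match
```

With two players who both favor arm 0 and arm ranking [1, 0], player 1 claims arm 0 first and player 0 falls back to its next choice, mirroring the "dictatorship" order.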
46. A deterministic approximation algorithm for metric triangle packing.
- Author
-
Zhao, Jingyang and Xiao, Mingyu
- Subjects
- *
APPROXIMATION algorithms , *DETERMINISTIC algorithms , *COMPLETE graphs , *ALGORITHMS , *TRIANGLES - Abstract
Given an edge-weighted metric complete graph with n vertices, the maximum weight metric triangle packing problem is to find a set of n/3 vertex-disjoint triangles that maximizes the total weight of all triangles in the packing. Several simple methods lead to a 2/3-approximation ratio, but this barrier is not easy to break. Chen et al. proposed a randomized approximation algorithm with an expected ratio of (0.66768 − ε) for any constant ε > 0. In this paper, we improve the approximation ratio to (0.66835 − ε). Furthermore, we can derandomize our algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
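As a point of reference for the problem statement, here is a naive greedy baseline (explicitly *not* one of the 2/3-approximation methods or the paper's algorithm): repeatedly take the heaviest triangle that is vertex-disjoint from those already chosen. On a complete graph this produces n/3 disjoint triangles. Names and the edge-weight dictionary encoding are my own.

```python
from itertools import combinations

def greedy_triangle_packing(weights, n):
    """weights[(u, v)] with u < v: edge weight. Repeatedly pick the heaviest
    remaining triangle that shares no vertex with triangles already chosen;
    on a complete graph with n divisible by 3 this yields n/3 triangles."""
    def w(u, v):
        return weights[(min(u, v), max(u, v))]
    triangles = sorted(
        combinations(range(n), 3),
        key=lambda t: w(t[0], t[1]) + w(t[0], t[2]) + w(t[1], t[2]),
        reverse=True)
    used, packing = set(), []
    for t in triangles:
        if not used.intersection(t):
            packing.append(t)
            used.update(t)
    return packing
```

This exhaustive enumeration is cubic in n and only meant to make the packing constraint concrete; the hard part the abstract addresses is squeezing the ratio above 2/3 on metric instances.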
47. Tight competitive analyses of online car-sharing problems.
- Author
-
Liang, Ya-Chun, Lai, Kuan-Yun, Chen, Ho-Lin, Iwama, Kazuo, and Liao, Chung-Shou
- Subjects
- *
CAR sharing , *ALGORITHMS , *AUTOMOBILES - Abstract
The online car-sharing problem finds many real-world applications. The problem, proposed by Luo, Erlebach and Xu in 2018, mainly focuses on an online model in which there are two locations, 0 and 1, and k total cars. Each request, which specifies its pick-up time and pick-up location (among 0 and 1; the other is the drop-off location), is released in each stage a fixed amount of time before its specified start (i.e. pick-up) time. The time between the booking (i.e. release) time and the start time is enough to move empty cars between 0 and 1 for relocation if they are not used in that stage. The model, called kS2L-F, assumes that requests in each stage arrive sequentially even when they share the same booking time, and the decision (accept or reject) must be made immediately. The goal is to accept as many requests as possible. Despite there being only two locations, the analysis does not seem easy, and the (tight) competitive ratio (CR) was previously known only to be 2 for k = 2 and 1.5 for a restricted value of k, i.e., a multiple of three. In this paper, we remove all the holes of unknown CRs; namely, we prove that the CR is 2k/(k + ⌊k/3⌋) for all k ≥ 2. Furthermore, if the algorithm can delay its decision until all requests have come in each stage, the CR improves to roughly 4/3. We can take this advantage even further; precisely, we can achieve a CR of (2 + R)/3 if the number of requests in each stage is at most Rk, where 1 ≤ R ≤ 2 and we do not have to know the value of R in advance. Finally, we demonstrate that randomization also helps to obtain (slightly) better CRs, and we prove lower bounds to show tightness. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
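The general formula CR = 2k/(k + ⌊k/3⌋) can be sanity-checked against the two special cases that were previously known; a small sketch (function name my own) using exact rational arithmetic:

```python
from fractions import Fraction

def ks2l_f_competitive_ratio(k):
    """Tight competitive ratio 2k / (k + floor(k/3)) for kS2L-F, k >= 2,
    as stated in the abstract."""
    return Fraction(2 * k, k + k // 3)
```

The formula recovers CR = 2 at k = 2 (where ⌊k/3⌋ = 0) and CR = 3/2 for every multiple of three, and it fills in the intermediate values, e.g. 8/5 at k = 4.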
48. Streaming algorithms for monotone non-submodular function maximization under a knapsack constraint on the integer lattice.
- Author
-
Tan, Jingjing, Wang, Fengmin, Ye, Weina, Zhang, Xiaoqing, and Zhou, Yang
- Subjects
- *
INTEGERS , *MAXIMAL functions , *BACKPACKS , *ALGORITHMS , *EXPECTATION-maximization algorithms - Abstract
The study of non-submodular maximization on the integer lattice is an important extension of submodular optimization. In this paper, we consider streaming algorithms for maximizing non-negative monotone non-submodular functions under a knapsack constraint on the integer lattice. We first design a two-pass StreamingKnapsack algorithm for this problem, using BinarySearch as a subroutine. By introducing the DR ratio γ and the weak DR ratio γ_w of the non-submodular objective function, we obtain an approximation ratio of min{γ²(1 − ε)/(2γ + 1), 1 − 1/γ_w^{2γ} − ε}, a total memory complexity of O(K log K/ε), and a total query complexity of O(log K · log(K/ε²)/ε) for each element. Then, we design a one-pass streaming algorithm that dynamically updates the maximal function value among unit vectors along with the currently arriving element. Finally, to decrease the memory complexity, we design an improved StreamingKnapsack algorithm and reduce the memory complexity to O(K/ε²). • A two-pass streaming algorithm is proposed for maximizing monotone non-submodular functions under a knapsack constraint on the integer lattice. • By introducing the DR ratio γ and the weak DR ratio γ_w, we obtain an approximation ratio of min{γ²(1 − ε)/(2γ + 1), 1 − 1/γ_w^{2γ} − ε} with memory O(K log K/ε). • A one-pass streaming algorithm is designed by dynamically updating the maximal function value among unit vectors. • We design an improved StreamingKnapsack algorithm and reduce the memory complexity to O(K/ε²). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
49. MUL-tree pruning for consistency and optimal reconciliation - complexity and algorithms.
- Author
-
Gascon, Mathieu, Dondi, Riccardo, and El-Mabrouk, Nadia
- Subjects
- *
RECONCILIATION , *TREE pruning , *GENETIC techniques , *ALGORITHMS , *GENE families , *PHYLOGENY - Abstract
A multi-labeled tree (or MUL-tree) is a rooted tree in which every leaf is labeled by an element from some set X, but more than one leaf may be labeled by the same element of X. MUL-trees have applications in many fields. In phylogenetics, they can represent the evolution of gene families, where genes are represented by the species they belong to; the non-uniqueness of leaf labels comes from the fact that a given genome may contain many paralogous genes (genes related through duplication). In this paper, we consider two problems related to the leaf-pruning (leaf removal) of MUL-trees leading to single-labeled trees. First, given a set of MUL-trees, the MUL-tree Set Pruning for Consistency (MULSETPC) problem asks for a pruning of each tree leading to a set of consistent trees, i.e. a collection of label-isomorphic single-labeled trees. Second, processing one tree at a time, the MUL-tree Pruning for Reconciliation (MULPR) problem asks for a pruning that minimizes a "reconciliation" cost, i.e. an evolutionary cost of the gene family through duplication, given a known species tree. We show that MULSETPC is NP-hard, that MULPR is W[2]-hard when parameterized by the duplication cost and NP-hard even when the number of occurrences of each label in a MUL-tree is bounded by two, and that the optimization version of MULPR is hard to approximate within factor (1 − o(1)) ln|X|. We then develop a polynomial-time heuristic for MULPR and show its accuracy by comparing it to a brute-force exact method on a set of gene trees from the Ensembl Genome Browser. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
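The defining property of a MUL-tree (some label occurring on more than one leaf) is easy to test once a tree representation is fixed. A minimal sketch, using a nested-tuple encoding of my own (leaves are label strings, internal nodes are tuples of subtrees):

```python
from collections import Counter

def leaf_labels(tree):
    """Collect leaf labels in left-to-right order; leaves are strings,
    internal nodes are tuples of child subtrees."""
    if isinstance(tree, str):
        return [tree]
    return [lab for child in tree for lab in leaf_labels(child)]

def is_mul_tree(tree):
    """True iff some label appears on more than one leaf, i.e. the tree is
    multi-labeled rather than single-labeled."""
    return any(c > 1 for c in Counter(leaf_labels(tree)).values())
```

The pruning problems in the abstract ask which of the duplicate-labeled leaves to delete so that the result is single-labeled, either consistently across a whole set of trees (MULSETPC) or so as to minimize a reconciliation cost against a species tree (MULPR).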
50. Loosely-stabilizing maximal independent set algorithms with unreliable communications.
- Author
-
Dong, Rongcheng, Sudo, Yuichi, Izumi, Taisuke, and Masuzawa, Toshimitsu
- Subjects
- *
INDEPENDENT sets , *ALGORITHMS - Abstract
Self-stabilization is a promising paradigm for designing highly adaptive distributed systems. However, it cannot be realized when the communication between processes is unreliable with some constant probability. To circumvent this impossibility, this paper adopts, for the first time, the concept of loose-stabilization, a practical alternative to self-stabilization, and proposes three systematic approaches to realize loose-stabilization in the atomic-state model with probabilistically erroneous communications, namely, the redundant-state approach, the step-up approach, and the repetition approach. Further, we apply these approaches to design three corresponding loosely-stabilizing algorithms for the maximal independent set problem. • Self-stabilizing solutions are precluded for most non-trivial problems in probabilistic error models. • Loose-stabilization is a relaxed variant of self-stabilization and practically equivalent to it. • Loose-stabilization can circumvent the impossibilities of self-stabilization in probabilistic error models. • We adopt the concept of loose-stabilization in the atomic-state model for the first time. • We give three loosely-stabilizing algorithms for the maximal independent set problem. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
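For readers unfamiliar with the target problem: a maximal independent set (MIS) is a set of pairwise non-adjacent vertices to which no further vertex can be added. A sequential greedy sketch (my own naming; the paper's contribution is distributed, loosely-stabilizing algorithms under unreliable communication, which this baseline does not attempt):

```python
def greedy_mis(graph):
    """Sequential greedy maximal independent set: scan vertices in order and
    add a vertex whenever none of its neighbors is already in the set.
    The result is independent and maximal, though not necessarily maximum."""
    mis = set()
    for v in graph:
        if not any(u in mis for u in graph[v]):
            mis.add(v)
    return mis
```

The distributed, fault-prone version of this problem is much harder: each process must decide membership from local information while messages are corrupted with constant probability, which is exactly the regime where strict self-stabilization is impossible and loose-stabilization becomes useful.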