10 results for "Subramani, K"
Search Results
2. Farkas Bounds on Horn Constraint Systems.
- Author
- Subramani, K., Wojciechowski, Piotr, and Velasquez, Alvaro
- Subjects
- OPERATIONS research, LINEAR systems, INTEGER programming
- Abstract
In this paper, we analyze the copy complexity of unsatisfiable Horn constraint systems under the ADD refutation system. Recall that a linear constraint of the form ∑_{i=1}^{n} a_i · x_i ≥ b is said to be a Horn constraint if all the a_i ∈ {0, 1, −1} and at most one of the a_i is positive. A conjunction of such constraints is called a Horn constraint system (HCS). Horn constraints arise in a number of domains including, but not limited to, program verification, power systems, econometrics, and operations research. The ADD refutation system is both sound and complete. Additionally, it is the simplest and most natural refutation system for refuting the feasibility of a system of linear constraints. The copy complexity of an infeasible linear constraint system (not necessarily Horn) in a refutation system is the minimum number of times each constraint needs to be replicated in order to obtain a read-once refutation. We show that for an HCS with n variables and m constraints, the copy complexity is at most 2^{n−1} in the ADD refutation system. Additionally, we analyze bounded-width HCSs from the perspective of copy complexity. Finally, we provide an empirical analysis of an integer programming formulation of the copy complexity problem in HCSs. (An extended abstract was published in FroCoS 2021 [26].) [ABSTRACT FROM AUTHOR]
- Published
- 2024
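The Horn condition defined in the abstract above can be sketched in a few lines. This is an illustrative check of our own, not code from the paper; the function name and list encoding are assumptions.

```python
# Illustrative sketch (not from the paper): test whether a coefficient
# vector for sum(a_i * x_i) >= b defines a Horn constraint, i.e. every
# a_i is in {0, 1, -1} and at most one a_i is positive.

def is_horn(coefficients):
    """Return True if the coefficient vector defines a Horn constraint."""
    if any(a not in (0, 1, -1) for a in coefficients):
        return False
    return sum(1 for a in coefficients if a == 1) <= 1
```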
3. Constrained read-once refutations in UTVPI constraint systems: A parallel perspective.
- Author
- Subramani, K. and Wojciechowski, Piotr
- Subjects
- LINEAR systems, BABY strollers, ALGORITHMS, OPERATIONS research
- Abstract
In this paper, we analyze two types of refutations for Unit Two Variable Per Inequality (UTVPI) constraints. A UTVPI constraint is a linear inequality of the form: $a_{i}\cdot x_{i}+a_{j} \cdot x_{j} \le b_{k}$ , where $a_{i},a_{j}\in \{0,1,-1\}$ and $b_{k} \in \mathbb{Z}$. A conjunction of such constraints is called a UTVPI constraint system (UCS) and can be represented in matrix form as: ${\bf A \cdot x \le b}$. UTVPI constraints are used in many domains including operations research and program verification. We focus on two variants of read-once refutation (ROR). An ROR is a refutation in which each constraint is used at most once. A literal-once refutation (LOR), a more restrictive form of ROR, is a refutation in which each literal ( $x_i$ or $-x_i$) is used at most once. First, we examine the constraint-required read-once refutation (CROR) problem and the constraint-required literal-once refutation (CLOR) problem. In both of these problems, we are given a set of constraints that must be used in the refutation. RORs and LORs are incomplete since not every system of linear constraints is guaranteed to have such a refutation. This is still true even when we restrict ourselves to UCSs. In this paper, we provide NC reductions between the CROR and CLOR problems in UCSs and the minimum weight perfect matching problem. The reductions used in this paper assume a CREW PRAM model of parallel computation. As a result, the reductions establish that, from the perspective of parallel algorithms, the CROR and CLOR problems in UCSs are equivalent to matching. In particular, if an NC algorithm exists for either of these problems, then there is an NC algorithm for matching. [ABSTRACT FROM AUTHOR]
- Published
- 2024
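The ROR/LOR distinction drawn in the abstract above can be made concrete with a small sketch. The tuple encoding of a constraint as ((var, coeff), ..., bound) is our assumption, not the paper's representation.

```python
from collections import Counter

# Hedged sketch: given the multiset of UTVPI constraints used by a
# refutation, check the read-once property (each constraint used at
# most once) and the stricter literal-once property (each literal
# x_i or -x_i used at most once).

def is_read_once(used_constraints):
    """Each constraint may appear at most once in the refutation."""
    return all(c <= 1 for c in Counter(used_constraints).values())

def is_literal_once(used_constraints):
    """Each literal (variable, sign) may appear at most once."""
    literal_counts = Counter(
        (var, coeff)
        for constraint in used_constraints
        for var, coeff in constraint[:-1]   # last entry is the bound
    )
    return all(c <= 1 for c in literal_counts.values())
```

Note that literal-once implies read-once but not conversely: two distinct constraints may share a literal.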
4. Integer Feasibility and Refutations in UTVPI Constraints Using Bit-Scaling.
- Author
- Subramani, K. and Wojciechowski, Piotr
- Subjects
- INTEGERS, LOGIC programming, OPERATIONS research
- Abstract
This paper is concerned with the design and analysis of a bit-scaling based algorithm for the problem of checking integer feasibility in a system of unit two variable per inequality (UTVPI) constraints (IF). The insights developed during algorithm design result in new techniques for extracting refutations of integer feasibility in such systems. Recall that a UTVPI constraint is a linear constraint of the form: a_i · x_i + a_j · x_j ≤ b_{ij}, where a_i, a_j ∈ {0, 1, −1} and b_{ij} ∈ ℤ. These constraints arise in a number of application domains including but not limited to program verification (array bounds checking and abstract interpretation), operations research (packing and covering), and logic programming. Over the years, several algorithms have been proposed for the IF problem. Most of these algorithms are based on two inference rules, viz., the transitive rule and the tightening rule. None of these algorithms is bit-scaling. In other words, the running times of these algorithms are parameterized only by the number of variables and the number of constraints in the UTVPI constraint system (UCS) and not by the sizes of the input constants. We introduce a novel algorithm for the IF problem, which is based on a collection of new insights. These insights are used to design a new bit-scaling algorithm for IF that runs in O(n · m · log₂ C) time, where n denotes the number of variables, m denotes the number of constraints, and C denotes the absolute value of the most negative constant defining the UCS. An interesting consequence of our research is the development of techniques for extracting refutations of integer infeasibility in UCSs that are linearly feasible. If the UCS is linearly feasible, then our algorithm creates a 2CNF formula. The UCS has an integer refutation (i.e., does not have a lattice point) if and only if the created 2CNF formula has a resolution refutation (i.e., is unsatisfiable). [ABSTRACT FROM AUTHOR]
- Published
- 2023
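The two classical inference rules named in the abstract above can be illustrated concretely. The (coefficient-dict, bound) encoding and function names below are our assumptions, not the paper's; the tightening step uses the standard integer-rounding argument for UTVPI systems.

```python
# Hedged sketch of the transitive and tightening rules for UTVPI
# constraints, written as (coeff_dict, bound) pairs.

def transitive(c1, c2, var):
    """Cancel `var`, which appears with opposite signs in c1 and c2:
    from a.x + v <= b and -v + c.x <= b', derive a.x + c.x <= b + b'.
    Assumes the two constraints share only `var`."""
    (co1, b1), (co2, b2) = c1, c2
    assert co1.get(var, 0) == -co2.get(var, 0) != 0
    merged = {v: a for v, a in {**co1, **co2}.items() if v != var}
    return merged, b1 + b2

def tighten(b1, b2):
    """From x + y <= b1 and x - y <= b2, summing gives 2x <= b1 + b2;
    over the integers this tightens to x <= floor((b1 + b2) / 2)."""
    return (b1 + b2) // 2
```

For example, tightening x + y ≤ 3 and x − y ≤ −2 gives 2x ≤ 1, hence x ≤ 0 for integer x.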
5. Polynomial time algorithms for optimal length tree-like refutations of linear infeasibility in UTVPI constraints.
- Author
- Wojciechowski, Piotr, Subramani, K., and Williamson, Matthew
- Subjects
- POLYNOMIAL time algorithms, WEIGHTED graphs, OPERATIONS research, DETERMINISTIC algorithms, NUMBER systems, ALGORITHMS, POLYNOMIAL approximation, LINEAR systems
- Abstract
In this paper, we propose several algorithms for determining an optimal length tree-like refutation of linear feasibility in systems of Unit Two Variable Per Inequality (UTVPI) constraints. Given an infeasible UTVPI constraint system (UCS), a refutation certifies its infeasibility. The problem of finding refutations in a UCS finds applications in domains such as program verification and operations research. In general, there exist several types of refutations of feasibility in constraint systems. In this paper, we focus on a specific type of refutation called a tree-like refutation. Tree-like refutations are complete, in the sense that if a system of linear constraints is infeasible, then it must have a tree-like refutation. Associated with a refutation is its length, which equals the total number of constraints (accounting for the reuse of constraints) that are used to establish the infeasibility of the corresponding linear constraint system. Our goal in this paper is to find an optimal length tree-like refutation (OTLR) of an infeasible UCS. We describe two deterministic algorithms that find an OTLR of a UCS. If m is the number of constraints, n is the number of variables in the system, and k is the length of an OTLR, then our two algorithms run in O(n³ · log k) time and O(m · n · k) time, respectively. We also propose a true-biased, randomized algorithm for this problem. This algorithm runs in O(m · n · log n) time and returns the length of an OTLR with probability (1 − 1/e). We extend our work to weighted UTVPI constraint systems (WUCS), where a positive weight is associated with each UTVPI constraint. The weight of a tree-like refutation is defined as the sum of the weights of the constraints used by the refutation. The problem of finding a minimum weight refutation in a WUCS is called the weighted optimal length tree-like refutation (WOTLR) problem and is known to be NP-hard. We propose a pseudo-polynomial time algorithm for the WOTLR problem and convert it into a fully polynomial time approximation scheme (FPTAS). [ABSTRACT FROM AUTHOR]
- Published
- 2021
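The notion of refutation length with constraint reuse, as defined in the abstract above, can be sketched directly: sum a multiset of constraints and check that the result is the contradiction 0 ≤ c with c < 0. The list-of-dicts encoding and the example system are ours, not the paper's.

```python
# Hedged sketch: sum each (coeff_dict, bound) constraint, taken with
# its multiplicity, and return the combined constraint. A refutation is
# exhibited when all coefficients cancel and the bound is negative; its
# length is the sum of the multiplicities (reuse counts).

def summed(constraints, multiplicities):
    total, bound = {}, 0
    for (coeffs, b), k in zip(constraints, multiplicities):
        bound += k * b
        for var, a in coeffs.items():
            total[var] = total.get(var, 0) + k * a
    total = {v: a for v, a in total.items() if a != 0}  # drop cancelled vars
    return total, bound
```

For instance, with c1: x1 + x2 ≤ −1, c2: x1 − x2 ≤ 0, and c3: −x1 ≤ 0, the combination c1 + c2 + 2·c3 yields 0 ≤ −1, a refutation of length 4 that reuses c3.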
6. A Polynomial Time Algorithm for Read-Once Certification of Linear Infeasibility in UTVPI Constraints.
- Author
- Subramani, K. and Wojciechowski, Piotr
- Subjects
- NP-complete problems, POLYNOMIAL time algorithms, UNDIRECTED graphs, OPERATIONS research, LINEAR systems, CERTIFICATION
- Abstract
In this paper, we discuss the design and analysis of polynomial time algorithms for two problems associated with a linearly infeasible system of Unit Two Variable Per Inequality (UTVPI) constraints, viz., (a) the read-once refutation (ROR) problem, and (b) the literal-once refutation (LOR) problem. Recall that a UTVPI constraint is a linear relationship of the form: a_i · x_i + a_j · x_j ≤ b_{ij}, where a_i, a_j ∈ {0, 1, −1}. A conjunction of such constraints is called a UTVPI constraint system (UCS) and can be represented in matrix form as: A · x ≤ b. These constraints find applications in a host of domains including but not limited to operations research and program verification. For the linear system A · x ≤ b, a refutation is a collection of m variables y = [y_1, y_2, ..., y_m]^T ∈ ℝ₊^m such that y · A = 0 and y · b < 0. Such a refutation is guaranteed to exist for any infeasible linear program, as per Farkas' lemma. The refutation is said to be read-once if each y_i ∈ {0, 1}. Read-once refutations are incomplete in that their existence is not guaranteed for infeasible linear programs in general. Indeed, they are not complete even for UCSs. Hence, the question of whether an arbitrary UCS has an ROR is both interesting and non-trivial. In this paper, we reduce this problem to the problem of computing a minimum weight perfect matching (MWPM) in an undirected graph. This transformation results in an algorithm that runs in time polynomial in the size of the input UCS. Additionally, we design a polynomial time algorithm (also via a reduction to the MWPM problem) for a variant of read-once refutation called literal-once refutation. The advantage of reducing our problems to the MWPM problem is that we can leverage recent advances in algorithm design for the MWPM problem towards solving the ROR and LOR problems in UCSs. Finally, we show that another variant of the read-once refutation problem, called the non-literal read-once refutation (NLROR) problem, is NP-complete in UCSs. [ABSTRACT FROM AUTHOR]
- Published
- 2019
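The Farkas-style certificate defined in the abstract above is easy to verify mechanically. The following sketch uses plain lists rather than a matrix library, and the example system is ours; it checks exactly the conditions y ∈ {0,1}^m, y · A = 0, and y · b < 0.

```python
# Hedged sketch: verify a read-once refutation of A.x <= b as defined
# in the abstract: y in {0,1}^m with y.A = 0 (componentwise) and
# y.b < 0 certifies infeasibility.

def is_read_once_refutation(A, b, y):
    if any(yi not in (0, 1) for yi in y):
        return False
    n = len(A[0])
    # y.A: combine the rows of A selected by y.
    combo = [sum(y[i] * A[i][j] for i in range(len(A))) for j in range(n)]
    return all(c == 0 for c in combo) and sum(yi * bi for yi, bi in zip(y, b)) < 0
```

For example, the system x1 − x2 ≤ −1, x2 − x1 ≤ −1 is infeasible: summing both constraints (y = [1, 1]) yields 0 ≤ −2.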
7. Improved algorithms for optimal length resolution refutation in difference constraint systems.
- Author
- Subramani, K., Williamson, Matthew, and Gu, Xiaofeng
- Subjects
- CONSTRAINT algorithms, INTEGRAL domains, PROOF theory, REAL time scheduling (Computer science), OPERATIONS research
- Abstract
This paper is concerned with the design and analysis of improved algorithms for determining the optimal length resolution refutation (OLRR) of a system of difference constraints over an integral domain. The problem of finding short explanations for unsatisfiable Difference Constraint Systems (DCS) finds applications in a number of design domains including program verification, proof theory, real-time scheduling, and operations research. These explanations have also been called 'certificates' and 'refutations' in the literature. This problem was first studied in Subramani (J Autom Reason 43(2):121-137), wherein the first polynomial time algorithm was proposed. In this paper, we propose two new strongly polynomial algorithms which improve on the existing time bound. Our first algorithm, which we call the edge progression approach, runs in O(n · k + m · n · k) time, while our second algorithm, which we call the edge relaxation approach, runs in O(m · n · k) time, where m is the number of constraints in the DCS, n is the number of program variables, and k denotes the length of the shortest refutation. We conducted an extensive empirical analysis of the three OLRR algorithms discussed in this paper. Our experiments indicate that in the case of sparse graphs, the new algorithms discussed in this paper are superior to the algorithm in Subramani (J Autom Reason 43(2):121-137). Likewise, in the case of dense graphs, the approach in Subramani (J Autom Reason 43(2):121-137) is superior to the algorithms described in this paper. One surprising observation is the superiority of the edge relaxation algorithm over the edge progression algorithm in all cases, although both algorithms have the same asymptotic time complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2013
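The standard correspondence underlying OLRR is worth making explicit: a difference constraint x_j − x_i ≤ b becomes an edge i → j of weight b in the constraint graph, and the system is infeasible exactly when that graph has a negative cost cycle. The sketch below detects such a cycle with textbook Bellman–Ford relaxation; it is not one of the paper's OLRR algorithms.

```python
# Hedged sketch: negative-cycle detection on the constraint graph of a
# difference constraint system. edges holds (i, j, b) triples for
# constraints x_j - x_i <= b.

def has_negative_cycle(num_vars, edges):
    dist = [0] * num_vars            # all-zero start acts as a virtual source
    for _ in range(num_vars - 1):
        for i, j, b in edges:
            if dist[i] + b < dist[j]:
                dist[j] = dist[i] + b
    # One extra pass: any further improvement implies a negative cycle.
    return any(dist[i] + b < dist[j] for i, j, b in edges)
```

For instance, x2 − x1 ≤ −1 together with x1 − x2 ≤ −1 forms a cycle of total weight −2, so the system is infeasible.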
8. Two-level heaps: a new priority queue structure with applications to the single source shortest path problem.
- Author
- Subramani, K. and Madduri, Kamesh
- Subjects
- OPERATIONS research, POLYNOMIALS, COMPUTER science, ALGORITHMS, CHARTS, diagrams, etc.
- Abstract
The single source shortest paths problem with positive edge weights (SSSPP) is one of the more widely studied problems in operations research and theoretical computer science, on account of its wide applicability to practical situations. This problem was first solved in polynomial time by Dijkstra, who showed that by repeatedly extracting the vertex with the smallest distance from the source and relaxing its outgoing edges, the shortest path to each vertex is obtained. Variations of this general theme have led to a number of algorithms which work well in practice. At the heart of a Dijkstra implementation is the technique used to implement a priority queue. It is well known that Dijkstra's approach requires Ω(n · log n) steps on a graph having n vertices, since it essentially sorts vertices based on their distances from the source. Accordingly, the fastest implementation of Dijkstra's algorithm on a graph with n vertices and m edges must take Ω(m + n · log n) time, and consequently, the Dijkstra procedure for SSSPP using Fibonacci heaps is optimal in the comparison-based model. In this paper, we introduce a new data structure to implement priority queues called the two-level heap (TLH) and a new variant of Dijkstra's algorithm called Phased Dijkstra. We contrast the performance of Dijkstra's algorithm (both the simple and the phased variants) using a number of data structures to implement the priority queue and empirically establish that TLHs are far superior to Fibonacci heaps on every graph family considered. It is to be noted that our profiling includes both sparse and dense graphs. [ABSTRACT FROM AUTHOR]
- Published
- 2010
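The priority-queue-driven loop described in the abstract above is the familiar textbook scheme. The sketch below uses Python's built-in binary heap (`heapq`), not the paper's two-level heap or Phased Dijkstra; the adjacency-dict encoding is our assumption.

```python
import heapq

# Hedged sketch of Dijkstra's algorithm with a binary-heap priority
# queue: repeatedly pop the closest unsettled vertex and relax its
# outgoing edges. adj maps node -> list of (neighbor, positive_weight).

def dijkstra(adj, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                  # stale queue entry, skip
        for v, w in adj.get(u, ()):
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist
```

Because a binary heap supports insert and extract-min in O(log n), this implementation runs in O((m + n) · log n) time, close to the Ω(m + n · log n) comparison-based bound cited above.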
9. Space–time tradeoffs in negative cycle detection – An empirical analysis of the Stressing Algorithm
- Author
- Subramani, K., Tauras, C., and Madduri, K.
- Subjects
- COMPUTER science, EMPIRICAL research, ALGORITHMS, COST analysis, OPERATIONS research
- Abstract
This paper discusses space–time tradeoffs associated with algorithms for the problem of detecting negative cost cycles in networks (NCCD). NCCD is one of the more ubiquitous problems in computer science and operations research, with applications ranging from program verification and real-time scheduling to image segmentation and shortest path identification. Typical algorithmic analysis, whether theoretical or empirical, focuses almost exclusively on the running time of the algorithm. However, there exist applications in which space is just as important a parameter as time. This is especially true when the problem instances are very large, as is the case in program verification. Consequently, an algorithm that minimizes running time while ignoring the space overhead may be impractical. In this paper, we analyze a number of the more common algorithms for NCCD from the perspectives of both time and space, with a view towards providing a space–time tradeoff for the practitioner. All the algorithms discussed in this paper (with the exception of Network Simplex) run in the same asymptotic time on a network with m arcs and n vertices; however, their space requirements range from the smallest (the Stressing Algorithm) to the largest (all the Bellman–Ford and Network Simplex variants). Our empirical results demonstrate that in the cases where space is paramount, the Stressing Algorithm is a useful alternative to the Bellman–Ford variants. [Copyright Elsevier]
- Published
- 2010
10. Computing inversion pair cardinality through partition-based sorting.
- Author
- Subramani, K.
- Subjects
- ALGORITHMS, LITERATURE, OPERATIONS research, SYSTEMS theory, INDUSTRIAL engineering, ALGEBRA
- Abstract
In this paper, we introduce a new randomized, partition-based algorithm for the problem of computing the number of inversion pairs in an unsorted array of n numbers. The algorithm runs in expected time O(n · log n) and uses O(n) extra space. The expected time analysis of the new algorithm is different from the analyses existing in the literature, in that it explicitly uses inversion pairs. The problem of determining the inversion pair cardinality of an array finds applications in a number of design domains, including but not limited to real-time scheduling and operations research. At the heart of our algorithm is a new inversion pair conserving partition procedure that is different from existing partition procedures such as Hoare-partition and Lomuto-partition. Although the algorithm is not fully adaptive, we believe that it is the first step towards the design of an adaptive, partition-based sorting algorithm whose running time is proportional to the number of inversion pairs in the input. [ABSTRACT FROM AUTHOR]
- Published
- 2008
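For contrast with the partition-based method in the abstract above, the classical merge-sort inversion counter also runs in O(n · log n); the sketch below is that textbook algorithm, not the paper's partition procedure.

```python
# Hedged sketch: count pairs i < j with arr[i] > arr[j] while merge
# sorting. Whenever an element of the right half is emitted before the
# remaining left-half elements, each of those elements forms an
# inversion with it.

def count_inversions(arr):
    """Return (sorted_copy, inversion_count) for the input sequence."""
    if len(arr) <= 1:
        return list(arr), 0
    mid = len(arr) // 2
    left, inv_l = count_inversions(arr[:mid])
    right, inv_r = count_inversions(arr[mid:])
    merged, inv = [], inv_l + inv_r
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            inv += len(left) - i      # every remaining left element exceeds right[j]
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, inv
```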
Discovery Service for Jio Institute Digital Library