A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix
- Author
- Daniel Dadush, Sophie Huiberts, László A. Végh, Bento Natura (Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands)
- Subjects
FOS: Computer and information sciences, FOS: Mathematics, Computer Science - Data Structures and Algorithms (cs.DS), Mathematics - Optimization and Control (math.OC), Linear programming, Interior point methods, Layered least squares methods, Circuits, Circuit imbalances, Linear matroids, Condition number, Chi bar, Exact algorithm, Rank (linear algebra), Linear system, Invariant (mathematics), Function (mathematics), QA Mathematics, General Mathematics, Software
- Abstract
Following the breakthrough work of Tardos (Oper Res 34:250–256, 1986) in the bit-complexity model, Vavasis and Ye (Math Program 74(1):79–120, 1996) gave the first exact algorithm for linear programming in the real model of computation with running time depending only on the constraint matrix. For solving a linear program (LP) $\max \, c^\top x,\ Ax = b,\ x \ge 0,\ A \in \mathbb{R}^{m \times n}$, Vavasis and Ye developed a primal-dual interior point method using a ‘layered least squares’ (LLS) step, and showed that $O(n^{3.5} \log (\bar{\chi}_A+n))$ iterations suffice to solve (LP) exactly, where $\bar{\chi}_A$ is a condition measure controlling the size of solutions to linear systems related to A. Monteiro and Tsuchiya (SIAM J Optim 13(4):1054–1079, 2003), noting that the central path is invariant under rescalings of the columns of A and of c, asked whether there exists an LP algorithm depending instead on the measure $\bar{\chi}^*_A$, defined as the minimum $\bar{\chi}_{AD}$ value achievable by a column rescaling AD of A, and gave strong evidence that this should be the case. We resolve this open question affirmatively. Our first main contribution is an $O(m^2 n^2 + n^3)$ time algorithm which works on the linear matroid of A to compute a nearly optimal diagonal rescaling D satisfying $\bar{\chi}_{AD} \le n(\bar{\chi}_A^*)^3$. This algorithm also allows us to approximate the value of $\bar{\chi}_A$ up to a factor $n (\bar{\chi}_A^*)^2$. This result is in surprising contrast to that of Tunçel (Math Program 86(1):219–223, 1999), who showed NP-hardness of approximating $\bar{\chi}_A$ to within $2^{\mathrm{poly}(\mathrm{rank}(A))}$.
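The column rescaling AD at the heart of Monteiro and Tsuchiya's question acts on (LP) in a simple way: replacing A by AD maps each feasible x to $D^{-1}x$, preserving nonnegativity since D has a positive diagonal. A minimal sketch of this correspondence, with illustrative data not taken from the paper:

```python
from fractions import Fraction as F

# LP data in the abstract's standard form Ax = b, x >= 0.
# Column rescaling AD multiplies column j of A by d_j > 0;
# x is feasible for (A, b) iff y = D^{-1} x is feasible for (AD, b),
# and the objective c^T x becomes (Dc)^T y.
A = [[F(1), F(0), F(2)],
     [F(0), F(1), F(3)]]
d = [F(2), F(1, 2), F(5)]            # positive diagonal of D
x = [F(1), F(4), F(1)]               # a nonnegative point; define b = Ax
b = [sum(A[i][j] * x[j] for j in range(3)) for i in range(2)]

AD = [[A[i][j] * d[j] for j in range(3)] for i in range(2)]
y = [x[j] / d[j] for j in range(3)]  # the rescaled point D^{-1} x
b2 = [sum(AD[i][j] * y[j] for j in range(3)) for i in range(2)]
assert b2 == b and all(yj >= 0 for yj in y)  # (AD)(D^{-1} x) = Ax = b
```

The feasible region and central path are thus unchanged up to this coordinate change, which is why a dependence on $\bar{\chi}^*_A$ rather than $\bar{\chi}_A$ is the natural target.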
The key insight for our algorithm is to work with ratios $g_i/g_j$ of circuits of A, i.e., minimal linear dependencies $Ag=0$, which allow us to approximate the value of $\bar{\chi}_A^*$ by a maximum geometric mean cycle computation in what we call the ‘circuit ratio digraph’ of A. While this resolves Monteiro and Tsuchiya's question via appropriate preprocessing, it falls short of providing either a truly scaling-invariant algorithm or an improvement upon the base LLS analysis. In this vein, as our second main contribution we develop a scaling-invariant LLS algorithm, which uses and dynamically maintains improving estimates of the circuit ratio digraph, together with a refined potential-function-based analysis for LLS algorithms in general. With this analysis, we derive an improved $O(n^{2.5} \log (n)\log (\bar{\chi}^*_A+n))$ iteration bound for optimally solving (LP) using our algorithm. The same argument also yields a factor $n/\log n$ improvement on the iteration complexity bound of the original Vavasis–Ye algorithm.
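A maximum geometric mean cycle over positive edge ratios can be reduced, by taking logarithms, to a minimum mean cycle problem, solvable for example by Karp's algorithm. The sketch below is a generic illustration of that reduction on a toy digraph, not the paper's algorithm or its circuit ratio digraph construction; the function name and data are assumptions:

```python
import math

def max_geometric_mean_cycle(n, edges):
    """edges: list of (u, v, ratio) with ratio > 0 on nodes 0..n-1.
    Returns the maximum geometric mean of a directed cycle, or None
    if the digraph is acyclic. Maximizing the geometric mean of the
    ratios equals minimizing the arithmetic mean of -log(ratio),
    which Karp's minimum mean cycle algorithm computes."""
    INF = float("inf")
    w = [(u, v, -math.log(r)) for u, v, r in edges]
    # D[k][v]: minimum weight of a walk with exactly k edges ending
    # at v (D[0] = 0 everywhere plays the role of a zero-cost source).
    D = [[INF] * n for _ in range(n + 1)]
    D[0] = [0.0] * n
    for k in range(1, n + 1):
        for u, v, wt in w:
            if D[k - 1][u] + wt < D[k][v]:
                D[k][v] = D[k - 1][u] + wt
    best = INF
    for v in range(n):
        if D[n][v] < INF:
            val = max((D[n][v] - D[k][v]) / (n - k)
                      for k in range(n) if D[k][v] < INF)
            best = min(best, val)
    return None if best == INF else math.exp(-best)

# Toy digraph: the 2-cycle 0 -> 1 -> 0 has geometric mean
# sqrt(2 * 8) = 4, beating the cycle 1 -> 2 -> 1 with sqrt(9) = 3.
edges = [(0, 1, 2.0), (1, 0, 8.0), (1, 2, 1.0), (2, 1, 9.0)]
print(max_geometric_mean_cycle(3, edges))  # -> 4.0 (up to rounding)
```

The log-transform is exactly what makes a cycle's multiplicative value additive, so the standard min mean cycle machinery applies unchanged.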
- Published
- 2023