1,422 results for "Average-case complexity"
Search Results
2. Regular Expressions Avoiding Absorbing Patterns and the Significance of Uniform Distribution.
- Author
-
Broda, Sabine, Machiavelo, António, Moreira, Nelma, and Reis, Rogério
- Subjects
- ROBOTS, COMBINATORICS, GRAMMAR, ALGORITHMS, DENSITY
- Abstract
Although regular expressions do not correspond univocally to regular languages, it is still worthwhile to study their properties and algorithms. For the average-case analysis one often relies on uniform random generation using a specific grammar for regular expressions, which can represent regular languages with more or less redundancy. Generators that are uniform on the set of expressions are not necessarily uniform on the set of regular languages. Nevertheless, it is not straightforward that asymptotic estimates obtained by considering the whole set of regular expressions differ from those obtained using a more refined set that avoids some large class of equivalent expressions. In this paper we study a set of expressions that avoid a given absorbing pattern. It is shown that, although this set is significantly smaller than the standard one, the asymptotic average estimates for the size of the Glushkov automaton for these expressions do not differ from the standard case. Furthermore, for this set the asymptotic density of ε-accepting expressions is also the same as for the set of all standard regular expressions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
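The abstract above turns on uniform random generation of expressions from a grammar. As an illustrative sketch only (a toy grammar over {a, b}, not the grammar used in the paper), the standard counting approach draws an expression uniformly among all expressions of a given size:

```python
import random
from functools import lru_cache

# Toy grammar (illustrative only -- not the paper's grammar):
#   E -> a | b            size 1
#   E -> E*               size 1 + size(E)
#   E -> (E+E) | (E.E)    size 1 + sizes of both parts
ALPHABET = "ab"

@lru_cache(maxsize=None)
def count(n):
    """Number of expressions of size exactly n under the toy grammar."""
    if n < 1:
        return 0
    total = len(ALPHABET) if n == 1 else 0
    total += count(n - 1)                            # starred expressions
    total += 2 * sum(count(i) * count(n - 1 - i)     # unions and concatenations
                     for i in range(1, n - 1))
    return total

def sample(n, rng):
    """Draw uniformly among all expressions of size exactly n: pick a
    production with probability proportional to how many expressions it
    yields, then recurse with fresh randomness."""
    r = rng.randrange(count(n))
    if n == 1:
        return ALPHABET[r]
    if r < count(n - 1):
        return sample(n - 1, rng) + "*"
    r -= count(n - 1)
    for op in "+.":
        for i in range(1, n - 1):
            block = count(i) * count(n - 1 - i)
            if r < block:
                return "(" + sample(i, rng) + op + sample(n - 1 - i, rng) + ")"
            r -= block
```

Generators like this are uniform on expressions of each size, which, as the abstract notes, is not the same as uniform on the regular languages they denote.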
3. NON-BLACK-BOX WORST-CASE TO AVERAGE-CASE REDUCTIONS WITHIN NP.
- Author
-
SHUICHI HIRAHARA
- Subjects
- KOLMOGOROV complexity, CIRCUIT complexity, HARDNESS, HEURISTIC algorithms
- Abstract
There are significant obstacles to establishing an equivalence between the worst-case and average-case hardness of NP. Several results suggest that black-box worst-case to average-case reductions are not likely to be used for reducing any worst-case problem outside coNP to a distributional NP problem. This paper overcomes the barrier. We present the first non-black-box worst-case to average-case reduction from a problem conjectured to be outside coNP to a distributional NP problem. Specifically, we consider the minimum time-bounded Kolmogorov complexity problem (MINKT) and prove that there exists a zero-error randomized polynomial-time algorithm approximating the minimum time-bounded Kolmogorov complexity k within an additive error Õ(√k) if its average-case version admits an errorless heuristic polynomial-time algorithm. We observe that the approximation version of MINKT is Random 3SAT-hard, and more generally it is harder than avoiding any polynomial-time computable hitting set generator that extends its seed of length n by Õ(√n), which provides strong evidence that the approximation problem is outside coNP/poly and thus our reductions are non-black-box. Our reduction can be derandomized at the cost of the quality of the approximation. We also show that, given a truth table of size 2^n, approximating the minimum circuit size within a factor of 2^{(1-ε)n} is in BPP for some constant ε > 0 iff its average-case version is easy. Our results can be seen as a new approach for excluding Heuristica. In particular, proving NP-hardness of the approximation versions of MINKT or the minimum circuit size problem is sufficient for establishing an equivalence between the worst-case and average-case hardness of NP. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. Algorithms and Lower Bounds for Comparator Circuits from Shrinkage.
- Author
-
Cavalar, Bruno P. and Lu, Zhenjian
- Subjects
- COMPARATOR circuits, CIRCUIT complexity, COMPUTABLE functions, ALGORITHMS
- Abstract
In this paper, we initiate the study of average-case complexity and circuit analysis algorithms for comparator circuits. Departing from previous approaches, we exploit the technique of shrinkage under random restrictions to obtain a variety of new results for this model. Among them, we show: Average-case lower bounds. For every k = k(n) with k ≥ log n, there exists a polynomial-time computable function f_k on n bits such that, for every comparator circuit C with at most n^{1.5}/O(k·√(log n)) gates, we have Pr_{x ∈ {0,1}^n}[C(x) = f_k(x)] ≤ 1/2 + 1/2^{Ω(k)}. This average-case lower bound matches the worst-case lower bound of Gál and Robere by letting k = O(log n). #SAT algorithms. There is an algorithm that counts the number of satisfying assignments of a given comparator circuit with at most n^{1.5}/O(k·√(log n)) gates, in time 2^{n-k}·poly(n), for any k ≤ n/4. The running time is non-trivial (i.e., 2^n/n^{ω(1)}) when k = ω(log n). Pseudorandom generators and MCSP lower bounds. There is a pseudorandom generator of seed length s^{2/3+o(1)} that fools comparator circuits with s gates. Also, using this PRG, we obtain an n^{1.5-o(1)} lower bound for MCSP against comparator circuits. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
5. Is it possible to improve Yao’s XOR lemma using reductions that exploit the efficiency of their oracle?
- Author
-
Shaltiel, Ronen
- Abstract
Yao’s XOR lemma states that for every function f : {0,1}^k → {0,1}, if f has hardness 2/3 for P/poly (meaning that for every circuit C in P/poly, Pr[C(X) = f(X)] ≤ 2/3 on a uniform input X), then the task of computing f(X_1) ⊕ … ⊕ f(X_t) for sufficiently large t has hardness 1/2 + ε for P/poly. Known proofs of this lemma cannot achieve ε = 1/k^{ω(1)}, and even for ε = 1/k, we do not know how to replace P/poly by AC^0[parity] (the class of constant-depth circuits with the gates {and, or, not, parity} of unbounded fan-in). Grinberg, Shaltiel and Viola (FOCS 2018) (building on a sequence of earlier works) showed that these limitations cannot be circumvented by black-box reductions, namely by reductions Red^{(·)} that, given oracle access to a function D that violates the conclusion of Yao’s XOR lemma, implement a circuit that violates the assumption of Yao’s XOR lemma. There are a few known reductions in the related literature on worst-case to average-case reductions that are non-black-box. Specifically, the reductions of Gutfreund, Shaltiel and Ta-Shma (Computational Complexity 2007) and Hirahara (FOCS 2018) are “class reductions” that are only guaranteed to succeed when given oracle access to an oracle D from some efficient class of algorithms. These works seem to circumvent some black-box impossibility results. In this paper, we extend the previous limitations of Grinberg, Shaltiel and Viola to several types of class reductions, giving evidence that class reductions cannot yield the desired improvements in Yao’s XOR lemma. To the best of our knowledge, this is the first limitation on reductions for hardness amplification that applies to class reductions. Our technique imitates the previous lower bounds for black-box reductions, replacing the inefficient oracle used in that proof with an efficient one that is based on limited independence, and developing tools to deal with the technical difficulties that arise following this replacement. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
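For intuition about the parameters in Yao's XOR lemma: if a circuit's predictions were correct independently with probability p on each copy, the XOR of t copies could be predicted with probability exactly 1/2 + (2p−1)^t/2. The sketch below verifies that closed form by exact enumeration. The independence assumption is mine, for illustration only; the whole difficulty of the lemma is that an adversarial circuit need not behave independently across copies.

```python
from fractions import Fraction
from itertools import product

def xor_success_exact(p, t):
    """Probability that the XOR of t predictions is correct, when each
    prediction is independently correct with probability p, by exact
    enumeration over error patterns (the XOR is correct iff the number
    of errors is even)."""
    q = 1 - p
    total = Fraction(0)
    for pattern in product((0, 1), repeat=t):   # 1 = error on that copy
        errors = sum(pattern)
        if errors % 2 == 0:
            total += p**(t - errors) * q**errors
    return total

def xor_success_formula(p, t):
    """Closed form 1/2 + (2p-1)^t / 2: the advantage decays as (2p-1)^t."""
    return Fraction(1, 2) + (2*p - 1)**t / 2

# Cross-check for p = 2/3, the hardness in the lemma's hypothesis.
p = Fraction(2, 3)
for t in range(1, 6):
    assert xor_success_exact(p, t) == xor_success_formula(p, t)
```

Note that this computes only the information-theoretic decay under independence; the lemma and the reductions discussed above are about what efficient circuits can achieve without any such independence guarantee.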
6. Average-case complexity of the Whitehead problem for free groups.
- Author
-
Shpilrain, Vladimir
- Subjects
- FREE groups, PROBLEM solving
- Abstract
The worst-case complexity of group-theoretic algorithms has been studied for a long time. Generic-case complexity, or complexity on random inputs, was introduced and studied relatively recently. In this paper, we address the average-case complexity (i.e., the expected runtime) of algorithms that solve a well-known problem, the Whitehead problem in a free group, which is: given two elements of a free group, find out whether there is an automorphism that takes one element to the other. First we address a special case of the Whitehead problem, namely deciding if a given element of a free group is part of a free basis. We show that there is an algorithm that, on a cyclically reduced input word, solves this problem and has constant (with respect to the length of the input) average-case complexity. For the general Whitehead problem, we show that the classical Whitehead algorithm has linear average-case complexity if the rank of the free group is 2. We argue that the same should be true in a free group of any rank but point out obstacles to establishing this general result. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
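The algorithms in the abstract above operate on cyclically reduced words. A minimal sketch of free and cyclic reduction, encoding the generators of a free group as nonzero integers with negation as inversion (the Whitehead automorphism machinery itself is beyond a sketch):

```python
def free_reduce(word):
    """Freely reduce a word in a free group; letters are nonzero integers,
    with -x the inverse of x (e.g. 1 = a, -1 = a^-1, 2 = b, ...)."""
    out = []
    for x in word:
        if out and out[-1] == -x:
            out.pop()               # cancel an adjacent x, x^-1 pair
        else:
            out.append(x)
    return out

def cyclically_reduce(word):
    """Cyclic reduction: additionally cancel inverse pairs that wrap
    around the two ends of the (freely reduced) word."""
    w = free_reduce(word)
    while len(w) >= 2 and w[0] == -w[-1]:
        w = w[1:-1]
    return w
```

For example, `cyclically_reduce([2, 1, -2])` recovers the generator `[1]` from one of its conjugates.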
7. Generalization of the Subset Sum Problem and Cubic Forms.
- Author
-
Seliverstov, A. V.
- Subjects
- LINEAR equations, HEURISTIC algorithms, GENERALIZATION, PROBLEM solving, LINEAR systems, COMPUTATIONAL complexity, INTEGER programming
- Abstract
A new algorithm is proposed for deciding whether a system of linear equations has a binary solution over a field of zero characteristic. The algorithm is efficient under a certain constraint on the system of equations. This is a special case of an integer programming problem. In the extended version of the subset sum problem, the weights can be positive or negative. The problem under consideration is equivalent to the analysis of solution existence for several instances of this problem simultaneously. New sufficient conditions are found under which the computational complexity of almost all instances of this problem is polynomial. In fact, the algorithm checks the existence of a cubic hypersurface that passes through each vertex of the unit cube but does not intersect a given affine subspace. Several heuristic algorithms for solving this problem have been known previously. However, the new methods expand the solution possibilities. Although only the solution existence problem is considered in detail, binary search allows one to find a solution, if any. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
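To make the decision problem concrete, here is a brute-force reference check for binary solvability of a linear system. This is the exponential baseline, not Seliverstov's algorithm, which avoids this enumeration under the stated constraint on the system:

```python
from itertools import product

def has_binary_solution(A, b):
    """Does A x = b have a solution with x in {0,1}^n?  Brute-force
    reference check over all 2^n binary vectors; practical only for
    small n."""
    n = len(A[0])
    for x in product((0, 1), repeat=n):
        if all(sum(aij * xj for aij, xj in zip(row, x)) == bi
               for row, bi in zip(A, b)):
            return True
    return False

# Subset sum with weights of mixed sign, phrased as a single equation:
# 3*x1 - 2*x2 + 5*x3 - 1*x4 = 2 is satisfied by x = (1, 0, 0, 1).
```

As the abstract notes, once solvability can be decided, binary search on the variables recovers an actual solution.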
8. On the Implementation of Monotone Boolean Functions by Memoryless Programs.
- Author
-
Chashkin, A. V.
- Abstract
The average-case complexity of computing monotone Boolean functions by straight-line programs without memory with a conditional stop in the basis of all Boolean functions of at most two variables is considered. For the set of all monotone Boolean functions of n variables, Shannon-type upper and lower bounds for the average-case complexity are established as n → ∞. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
9. Super-linear gate and super-quadratic wire lower bounds for depth-two and depth-three threshold circuits
- Author
-
Kane, Daniel M and Williams, Ryan
- Subjects
- Applied Mathematics, Pure Mathematics, Mathematical Sciences, average-case complexity, circuit complexity, random restrictions, threshold circuits, Littlewood-Offord problems
- Published
- 2016
10. Average-case complexity of the Euclidean algorithm with a fixed polynomial over a finite field.
- Author
-
Giménez, Nardo, Matera, Guillermo, Pérez, Mariana, and Privitelli, Melina
- Subjects
- POLYNOMIALS, EUCLIDEAN algorithm, SYMMETRIC functions, FINITE fields
- Abstract
We analyse the behaviour of the Euclidean algorithm applied to pairs (g,f) of univariate nonconstant polynomials over a finite field $\mathbb{F}_{q}$ of q elements when the highest degree polynomial g is fixed. Considering all the elements f of fixed degree, we establish asymptotically optimal bounds in terms of q for the number of elements f that are relatively prime with g and for the average degree of $\gcd(g,f)$. We also exhibit asymptotically optimal bounds for the average-case complexity of the Euclidean algorithm applied to pairs (g,f) as above. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
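A minimal sketch of the algorithm being analysed: the Euclidean algorithm on univariate polynomials, restricted here to prime fields F_p for simplicity (the paper works over an arbitrary F_q). The division-step count returned alongside the gcd is the kind of quantity whose average over f the bounds address:

```python
def trim(f, p):
    """Reduce coefficients mod p and drop leading zeros; polynomials are
    coefficient lists, lowest degree first."""
    f = [c % p for c in f]
    while f and f[-1] == 0:
        f.pop()
    return f

def poly_mod(f, g, p):
    """Remainder of f divided by g over the prime field F_p."""
    f, g = trim(f, p), trim(g, p)
    inv = pow(g[-1], -1, p)          # inverse of g's leading coefficient
    while len(f) >= len(g):
        shift = len(f) - len(g)
        factor = f[-1] * inv % p
        for i, c in enumerate(g):
            f[i + shift] = (f[i + shift] - factor * c) % p
        f = trim(f, p)
    return f

def poly_gcd(g, f, p):
    """Euclidean algorithm; returns (gcd, number of division steps)."""
    steps = 0
    while f:
        g, f = f, poly_mod(g, f, p)
        steps += 1
    return trim(g, p), steps
```

For instance, over F_5 the pair g = T² − 1, f = T − 1 (encoded as `[4, 0, 1]` and `[4, 1]`) terminates after a single division with gcd an associate of T − 1.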
11. On the average-case complexity of Boolean functions under binomial distribution on their domains.
- Author
-
Chashkin, Aleksandr V.
- Subjects
- BINOMIAL distribution, BOOLEAN functions, DISTRIBUTION (Probability theory)
- Abstract
Given a binomial probability distribution on the n-dimensional Boolean cube, the complexity of implementation of Boolean functions by straight line programs with conditional stop is considered. The order, as n → ∞, of the average-case complexity of almost all n-place Boolean functions is established. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
12. Beating Treewidth for Average-Case Subgraph Isomorphism.
- Author
-
Rosenthal, Gregory
- Subjects
- CIRCUIT complexity, ALGORITHMS, ISOMORPHISM (Mathematics), GRAPH algorithms
- Abstract
For any fixed graph G, the subgraph isomorphism problem asks whether an n-vertex input graph has a subgraph isomorphic to G. A well-known algorithm of Alon et al. (J ACM 42(4):844–856, 1995. https://doi.org/10.1145/210332.210337) efficiently reduces this to the "colored" version of the problem, denoted G-SUB, and then solves G-SUB in time O(n^{tw(G)+1}) where tw(G) is the treewidth of G. Marx (Theory Comput 6(1):85–112, 2010. https://doi.org/10.4086/toc.2010.v006a005) conjectured that G-SUB requires time Ω(n^{const·tw(G)}) and, assuming the Exponential Time Hypothesis, proved a lower bound of Ω(n^{const·emb(G)}) for a certain graph parameter emb(G) ≥ Ω(tw(G)/log tw(G)). With respect to the size of AC^0 circuits solving G-SUB in the average case, Li et al. (SIAM J Comput 46(3):936–971, 2017. https://doi.org/10.1137/14099721X) proved (unconditional) upper and lower bounds of O(n^{2κ(G)+const}) and Ω(n^{κ(G)}) for a different graph parameter κ(G) ≥ Ω(tw(G)/log tw(G)). Our contributions are as follows. First, we prove that emb(G) is O(κ(G)) for all graphs G. Next, we show that κ(G) can be asymptotically less than tw(G); for example, if G is a hypercube then κ(G) is Θ(tw(G)/log tw(G)). This implies that the average-case complexity of G-SUB is n^{o(tw(G))} when G is a hypercube. Finally, we construct AC^0 circuits of size O(n^{κ(G)+const}) that solve G-SUB in the average case, closing the gap between the upper and lower bounds of Li et al. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
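For concreteness, here is the brute-force baseline for (uncolored) subgraph isomorphism that the treewidth-bounded algorithms above improve on: trying all injections of the pattern's vertices into the host graph, i.e. roughly n^k time for a k-vertex pattern.

```python
from itertools import permutations

def has_subgraph(n, G_edges, k, H_edges):
    """Does an n-vertex graph G contain a (not necessarily induced) copy
    of the k-vertex pattern H?  Brute force over all injections of H's
    vertices into G's vertices."""
    G = {frozenset(e) for e in G_edges}
    return any(all(frozenset((phi[u], phi[v])) in G for u, v in H_edges)
               for phi in permutations(range(n), k))

# Example host and pattern: a 5-cycle contains no triangle.
C5 = [(i, (i + 1) % 5) for i in range(5)]
triangle = [(0, 1), (1, 2), (2, 0)]
```

The dynamic-programming algorithms discussed above replace this n^k enumeration with run times governed by structural parameters of the pattern (treewidth, emb, κ).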
13. An Average-Case Lower Bound Against ACC⁰.
- Author
-
Chen, Ruiwen, Oliveira, Igor C., Santhanam, Rahul, Hutchison, David, Series Editor, Kanade, Takeo, Series Editor, Kittler, Josef, Series Editor, Kleinberg, Jon M., Series Editor, Mattern, Friedemann, Series Editor, Mitchell, John C., Series Editor, Naor, Moni, Series Editor, Pandu Rangan, C., Series Editor, Steffen, Bernhard, Series Editor, Terzopoulos, Demetri, Series Editor, Tygar, Doug, Series Editor, Weikum, Gerhard, Series Editor, Bender, Michael A., editor, Farach-Colton, Martín, editor, and Mosteiro, Miguel A., editor
- Published
- 2018
- Full Text
- View/download PDF
14. Can PPAD Hardness be Based on Standard Cryptographic Assumptions?
- Author
-
Rosen, Alon, Segev, Gil, and Shahaf, Ido
- Abstract
We consider the question of whether PPAD hardness can be based on standard cryptographic assumptions, such as the existence of one-way functions or public-key encryption. This question is particularly well-motivated in light of new devastating attacks on obfuscation candidates and their underlying building blocks, which are currently the only known source for PPAD hardness. Central in the study of obfuscation-based PPAD hardness is the sink-of-verifiable-line (SVL) problem, an intermediate step in constructing instances of the PPAD-complete problem source-or-sink. Within the framework of black-box reductions, we prove the following results: (i) average-case PPAD hardness (and even SVL hardness) does not imply any form of cryptographic hardness (not even one-way functions). Moreover, even when assuming the existence of one-way functions, average-case PPAD hardness (and, again, even SVL hardness) does not imply any public-key primitive. Thus, strong cryptographic assumptions (such as obfuscation-related ones) are not essential for average-case PPAD hardness. (ii) Average-case SVL hardness cannot be based either on standard cryptographic assumptions or on average-case PPAD hardness. In particular, average-case SVL hardness is not essential for average-case PPAD hardness. (iii) Any attempt for basing the average-case hardness of the PPAD-complete problem source-or-sink on standard cryptographic assumptions must result in instances with a nearly exponential number of solutions. This stands in striking contrast to the obfuscation-based approach, which results in instances having a unique solution. Taken together, our results imply that it may still be possible to base PPAD hardness on standard cryptographic assumptions, but any such black-box attempt must significantly deviate from the obfuscation-based approach: It cannot go through the SVL problem, and it must result in source-or-sink instances with a nearly exponential number of solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
15. Factorization patterns on nonlinear families of univariate polynomials over a finite field.
- Author
-
Matera, Guillermo, Pérez, Mariana, and Privitelli, Melina
- Abstract
We estimate the number |A_λ| of elements of a nonlinear family A of monic polynomials of F_q[T] of degree r having factorization pattern λ := 1^{λ_1} 2^{λ_2} ⋯ r^{λ_r}. We show that |A_λ| = T(λ) q^{r-m} + O(q^{r-m-1/2}), where T(λ) is the proportion of elements of the symmetric group on r elements with cycle pattern λ and m is the codimension of A. We provide explicit upper bounds for the constants underlying the O-notation in terms of λ and A with "good" behavior. We also apply these results to analyze the average-case complexity of the classical factorization algorithm restricted to A, showing that it behaves as well as in the general case. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
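The quantity T(λ) in the abstract, the proportion of permutations of S_r with cycle pattern λ, has the classical closed form 1/∏_j (j^{λ_j} · λ_j!). A small sketch that computes it and cross-checks the formula by enumerating S_4:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def T(lam):
    """Proportion of permutations in S_r with cycle pattern lam, where
    lam[j] is the number of cycles of length j: 1 / prod_j (j**l_j * l_j!)."""
    denom = 1
    for j, lj in lam.items():
        denom *= j**lj * factorial(lj)
    return Fraction(1, denom)

def cycle_pattern(perm):
    """Cycle pattern of a permutation of {0,...,r-1} given as a tuple."""
    seen, pattern = set(), {}
    for i in range(len(perm)):
        if i in seen:
            continue
        length, j = 0, i
        while j not in seen:
            seen.add(j)
            j = perm[j]
            length += 1
        pattern[length] = pattern.get(length, 0) + 1
    return pattern

# Cross-check in S_4: permutations made of two disjoint transpositions.
lam = {2: 2}
matches = sum(1 for p in permutations(range(4)) if cycle_pattern(p) == lam)
# matches / 4! equals T(lam) = 1/(2^2 * 2!) = 1/8
```

The theorem above says that, up to the O(q^{r-m-1/2}) error term, random polynomials in the family A exhibit each factorization pattern with exactly this permutation-theoretic frequency.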
16. Improved Learning from Kolmogorov Complexity
- Author
-
Goldberg, Halley and Kabanets, Valentine
- Abstract
Carmosino, Impagliazzo, Kabanets, and Kolokolova (CCC, 2016) showed that the existence of natural properties in the sense of Razborov and Rudich (JCSS, 1997) implies PAC learning algorithms in the sense of Valiant (Comm. ACM, 1984), for boolean functions in P/poly, under the uniform distribution and with membership queries. It is still an open problem to get from natural properties learning algorithms that do not rely on membership queries but rather use randomly drawn labeled examples. Natural properties may be understood as an average-case version of MCSP, the problem of deciding the minimum size of a circuit computing a given truth-table. Problems related to MCSP include those concerning time-bounded Kolmogorov complexity. MKTP, for example, asks for the KT-complexity of a given string. KT-complexity is a relaxation of circuit size, as it does away with the requirement that a short description of a string be interpreted as a boolean circuit. In this work, under assumptions of MKTP and the related problem MK^tP being easy on average, we get learning algorithms for boolean functions in P/poly that - work over any distribution D samplable by a family of polynomial-size circuits (given explicitly in the case of MKTP), - only use randomly drawn labeled examples from D, and - are agnostic (do not require the target function to belong to the hypothesis class). Our results build upon the recent work of Hirahara and Nanashima (FOCS, 2021) who showed similar learning consequences but under a stronger assumption that NP is easy on average.
- Published
- 2023
- Full Text
- View/download PDF
17. Graph Colouring Is Hard on Average for Polynomial Calculus and Nullstellensatz
- Author
-
Conneryd, Jonas, de Rezende, Susanna F., Nordström, Jakob, Pang, Shuo, and Risse, Kilian
- Abstract
We prove that polynomial calculus (and hence also Nullstellensatz) over any field requires linear degree to refute that sparse random regular graphs, as well as sparse Erdős-Rényi random graphs, are 3-colourable. Using the known relation between size and degree for polynomial calculus proofs, this implies strongly exponential lower bounds on proof size.
- Published
- 2023
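The statement being refuted is plain 3-colourability. As a point of contrast with the proof-complexity lower bound above, merely deciding it by brute force is a few lines (exponential in the number of vertices); the edge probability and seed in the random-graph helper are arbitrary choices for illustration:

```python
import random
from itertools import product

def is_3_colourable(n, edges):
    """Brute-force 3-colourability: try all 3^n colourings (small n only)."""
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(3), repeat=n))

def random_graph(n, p, seed=0):
    """Erdos-Renyi G(n, p) sample; parameters here are illustrative."""
    rng = random.Random(seed)
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < p]
```

The lower bound concerns something stronger than this search: the degree (and hence size) any polynomial calculus refutation must have to certify that no such colouring exists.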
18. Average-Case Optimal Approximate Circular String Matching
- Author
-
Barton, Carl, Iliopoulos, Costas S., Pissis, Solon P., Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Dediu, Adrian-Horia, editor, Formenti, Enrico, editor, Martín-Vide, Carlos, editor, and Truthe, Bianca, editor
- Published
- 2015
- Full Text
- View/download PDF
19. Condition Numbers and Iterative Algorithms
- Author
-
Bürgisser, Peter, Cucker, Felipe, Chenciner, Alain, Editor-in-chief, Coates, John, Editor-in-chief, Berger, Marcel, Series editor, Hitchin, Nigel J., Series editor, Kupiainen, Antti, Series editor, Lebeau, Gilles, Series editor, Ratner, Marina, Series editor, Serre, Denis, Series editor, Sloane, Neil J. A., Series editor, Vershik, Anatoly M., Series editor, Bürgisser, Peter, and Cucker, Felipe
- Published
- 2013
- Full Text
- View/download PDF
20. Learning Versus Pseudorandom Generators in Constant Parallel Time
- Author
-
Hirahara, Shuichi and Nanashima, Mikito
- Subjects
- Theory of computation → Cryptographic primitives, average-case complexity, polynomial-stretch pseudorandom generators in NC⁰, fixed-parameter tractability, PAC learning, Theory of computation → Boolean function learning, Parallel cryptography
A polynomial-stretch pseudorandom generator (PPRG) in NC⁰ (i.e., constant parallel time) is one of the most important cryptographic primitives, especially for constructing highly efficient cryptography and indistinguishability obfuscation. The celebrated work (Applebaum, Ishai, and Kushilevitz, SIAM Journal on Computing, 2006) on randomized encodings yields the characterization of sublinear-stretch pseudorandom generators in NC⁰ by the existence of logspace-computable one-way functions, but characterizing PPRGs in NC⁰ seems out of reach at present. Therefore, it is natural to ask which sort of hardness notion is essential for constructing PPRGs in NC⁰. Particularly, to the best of our knowledge, all the previously known candidates for PPRGs in NC⁰ follow only one framework based on Goldreich’s one-way function. In this paper, we present a new learning-theoretic characterization for PPRGs in NC⁰ and related classes. Specifically, we consider the average-case hardness of learning for well-studied classes in parameterized settings, where the number of samples is restricted to fixed-parameter tractable (FPT), and show that the following are equivalent: - The existence of (a collection of) PPRGs in NC⁰. - The average-case hardness of learning sparse 𝔽₂-polynomials on a sparse example distribution and an NC⁰-samplable target distribution (i.e., a distribution on target functions). - The average-case hardness of learning Fourier-sparse functions on a sparse example distribution and an NC⁰-samplable target distribution. - The average-case hardness of learning constant-depth parity decision trees on a sparse example distribution and an NC⁰-samplable target distribution. Furthermore, we characterize a (single) PPRG in parity-NC⁰ by the average-case hardness of learning constant-degree 𝔽₂-polynomials on a uniform example distribution with FPT samples. 
Based on our results, we propose new candidates for PPRGs in NC⁰ and related classes under a hardness assumption on a natural learning problem. An important property of PPRGs in NC⁰ constructed in our framework is that the output bits are computed by various predicates; thus, it seems to resist an attack that depends on a specific property of one fixed predicate. Conceptually, the main contribution of this study is to formalize a theory of FPT dualization of concept classes, which yields a meta-theorem for the first result. For the second result on PPRGs in parity-NC⁰, we use a different technique of pseudorandom 𝔽₂-polynomials., LIPIcs, Vol. 251, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023), pages 70:1-70:18
- Published
- 2023
- Full Text
- View/download PDF
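A hedged sketch of the kind of object being characterized: a Goldreich-style local generator in which every output bit is a constant-arity predicate of a few seed bits, i.e. an NC⁰ computation per output bit. The XOR-AND predicate and the random 5-uniform tap structure below are illustrative choices of mine, not a construction from the paper, and nothing here is claimed to be secure:

```python
import random

def local_prg(seed, taps):
    """Candidate local generator: each output bit reads exactly five seed
    positions and applies the XOR-AND predicate x1 ^ x2 ^ x3 ^ (x4 & x5)."""
    return [seed[a] ^ seed[b] ^ seed[c] ^ (seed[d] & seed[e])
            for (a, b, c, d, e) in taps]

def random_taps(n, m, rng):
    """Random 5-uniform hypergraph telling each output which bits to read."""
    return [tuple(rng.sample(range(n), 5)) for _ in range(m)]

rng = random.Random(0)
n, m = 64, 256        # polynomial stretch would need m to grow like n^(1+eps)
seed = [rng.randrange(2) for _ in range(n)]
output = local_prg(seed, random_taps(n, m, rng))
```

The paper's point, in these terms, is that the security of such generators is equivalent to the average-case hardness of natural FPT-sample learning problems, and that varying the predicate per output bit (unlike the single fixed predicate here) falls out of their framework naturally.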
21. Nondeterministic Interactive Refutations for Nearest Boolean Vector
- Author
-
Bogdanov, Andrej and Rosen, Alon
- Subjects
- lattice smoothing, Theory of computation → Interactive proof systems, Theory of computation → Cryptographic primitives, average-case complexity, nondeterministic refutation, Theory of computation → Problems, reductions and completeness, statistical zero-knowledge, complexity of statistical inference, Sherrington-Kirkpatrick Hamiltonian
Most n-dimensional subspaces 𝒜 of ℝ^m are Ω(√m)-far from the Boolean cube {-1, 1}^m when n < cm for some constant c > 0. How hard is it to certify that the Nearest Boolean Vector (NBV) is at least γ √m far from a given random 𝒜? Certifying NBV instances is relevant to the computational complexity of approximating the Sherrington-Kirkpatrick Hamiltonian, i.e. maximizing x^TAx over the Boolean cube for a matrix A sampled from the Gaussian Orthogonal Ensemble. The connection was discovered by Mohanty, Raghavendra, and Xu (STOC 2020). Improving on their work, Ghosh, Jeronimo, Jones, Potechin, and Rajendran (FOCS 2020) showed that certification is not possible in the sum-of-squares framework when m ≪ n^1.5, even with distance γ = 0. We present a non-deterministic interactive certification algorithm for NBV when m ≫ n log n and γ ≪ 1/mn^1.5. The algorithm is obtained by adapting a public-key encryption scheme of Ajtai and Dwork., LIPIcs, Vol. 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023), pages 28:1-28:14
- Published
- 2023
- Full Text
- View/download PDF
22. Ellipsoid Fitting up to a Constant
- Author
-
Hsieh, Jun-Ting, Kothari, Pravesh K., Potechin, Aaron, and Xu, Jeff
- Subjects
- FOS: Computer and information sciences, Computer Science - Computational Complexity, average-case complexity, Probability (math.PR), Computer Science - Data Structures and Algorithms, FOS: Mathematics, Data Structures and Algorithms (cs.DS), Semidefinite programming, Computational Complexity (cs.CC), random matrices, Mathematics - Probability, Theory of computation → Semidefinite programming
In [Saunderson, 2011; Saunderson et al., 2013], Saunderson, Parrilo, and Willsky asked the following elegant geometric question: what is the largest m = m(d) such that there is an ellipsoid in ℝ^d that passes through v_1, v_2, …, v_m with high probability when the v_is are chosen independently from the standard Gaussian distribution N(0,I_d)? The existence of such an ellipsoid is equivalent to the existence of a positive semidefinite matrix X such that v_i^⊤ X v_i = 1 for every 1 ⩽ i ⩽ m - a natural example of a random semidefinite program. SPW conjectured that m = (1-o(1)) d²/4 with high probability. Very recently, Potechin, Turner, Venkat and Wein [Potechin et al., 2022] and Kane and Diakonikolas [Kane and Diakonikolas, 2022] proved that m ≳ d²/log^O(1) d via a certain natural, explicit construction. In this work, we give a substantially tighter analysis of their construction to prove that m ≳ d²/C for an absolute constant C > 0. This resolves one direction of the SPW conjecture up to a constant. Our analysis proceeds via the method of Graphical Matrix Decomposition that has recently been used to analyze correlated random matrices arising in various areas [Barak et al., 2019; Bafna et al., 2022]. Our key new technical tool is a refined method to prove singular value upper bounds on certain correlated random matrices that are tight up to absolute dimension-independent constants. In contrast, all previous methods that analyze such matrices lose logarithmic factors in the dimension., LIPIcs, Vol. 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023), pages 78:1-78:20
- Published
- 2023
- Full Text
- View/download PDF
23. Improved Learning from Kolmogorov Complexity
- Author
-
Goldberg, Halley and Kabanets, Valentine
- Subjects
- average-case complexity, learning, meta-complexity, Kolmogorov complexity, Theory of computation → Computational complexity and cryptography
Carmosino, Impagliazzo, Kabanets, and Kolokolova (CCC, 2016) showed that the existence of natural properties in the sense of Razborov and Rudich (JCSS, 1997) implies PAC learning algorithms in the sense of Valiant (Comm. ACM, 1984), for boolean functions in P/poly, under the uniform distribution and with membership queries. It is still an open problem to get from natural properties learning algorithms that do not rely on membership queries but rather use randomly drawn labeled examples. Natural properties may be understood as an average-case version of MCSP, the problem of deciding the minimum size of a circuit computing a given truth-table. Problems related to MCSP include those concerning time-bounded Kolmogorov complexity. MKTP, for example, asks for the KT-complexity of a given string. KT-complexity is a relaxation of circuit size, as it does away with the requirement that a short description of a string be interpreted as a boolean circuit. In this work, under assumptions of MKTP and the related problem MK^tP being easy on average, we get learning algorithms for boolean functions in P/poly that - work over any distribution D samplable by a family of polynomial-size circuits (given explicitly in the case of MKTP), - only use randomly drawn labeled examples from D, and - are agnostic (do not require the target function to belong to the hypothesis class). Our results build upon the recent work of Hirahara and Nanashima (FOCS, 2021) who showed similar learning consequences but under a stronger assumption that NP is easy on average., LIPIcs, Vol. 264, 38th Computational Complexity Conference (CCC 2023), pages 12:1-12:29
- Published
- 2023
- Full Text
- View/download PDF
24. Average Case Complexity, Revisited
- Author
-
Goldreich, Oded, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, and Goldreich, Oded, editor
- Published
- 2011
- Full Text
- View/download PDF
25. Notes on Levin’s Theory of Average-Case Complexity
- Author
-
Goldreich, Oded, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, and Goldreich, Oded, editor
- Published
- 2011
- Full Text
- View/download PDF
26. On the Average-Case Complexity of Property Testing
- Author
-
Goldreich, Oded, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, and Goldreich, Oded, editor
- Published
- 2011
- Full Text
- View/download PDF
27. A PCP Characterization of AM
- Author
-
Drucker, Andrew, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Sudan, Madhu, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Vardi, Moshe Y., Series editor, Weikum, Gerhard, Series editor, Aceto, Luca, editor, Henzinger, Monika, editor, and Sgall, Jiří, editor
- Published
- 2011
- Full Text
- View/download PDF
28. Relativized Worlds without Worst-Case to Average-Case Reductions for NP
- Author
-
Watson, Thomas, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Serna, Maria, editor, Shaltiel, Ronen, editor, Jansen, Klaus, editor, and Rolim, José, editor
- Published
- 2010
- Full Text
- View/download PDF
29. On the average-case complexity of Shellsort.
- Author
-
Vitányi, Paul
- Subjects
ARITHMETIC mean ,KOLMOGOROV complexity ,INFORMATION theory ,APPLIED mathematics ,ARITHMETIC - Abstract
We prove a lower bound, expressed in terms of the increment sequence, on the average-case number of inversions of Shellsort. This lower bound is sharp in every case where it could be checked. A special case of this lower bound yields the general Jiang-Li-Vitányi lower bound. We obtain new results, for example, determining the average-case complexity precisely in the Yao-Janson-Knuth 3-pass case. [ABSTRACT FROM AUTHOR]
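To make the measured quantity concrete, the sketch below (Python, with an arbitrarily chosen increment sequence that is not taken from the paper) instruments Shellsort to count data movements; each movement resolves exactly one inverted pair, so the total is the number of inversions removed, the quantity whose average-case behavior the abstract bounds.

```python
import random

def shellsort_moves(a, increments):
    """h-insertion-sort the list for each increment in turn, counting data
    movements; each movement resolves exactly one inverted pair, so the
    total equals the number of inversions removed across all passes."""
    a = list(a)
    moves = 0
    for h in increments:
        for i in range(h, len(a)):
            x, j = a[i], i
            while j >= h and a[j - h] > x:
                a[j] = a[j - h]   # shift one element by h positions
                j -= h
                moves += 1
            a[j] = x
    return a, moves

random.seed(0)
data = random.sample(range(1000), 1000)
out, moves = shellsort_moves(data, [109, 41, 19, 5, 1])
assert out == sorted(data)
```

A 3-pass instance, as in the Yao-Janson-Knuth case mentioned above, corresponds to an increment list of length 3 ending in 1.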
- Published
- 2018
- Full Text
- View/download PDF
30. Average-Case Complexity of Partial Boolean Functions
- Author
-
Chashkin, Alexander, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Albrecht, Andreas, editor, and Steinhöfel, Kathleen, editor
- Published
- 2003
- Full Text
- View/download PDF
31. Symmetry of Information from Meta-Complexity
- Author
-
Shuichi Hirahara
- Abstract
Symmetry of information for time-bounded Kolmogorov complexity is a hypothetical inequality that relates time-bounded Kolmogorov complexity and its conditional analogue. In 1992, Longpré and Watanabe showed that symmetry of information holds if NP is easy in the worst case, which has been the state of the art over the last three decades. In this paper, we significantly improve this result by showing that symmetry of information holds under the weaker assumption that NP is easy on average. In fact, our proof techniques are applicable to any resource-bounded Kolmogorov complexity and enable proving symmetry of information from an efficient algorithm that computes resource-bounded Kolmogorov complexity. We demonstrate the significance of our proof techniques by presenting two applications. First, using that symmetry of information does not hold for Levin’s Kt-complexity, we prove that randomized Kt-complexity cannot be computed in time 2^o(n) on inputs of length n, which improves the previous quasi-polynomial lower bound of Oliveira (ICALP 2019). Our proof implements Kolmogorov’s insightful approach to the P versus NP problem in the case of randomized Kt-complexity. Second, we consider the question of excluding Heuristica, i.e., a world in which NP is easy on average but NP ≠ P, from Impagliazzo’s five worlds: Using symmetry of information, we prove that Heuristica is excluded if the problem of approximating time-bounded conditional Kolmogorov complexity K^t(x∣y) up to some additive error is NP-hard for t ≫ |y|. We complement this result by proving NP-hardness of approximating sublinear-time-bounded conditional Kolmogorov complexity up to a multiplicative factor of |x|^{1/(log log |x|)^O(1)} for t ≪ |y|. Our NP-hardness proof presents a new connection between sublinear-time-bounded conditional Kolmogorov complexity and a secret sharing scheme.
- Published
- 2022
- Full Text
- View/download PDF
32. Excluding PH Pessiland
- Author
-
Shuichi Hirahara and Rahul Santhanam
- Abstract
Heuristica and Pessiland are "worlds" of average-case complexity [Impagliazzo95] that are considered unlikely but that current techniques are unable to rule out. Recently, [Hirahara20] considered a PH (Polynomial Hierarchy) analogue of Heuristica, and showed that to rule it out, it would be sufficient to prove the NP-completeness of the problem GapMINKT^PH of estimating the PH-oracle time-bounded Kolmogorov complexity of a string. In this work, we analogously define "PH Pessiland" to be a world where PH is hard on average but PH-computable pseudo-random generators do not exist. We unconditionally rule out PH-Pessiland in both non-uniform and uniform settings, by showing that the distributional problem of computing PH-oracle time-bounded Kolmogorov complexity of a string over the uniform distribution is complete for an (error-prone) average-case analogue of PH. Moreover, we show the equivalence between error-prone average-case hardness of PH and the existence of PH-computable pseudorandom generators.
- Published
- 2022
- Full Text
- View/download PDF
33. Errorless Versus Error-Prone Average-Case Complexity
- Author
-
Shuichi Hirahara and Rahul Santhanam
- Abstract
We consider the question of whether errorless and error-prone notions of average-case hardness are equivalent, and make several contributions. First, we study this question in the context of hardness for NP, and connect it to the long-standing open question of whether there are instance checkers for NP. We show that there is an efficient non-uniform non-adaptive reduction from errorless to error-prone heuristics for NP if and only if there is an efficient non-uniform average-case non-adaptive instance-checker for NP. We also suggest an approach to proving equivalence of the two notions of average-case hardness for PH. Second, we show unconditionally that error-prone average-case hardness is equivalent to errorless average-case hardness for P against NC¹ and for UP ∩ coUP against P. Third, we apply our results about errorless and error-prone average-case hardness to get new equivalences between hitting set generators and pseudo-random generators.
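For orientation, the two notions compared above differ in how a heuristic algorithm A for a distributional problem (L, D) may fail; the following are the standard textbook definitions (not taken from this entry), stated up to the usual choice of failure probability:

```latex
% Errorless heuristic: may refuse with output \bot, but never answers incorrectly.
A(x) \in \{L(x), \bot\} \text{ for all } x,
\qquad
\Pr_{x \sim D_n}\bigl[A(x) = \bot\bigr] \le \frac{1}{\mathrm{poly}(n)}.
% Error-prone heuristic: always answers, but may err with small probability.
\Pr_{x \sim D_n}\bigl[A(x) \ne L(x)\bigr] \le \frac{1}{\mathrm{poly}(n)}.
```

An errorless heuristic is trivially error-prone; the question studied above is the reverse direction.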
- Published
- 2022
- Full Text
- View/download PDF
34. Guest Column
- Author
-
Muthuramakrishnan Venkitasubramaniam and Rafael Pass
- Subjects
Average-case complexity ,Multidisciplinary ,Theoretical computer science ,business.industry ,Computer science ,Existential quantification ,Probabilistic logic ,020206 networking & telecommunications ,Cryptography ,0102 computer and information sciences ,02 engineering and technology ,Solver ,01 natural sciences ,010201 computation theory & mathematics ,Bounded function ,0202 electrical engineering, electronic engineering, information engineering ,business ,Time complexity ,TFNP - Abstract
We review a study of average-case complexity through the lens of interactive puzzles: interactive games between a computationally bounded Challenger and a computationally bounded Solver/Attacker. Most notably, we use this treatment to review a recent result showing that if NP is hard on average, then there exists a samplable distribution over only true statements of an NP language, for which no probabilistic polynomial-time algorithm can find witnesses. We also discuss connections to the problem of whether average-case hardness in NP implies average-case hardness in TFNP, or the existence of cryptographic one-way functions.
- Published
- 2021
- Full Text
- View/download PDF
35. Probabilistic Kolmogorov complexity with applications to average-case complexity
- Author
-
Goldberg, Halley, Kabanets, Valentine, Lu, Zhenjian, and Oliveira, Igor C.
- Subjects
average-case complexity ,learning ,meta-complexity ,Kolmogorov complexity ,Q1 ,Theory of computation → Computational complexity and cryptography ,worst-case to average-case reductions ,QA76 - Abstract
Understanding the relationship between the worst-case and average-case complexities of NP and of other subclasses of PH is a long-standing problem in complexity theory. Over the last few years, much progress has been achieved in this front through the investigation of meta-complexity: the complexity of problems that refer to the complexity of the input string x (e.g., given a string x, estimate its time-bounded Kolmogorov complexity). In particular, [Shuichi Hirahara, 2021] employed techniques from meta-complexity to show that if DistNP ⊆ AvgP then UP ⊆ DTIME[2^{O(n/log n)}]. While this and related results [Shuichi Hirahara and Mikito Nanashima, 2021; Lijie Chen et al., 2022] offer exciting progress after a long gap, they do not survive in the setting of randomized computations: roughly speaking, "randomness" is the opposite of "structure", and upper bounding the amount of structure (time-bounded Kolmogorov complexity) of different objects is crucial in recent applications of meta-complexity. This limitation is significant, since randomized computations are ubiquitous in algorithm design and give rise to a more robust theory of average-case complexity [Russell Impagliazzo and Leonid A. Levin, 1990]. In this work, we develop a probabilistic theory of meta-complexity, by incorporating randomness into the notion of complexity of a string x. This is achieved through a new probabilistic variant of time-bounded Kolmogorov complexity that we call pK^t complexity. Informally, pK^t(x) measures the complexity of x when shared randomness is available to all parties involved in a computation. By porting key results from meta-complexity to the probabilistic domain of pK^t complexity and its variants, we are able to establish new connections between worst-case and average-case complexity in the important setting of probabilistic computations: - If DistNP ⊆ AvgBPP, then UP ⊆ RTIME[2^O(n/log n)]. - If DistΣ^P_2 ⊆ AvgBPP, then AM ⊆ BPTIME[2^O(n/log n)]. 
- In the fine-grained setting [Lijie Chen et al., 2022], we get UTIME[2^O(√{nlog n})] ⊆ RTIME[2^O(√{nlog n})] and AMTIME[2^O(√{nlog n})] ⊆ BPTIME[2^O(√{nlog n})] from stronger average-case assumptions. - If DistPH ⊆ AvgBPP, then PH ⊆ BPTIME[2^O(n/log n)]. Specifically, for any 𝓁 ≥ 0, if DistΣ_{𝓁+2}^P ⊆ AvgBPP then Σ_𝓁^{P} ⊆ BPTIME[2^O(n/log n)]. - Strengthening a result from [Shuichi Hirahara and Mikito Nanashima, 2021], we show that if DistNP ⊆ AvgBPP then polynomial size Boolean circuits can be agnostically PAC learned under any unknown 𝖯/poly-samplable distribution in polynomial time. In some cases, our framework allows us to significantly simplify existing proofs, or to extend results to the more challenging probabilistic setting with little to no extra effort., LIPIcs, Vol. 234, 37th Computational Complexity Conference (CCC 2022), pages 16:1-16:60
- Published
- 2022
36. On the computation of rational points of a hypersurface over a finite field.
- Author
-
Matera, Guillermo, Pérez, Mariana, and Privitelli, Melina
- Subjects
- *
RATIONAL points (Geometry) , *HYPERSURFACES , *FINITE fields , *ALGORITHMS , *DISTRIBUTION (Probability theory) - Abstract
We design and analyze an algorithm for computing rational points of hypersurfaces defined over a finite field based on searches on “vertical strips”, namely searches on parallel lines in a given direction. Our results show that, on average, less than two searches suffice to obtain a rational point. We also analyze the probability distribution of outputs, using the notion of Shannon entropy, and prove that the algorithm is somewhat close to any “ideal” equidistributed algorithm. [ABSTRACT FROM AUTHOR]
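The vertical-strip idea is easy to sketch for the toy special case of a plane curve over F_q (the paper treats general hypersurfaces and analyzes the expected number of searches; the curve and parameters below are illustrative assumptions, not from the paper):

```python
import random

def search_strip(f, q, a):
    """Search the vertical line x = a for a zero of f modulo q."""
    for y in range(q):
        if f(a, y) % q == 0:
            return (a, y)
    return None

def rational_point(f, q, seed=0):
    """Try vertical strips in random order until one contains a point;
    the abstract shows that on average fewer than two searches suffice."""
    rng = random.Random(seed)
    for searches, a in enumerate(rng.sample(range(q), q), start=1):
        pt = search_strip(f, q, a)
        if pt is not None:
            return pt, searches
    return None, q

q = 101  # a small prime, chosen only for the demo
f = lambda x, y: y * y - (x ** 3 + x + 3)  # a toy plane curve, hypothetical
pt, searches = rational_point(f, q)
assert pt is not None and f(*pt) % q == 0
```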
- Published
- 2017
- Full Text
- View/download PDF
37. Bounds for the average-case complexity of monotone Boolean functions.
- Author
-
Chashkin, Aleksandr V.
- Subjects
- *
MONOTONE operators , *BOOLEAN functions , *FUNCTION algebras , *MATHEMATICAL variables , *OPERATOR theory - Abstract
We consider average-case complexity of computing monotone Boolean functions by straight-line programs with a conditional stop over the basis of all Boolean functions of at most two variables. For the set of all n-ary monotone Boolean functions new Shannon-type upper and lower bounds for the average-case complexity as n → ∞ are established. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
38. The Value of Help Bits in Randomized and Average-Case Complexity.
- Author
-
Beigi, Salman, Etesami, Omid, and Gohari, Amin
- Abstract
"Help bits" are some limited trusted information about an instance or instances of a computational problem that may reduce the computational complexity of solving that instance or instances. In this paper, we study the value of help bits in the settings of randomized and average-case complexity. If k instances of a decision problem can be efficiently solved using $${\ell < k}$$ help bits, then without access to help bits one can efficiently compute a k-bit vector that is not equal to the k-bit vector of solutions to the k instances. A decision problem with this property is called k-membership comparable. Amir, Beigel, and Gasarch (1990) show that for constant k, all k-membership comparable languages are in P/poly. We extend this result to the setting of randomized computation: We show that for k at most logarithmic, the decision problem is k-membership comparable if using $${\ell}$$ help bits, k instances of the problem can be efficiently solved with probability greater than $${2^{\ell-k}}$$. The same conclusion holds if using less than $${k(1 - h(\alpha))}$$ help bits (where $${h(\cdot)}$$ is the binary entropy function), we can efficiently solve a $${1-\alpha}$$ fraction of the instances correctly with non-vanishing probability. We note that when k is constant, k-membership comparability implies being in P/poly. Next we consider the setting of average-case complexity: Assume that we can solve k instances of a decision problem using some help bits whose entropy is less than k when the k instances are drawn independently from a particular distribution. Then we can efficiently solve an instance drawn from that distribution with probability better than 1/2. Finally, we show that in the case where k is super-logarithmic, assuming k-membership comparability of a decision problem, one cannot prove that the problem is in P/poly by a "relativizing" proof technique. All previously known proofs in this area have been relativizing. [ABSTRACT FROM AUTHOR]
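The first reduction in the abstract is simple enough to sketch: if k instances can be solved with ℓ < k help bits, then enumerating all help strings yields at most 2^ℓ candidate answer vectors, and any k-bit vector outside that set is provably not the vector of solutions. The solver interface below is a hypothetical stand-in for illustration.

```python
from itertools import product

def non_solution_vector(solver, instances, ell):
    """Return a k-bit vector different from the true solution vector.
    The true vector appears among the outputs of `solver` over all 2^ell
    help strings, and 2^ell < 2^k, so some k-bit vector is missing."""
    k = len(instances)
    candidates = {tuple(solver(instances, help_bits))
                  for help_bits in product((0, 1), repeat=ell)}
    for v in product((0, 1), repeat=k):
        if v not in candidates:
            return v

# toy solver (hypothetical): ignores the instances, echoes its help bits padded
toy = lambda instances, hb: list(hb) + [0] * (len(instances) - len(hb))
v = non_solution_vector(toy, ["i1", "i2", "i3"], 2)
assert v is not None and len(v) == 3
```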
- Published
- 2017
- Full Text
- View/download PDF
39. On the Complexity of Routing Permutations on Trees by Arc-Disjoint Paths (Extended Abstract)
- Author
-
Barth, D., Corteel, S., Denise, A., Gardy, D., Valencia-Pabon, M., Gonnet, Gaston H., editor, and Viola, Alfredo, editor
- Published
- 2000
- Full Text
- View/download PDF
40. The complexity of sparse Hensel lifting and sparse polynomial factorization
- Author
-
Baris Tuncer and Michael Monagan
- Subjects
Average-case complexity ,Discrete mathematics ,Multivariate statistics ,Algebra and Number Theory ,Diophantine equation ,010102 general mathematics ,MathematicsofComputing_NUMERICALANALYSIS ,Univariate ,010103 numerical & computational mathematics ,Symbolic computation ,01 natural sciences ,Image (mathematics) ,Computational Mathematics ,Factorization ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,Computer Science::Symbolic Computation ,0101 mathematics ,Mathematics ,Interpolation - Abstract
The standard approach to factor a multivariate polynomial in Z[x_1, x_2, …, x_n] is to factor a univariate image in Z[x_1] then recover the multivariate factors from their univariate images using a process known as multivariate Hensel lifting. Wang's multivariate Hensel lifting recovers the variables one at a time. It is currently implemented in many computer algebra systems, including Maple, Magma and Singular. When the factors are sparse, Wang's approach can be exponential in the number of variables n. To address this, sparse Hensel lifting was introduced by Zippel and then improved by Kaltofen. Recently, Monagan and Tuncer introduced a new approach which uses sparse polynomial interpolation to solve the multivariate polynomial diophantine equations that arise inside Hensel lifting in random polynomial time. This approach is shown to be practical and faster than Zippel's and Kaltofen's algorithms and faster than Wang's algorithm for non-zero evaluation points. In this work we first present a complete description of the sparse interpolation used by Monagan and Tuncer and show that it runs in random polynomial time. Next we study what happens to the sparsity of multivariate polynomials when the variables are successively evaluated at numbers. We determine the expected number of remaining terms. We use this result to revisit and correct the complexity analysis of Zippel's original sparse interpolation. Next we present an average case complexity analysis of our approach. We have implemented our algorithm in Maple with some sub-algorithms implemented in C. We present some experimental data comparing our approach with Wang's method for both sparse and dense factors. The data shows that our method is always competitive with Wang's method and faster when Wang's method is exponential in n.
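One ingredient discussed above, the loss of sparsity when a variable is evaluated, is easy to observe experimentally. The sketch below (toy parameters and a dict encoding of polynomials, not the authors' code) evaluates the last variable of a random sparse polynomial modulo a prime and counts the surviving terms:

```python
import random
from collections import defaultdict

def eval_last_var(poly, value, p):
    """poly maps exponent tuples to coefficients mod p; substitute the last
    variable by `value` and collect like terms, so the sparsity of the
    image polynomial can be read off directly."""
    out = defaultdict(int)
    for exps, c in poly.items():
        out[exps[:-1]] = (out[exps[:-1]] + c * pow(value, exps[-1], p)) % p
    return {e: c for e, c in out.items() if c}

random.seed(1)
p, n_vars, n_terms = 101, 4, 30
poly = {tuple(random.randrange(4) for _ in range(n_vars)): random.randrange(1, p)
        for _ in range(n_terms)}
image = eval_last_var(poly, random.randrange(1, p), p)
assert len(image) <= len(poly)  # terms can only merge or vanish
```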
- Published
- 2020
- Full Text
- View/download PDF
41. Average-Case Hardness of NP and PH from Worst-Case Fine-Grained Assumptions
- Author
-
Chen, Lijie, Hirahara, Shuichi, and Vafa, Neekon
- Subjects
Average-case complexity ,Theory of computation → Complexity classes ,worst-case to average-case reduction
What is a minimal worst-case complexity assumption that implies non-trivial average-case hardness of NP or PH? This question is well motivated by the theory of fine-grained average-case complexity and fine-grained cryptography. In this paper, we show that several standard worst-case complexity assumptions are sufficient to imply non-trivial average-case hardness of NP or PH: - NTIME[n] cannot be solved in quasi-linear time on average if UP ⊈ DTIME[2^{Õ(√n)}]. - Σ₂TIME[n] cannot be solved in quasi-linear time on average if Σ_kSAT cannot be solved in time 2^{Õ(√n)} for some constant k. Previously, it was not known if even average-case hardness of Σ₃SAT implies the average-case hardness of Σ₂TIME[n]. - Under the Exponential-Time Hypothesis (ETH), there is no average-case n^{1+ε}-time algorithm for NTIME[n] whose running time can be estimated in time n^{1+ε} for some constant ε > 0. Our results are given by generalizing the non-black-box worst-case-to-average-case connections presented by Hirahara (STOC 2021) to the settings of fine-grained complexity. To do so, we construct quite efficient complexity-theoretic pseudorandom generators under the assumption that nondeterministic linear time is easy on average, which may be of independent interest., LIPIcs, Vol. 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022), pages 45:1-45:16
- Published
- 2022
- Full Text
- View/download PDF
42. Errorless Versus Error-Prone Average-Case Complexity
- Author
-
Hirahara, S, Santhanam, R, and Braverman, M
- Subjects
Theory of computation → Interactive proof systems ,average-case complexity ,instance checker ,Theory of computation → Pseudorandomness and derandomization ,pseudorandomness ,Theory of computation → Complexity classes
We consider the question of whether errorless and error-prone notions of average-case hardness are equivalent, and make several contributions. First, we study this question in the context of hardness for NP, and connect it to the long-standing open question of whether there are instance checkers for NP. We show that there is an efficient non-uniform non-adaptive reduction from errorless to error-prone heuristics for NP if and only if there is an efficient non-uniform average-case non-adaptive instance-checker for NP. We also suggest an approach to proving equivalence of the two notions of average-case hardness for PH. Second, we show unconditionally that error-prone average-case hardness is equivalent to errorless average-case hardness for P against NC¹ and for UP ∩ coUP against P. Third, we apply our results about errorless and error-prone average-case hardness to get new equivalences between hitting set generators and pseudo-random generators., LIPIcs, Vol. 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022), pages 84:1-84:23
- Published
- 2022
- Full Text
- View/download PDF
43. Excluding PH Pessiland
- Author
-
Hirahara, S, Santhanam, R, and Braverman, M
- Subjects
average-case complexity ,meta-complexity ,Theory of computation → Problems, reductions and completeness ,pseudorandomness ,Theory of computation → Complexity classes
Heuristica and Pessiland are "worlds" of average-case complexity [Impagliazzo95] that are considered unlikely but that current techniques are unable to rule out. Recently, [Hirahara20] considered a PH (Polynomial Hierarchy) analogue of Heuristica, and showed that to rule it out, it would be sufficient to prove the NP-completeness of the problem GapMINKT^PH of estimating the PH-oracle time-bounded Kolmogorov complexity of a string. In this work, we analogously define "PH Pessiland" to be a world where PH is hard on average but PH-computable pseudo-random generators do not exist. We unconditionally rule out PH-Pessiland in both non-uniform and uniform settings, by showing that the distributional problem of computing PH-oracle time-bounded Kolmogorov complexity of a string over the uniform distribution is complete for an (error-prone) average-case analogue of PH. Moreover, we show the equivalence between error-prone average-case hardness of PH and the existence of PH-computable pseudorandom generators., LIPIcs, Vol. 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022), pages 85:1-85:25
- Published
- 2022
- Full Text
- View/download PDF
44. Symmetry of Information from Meta-Complexity
- Author
-
Hirahara, Shuichi
- Subjects
average-case complexity ,Theory of computation → Pseudorandomness and derandomization ,hardness of approximation ,unconditional lower bound ,pseudorandomness ,resource-bounded Kolmogorov complexity ,Theory of computation → Complexity classes - Abstract
Symmetry of information for time-bounded Kolmogorov complexity is a hypothetical inequality that relates time-bounded Kolmogorov complexity and its conditional analogue. In 1992, Longpré and Watanabe showed that symmetry of information holds if NP is easy in the worst case, which has been the state of the art over the last three decades. In this paper, we significantly improve this result by showing that symmetry of information holds under the weaker assumption that NP is easy on average. In fact, our proof techniques are applicable to any resource-bounded Kolmogorov complexity and enable proving symmetry of information from an efficient algorithm that computes resource-bounded Kolmogorov complexity. We demonstrate the significance of our proof techniques by presenting two applications. First, using that symmetry of information does not hold for Levin’s Kt-complexity, we prove that randomized Kt-complexity cannot be computed in time 2^o(n) on inputs of length n, which improves the previous quasi-polynomial lower bound of Oliveira (ICALP 2019). Our proof implements Kolmogorov’s insightful approach to the P versus NP problem in the case of randomized Kt-complexity. Second, we consider the question of excluding Heuristica, i.e., a world in which NP is easy on average but NP ≠ P, from Impagliazzo’s five worlds: Using symmetry of information, we prove that Heuristica is excluded if the problem of approximating time-bounded conditional Kolmogorov complexity K^t(x∣y) up to some additive error is NP-hard for t ≫ |y|. We complement this result by proving NP-hardness of approximating sublinear-time-bounded conditional Kolmogorov complexity up to a multiplicative factor of |x|^{1/(log log |x|)^O(1)} for t ≪ |y|. Our NP-hardness proof presents a new connection between sublinear-time-bounded conditional Kolmogorov complexity and a secret sharing scheme., LIPIcs, Vol. 234, 37th Computational Complexity Conference (CCC 2022), pages 26:1-26:41
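For reference, the resource-unbounded symmetry of information of Kolmogorov and Levin, whose time-bounded analogue is the hypothesis discussed above, states that up to logarithmic additive terms:

```latex
K(x, y) \;=\; K(x) \;+\; K(y \mid x) \;\pm\; O\!\bigl(\log K(x, y)\bigr).
```

The time-bounded question asks whether the same identity holds with K replaced by its time-bounded variant K^t at a modest blow-up of the time bound t.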
- Published
- 2022
- Full Text
- View/download PDF
45. Finding Errorless Pessiland in Error-Prone Heuristica
- Author
-
Hirahara, Shuichi and Nanashima, Mikito
- Subjects
average-case complexity ,learning ,meta-complexity ,oracle separation ,relativization barrier ,auxiliary-input cryptography ,Theory of computation → Computational complexity and cryptography - Abstract
Average-case complexity has two standard formulations, i.e., errorless complexity and error-prone complexity. In average-case complexity, a critical topic of research is to show the equivalence between these formulations, especially on the average-case complexity of NP. In this study, we present a relativization barrier for such an equivalence. Specifically, we construct an oracle relative to which NP is easy on average in the error-prone setting (i.e., DistNP ⊆ HeurP) but hard on average in the errorless setting even by 2^o(n/log n)-size circuits (i.e., DistNP ⊈ AvgSIZE[2^o(n/log n)]), which provides an answer to the open question posed by Impagliazzo (CCC 2011). Additionally, we show the following in the same relativized world: - Lower bound of meta-complexity: GapMINKT^𝒪 ∉ prSIZE^𝒪[2^o(n/log n)] and GapMCSP^𝒪 ∉ prSIZE^𝒪[2^(n^ε)] for some ε > 0. - Worst-case hardness of learning on uniform distributions: P/poly is not weakly PAC learnable with membership queries on the uniform distribution by nonuniform 2ⁿ/n^ω(1)-time algorithms. - Average-case hardness of distribution-free learning: P/poly is not weakly PAC learnable on average by nonuniform 2^o(n/log n)-time algorithms. - Weak cryptographic primitives: There exist a hitting set generator, an auxiliary-input one-way function, an auxiliary-input pseudorandom generator, and an auxiliary-input pseudorandom function against SIZE^𝒪[2^o(n/log n)]. This provides considerable insights into Pessiland (i.e., the world in which no one-way function exists, and NP is hard on average), such as the relativized separation of the error-prone average-case hardness of NP and auxiliary-input cryptography. At the core of our oracle construction is a new notion of random restriction with masks., LIPIcs, Vol. 234, 37th Computational Complexity Conference (CCC 2022), pages 25:1-25:28
- Published
- 2022
- Full Text
- View/download PDF
46. Bypassing the XOR Trick: Stronger Certificates for Hypergraph Clique Number
- Author
-
Guruswami, Venkatesan, Kothari, Pravesh K., and Manohar, Peter
- Subjects
FOS: Computer and information sciences ,Random matrix theory ,Planted clique ,Computer Science::Discrete Mathematics ,Average-case complexity ,Theory of computation → Design and analysis of algorithms ,Computer Science - Data Structures and Algorithms ,Data Structures and Algorithms (cs.DS) ,Spectral refutation - Abstract
Let H(k,n,p) be the distribution on k-uniform hypergraphs where every subset of [n] of size k is included as a hyperedge with probability p independently. In this work, we design and analyze a simple spectral algorithm that certifies a bound on the size of the largest clique, ω(H), in hypergraphs H ∼ H(k,n,p). For example, for any constant p, with high probability over the choice of the hypergraph, our spectral algorithm certifies a bound of Õ(√n) on the clique number in polynomial time. This matches, up to polylog(n) factors, the best known certificate for the clique number in random graphs, which is the special case of k = 2. Prior to our work, the best known refutation algorithms [Amin Coja-Oghlan et al., 2004; Sarah R. Allen et al., 2015] rely on a reduction to the problem of refuting random k-XOR via Feige’s XOR trick [Uriel Feige, 2002], and yield a polynomially worse bound of Õ(n^{3/4}) on the clique number when p = O(1). Our algorithm bypasses the XOR trick and relies instead on a natural generalization of the Lovász theta semidefinite programming relaxation for cliques in hypergraphs., LIPIcs, Vol. 245, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2022), pages 42:1-42:7
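A tiny worked example of the k = 2 soundness direction mentioned above: for the ±1 matrix M = 2A - (J - I), the indicator vector of any clique S has Rayleigh quotient |S| - 1, so λ_max(M) + 1 is a certified upper bound on ω(G) (and for G(n, 1/2), λ_max(M) = O(√n) with high probability). The sketch below (toy parameters, pure Python, not the paper's algorithm) checks the Rayleigh-quotient identity on a planted clique:

```python
import random

def clique_rayleigh(adj, S):
    """Rayleigh quotient of the clique indicator vector for M = 2A - (J - I);
    it equals |S| - 1, which is why omega(G) <= lambda_max(M) + 1."""
    n = len(adj)
    x = [1 if i in S else 0 for i in range(n)]
    num = sum(x[i] * (2 * adj[i][j] - (0 if i == j else 1)) * x[j]
              for i in range(n) for j in range(n))
    return num / sum(x)

rng = random.Random(0)
n, S = 30, set(range(5))          # plant a 5-clique in a random graph
adj = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        adj[i][j] = adj[j][i] = 1 if (i in S and j in S) else rng.randint(0, 1)
assert clique_rayleigh(adj, S) == 4.0
```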
- Published
- 2022
- Full Text
- View/download PDF
47. Algorithms and Lower Bounds for Comparator Circuits from Shrinkage
- Author
-
Cavalar, Bruno P. and Lu, Zhenjian
- Subjects
FOS: Computer and information sciences ,average-case complexity ,Computer Science - Computational Complexity ,Theory of computation → Circuit complexity ,General Computer Science ,TK ,Applied Mathematics ,pseudorandom generators ,Computational Complexity (cs.CC) ,satisfiability algorithms ,comparator circuits ,QA76 ,Computer Science Applications
Comparator circuits are a natural circuit model for studying bounded fan-out computation whose power sits between nondeterministic branching programs and general circuits. Despite having been studied for nearly three decades, the first superlinear lower bound against comparator circuits was proved only recently by Gál and Robere (ITCS 2020), who established an Ω((n/log n)^{1.5}) lower bound on the size of comparator circuits computing an explicit function of n bits. In this paper, we initiate the study of average-case complexity and circuit analysis algorithms for comparator circuits. Departing from previous approaches, we exploit the technique of shrinkage under random restrictions to obtain a variety of new results for this model. Among them, we show - Average-case Lower Bounds. For every k = k(n) with k ≥ log n, there exists a polynomial-time computable function f_k on n bits such that, for every comparator circuit C with at most n^{1.5}/O(k⋅√{log n}) gates, we have Pr_{x ∼ {0,1}ⁿ} [C(x) = f_k(x)] ≤ 1/2 + 1/{2^{Ω(k)}}. This average-case lower bound matches the worst-case lower bound of Gál and Robere by letting k = O(log n). - #SAT Algorithms. There is an algorithm that counts the number of satisfying assignments of a given comparator circuit with at most n^{1.5}/O(k⋅√{log n}) gates, in time 2^{n-k} ⋅ poly(n), for any k ≤ n/4. The running time is non-trivial (i.e., 2ⁿ/n^{ω(1)}) when k = ω(log n). - Pseudorandom Generators and MCSP Lower Bounds. There is a pseudorandom generator of seed length s^{2/3+o(1)} that fools comparator circuits with s gates. Also, using this PRG, we obtain an n^{1.5-o(1)} lower bound for MCSP against comparator circuits., LIPIcs, Vol. 215, 13th Innovations in Theoretical Computer Science Conference (ITCS 2022), pages 34:1-34:21
- Published
- 2021
48. On the computation of rational solutions of underdetermined systems over a finite field.
- Author
-
Giménez, Nardo, Matera, Guillermo, Pérez, Mariana, and Privitelli, Melina
- Subjects
- *
VECTOR spaces - Abstract
We design and analyze an algorithm for computing solutions with coefficients in a finite field F_q of underdetermined systems defined over F_q. The algorithm is based on reductions to zero-dimensional searches. The searches are performed on "vertical strips", namely parallel linear spaces of suitable dimension in a given direction. Our results show that, on average, less than three searches suffice to obtain a solution of the original system, with a probability of success which grows exponentially with the number of searches. The analysis of our algorithm relies on results on the probability that the solution set (over the algebraic closure of F_q) of a random system with coefficients in F_q satisfies certain geometric and algebraic properties, which is of independent interest. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
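The "vertical strip" search described in the entry above can be sketched concretely. The toy equation, the field size p = 7, and the helper names below are illustrative assumptions, not taken from the paper; the sketch only shows the shape of the method: fix a random strip (here, a line in the x-direction), then run a zero-dimensional search along it.

```python
# Toy sketch of strip search over a small prime field: to find an
# F_p-rational solution of an underdetermined system, restrict it to a
# random "vertical strip" and search that strip exhaustively. The system
# below is made up for illustration.

import random

p = 7  # work over the prime field F_7

def system(x, y, z):
    # one equation in three unknowns over F_7 (underdetermined)
    return (x * x + 2 * y + 3 * z + 1) % p

def strip_search(max_strips=50, seed=0):
    rng = random.Random(seed)
    for strips in range(1, max_strips + 1):
        # pick a random strip: fix y and z, leave x free
        y, z = rng.randrange(p), rng.randrange(p)
        for x in range(p):  # zero-dimensional search along the strip
            if system(x, y, z) == 0:
                return strips, (x, y, z)
    return None

strips, sol = strip_search()
print(strips, sol)
```

In this toy setting a strip succeeds whenever the restricted univariate equation has a root in F_7, which happens for a constant fraction of strips; the paper's result is the analogous statement, made precise, for random systems over F_q.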
49. Quantum Pattern Matching Fast on Average.
- Author
-
Montanaro, Ashley
- Subjects
- *
PATTERN matching , *COMPUTER vision , *PATTERN recognition systems , *MATCHING theory , *PATTERNS (Mathematics) - Abstract
The d-dimensional pattern matching problem is to find an occurrence of a pattern of length $m \times \dots \times m$ within a text of length $n \times \dots \times n$ , with $n \ge m$. This task models various problems in text and image processing, among other application areas. This work describes a quantum algorithm which solves the pattern matching problem for random patterns and texts in time $\widetilde{O}((n/m)^{d/2} 2^{O(d^{3/2}\sqrt{\log m})})$. For large m this is super-polynomially faster than the best possible classical algorithm, which requires time $\widetilde{\Omega }( n^{d/2} + (n/m)^d)$. The algorithm is based on the use of a quantum subroutine for finding hidden shifts in d dimensions, which is a variant of algorithms proposed by Kuperberg. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
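For contrast with the quantum bound in the entry above, the classical average-case phenomenon it builds on is easy to observe in one dimension: on random text, even a naive matcher rejects almost every alignment after O(1) expected character comparisons, so total work is near-linear rather than O(nm). The script below is an illustrative sketch of that baseline, not the paper's algorithm.

```python
# Average-case behavior of naive 1-D pattern matching on random text:
# each alignment is usually rejected after the first mismatch, so the
# total number of character comparisons stays close to 2n over a binary
# alphabet, far below the worst-case n*m.

import random

def count_comparisons(text, pattern):
    """Naive matcher; returns (match positions, total char comparisons)."""
    n, m = len(text), len(pattern)
    positions, comparisons = [], 0
    for i in range(n - m + 1):
        for j in range(m):
            comparisons += 1
            if text[i + j] != pattern[j]:
                break
        else:
            positions.append(i)
    return positions, comparisons

rng = random.Random(1)
text = "".join(rng.choice("ab") for _ in range(10_000))
_, comps = count_comparisons(text, "abababab")
print(comps)  # close to 2 * len(text), since E[comparisons per alignment] ~ 2
```

The quantum algorithm in the entry exploits the same randomness of the input, but goes further: it beats even the best possible classical average-case running time for large m.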
50. A Note on Average-Case Sorting.
- Author
-
Moran, Shay and Yehudayoff, Amir
- Abstract
This note studies the average-case comparison-complexity of sorting n elements when there is a known distribution on inputs and the goal is to minimize the expected number of comparisons. We generalize Fredman's algorithm, which is a variant of insertion sort, and provide a basically tight upper bound: If μ is a distribution on permutations on n elements, then one may sort inputs from μ with an expected number of comparisons that is at most H(μ) + 2n, where H is the entropy function. The algorithm uses fewer comparisons for more probable inputs: For every permutation π, the algorithm sorts π by using at most $\log _{2}(\frac {1}{\Pr _{\mu }(\pi )})+2n$ comparisons. A lower bound of H(μ) on the expected number of comparisons always holds, and a linear dependence on n is also required. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
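The per-permutation guarantee in the entry above, log2(1/Pr_μ(π)) + 2n, can be tabulated for a toy distribution. The skewed distribution below is made up for illustration, and the script only evaluates the stated entropy bound; it does not implement the sorting algorithm itself.

```python
# Numerical illustration of the entropy bound H(mu) + 2n: frequent inputs
# get a cheaper per-permutation comparison guarantee than rare ones.
# The distribution is a made-up example over permutations of 4 elements.

import math
from itertools import permutations

n = 4
perms = list(permutations(range(n)))

# A skewed distribution: the identity is 10x more likely than any other
# permutation.
weights = [10.0 if p == tuple(range(n)) else 1.0 for p in perms]
total = sum(weights)
mu = [w / total for w in weights]

entropy = -sum(q * math.log2(q) for q in mu)  # H(mu)
expected_bound = entropy + 2 * n              # average-case bound H(mu) + 2n

# Per-permutation bound log2(1/mu(pi)) + 2n:
cheap = math.log2(total / 10.0) + 2 * n  # for the likely identity input
rare = math.log2(total / 1.0) + 2 * n    # for any of the rare inputs
print(f"H(mu) = {entropy:.3f}, average bound = {expected_bound:.3f}")
print(f"identity: <= {cheap:.3f} comparisons; rare input: <= {rare:.3f}")
```

For the uniform distribution the bound specializes to log2(n!) + 2n, recovering the usual information-theoretic sorting bound up to the additive 2n term.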