72 results for "Ilias Zadik"
Search Results
2. Shapes and recession cones in mixed-integer convex representability.
3. Transfer Learning Beyond Bounded Density Ratios.
4. Sharp thresholds in inference of planted subgraphs.
5. Almost-Linear Planted Cliques Elude the Metropolis Process.
6. It Was 'All' for 'Nothing': Sharp Phase Transitions for Noiseless Discrete Channels.
7. Statistical and Computational Phase Transitions in Group Testing.
8. Sharp Thresholds Imply Circuit Lower Bounds: from random 2-SAT to Planted Clique.
9. Mixed-Integer Convex Representability.
10. Self-Regularity of Non-Negative Output Weights for Overparameterized Two-Layer Neural Networks.
11. Group testing and local search: is there a computational-statistical gap?
12. On the Cryptographic Hardness of Learning Single Periodic Neurons.
13. Self-Regularity of Output Weights for Overparameterized Two-Layer Neural Networks.
14. The Franz-Parisi Criterion and Computational Trade-offs in High Dimensional Statistics.
15. Archimedes Meets Privacy: On Privately Estimating Quantiles in High Dimensions Under Minimal Assumptions.
16. Free Energy Wells and Overlap Gap Property in Sparse PCA.
17. Inference in High-Dimensional Linear Regression via Lattice Basis Reduction and Integer Relation Detection.
18. Lattice-Based Methods Surpass Sum-of-Squares in Clustering.
19. Almost-Linear Planted Cliques Elude the Metropolis Process.
20. On the Second Kahn-Kalai Conjecture.
21. A second moment proof of the spread lemma.
22. The All-or-Nothing Phenomenon in Sparse Linear Regression.
23. A Simple Bound on the BER of the MAP Decoder for Massive MIMO Systems.
24. Improved bounds on Gaussian MAC and sparse regression via Gaussian inequalities.
25. All-or-Nothing Phenomena: From Single-Letter to High Dimensions.
26. Optimal Private Median Estimation under Minimal Distributional Assumptions.
27. The All-or-Nothing Phenomenon in Sparse Tensor PCA.
28. Revealing Network Structure, Confidentially: Improved Rates for Node-Private Graphon Estimation.
29. High Dimensional Linear Regression using Lattice Basis Reduction.
30. Orthogonal Machine Learning: Power and Limitations.
31. It was 'all' for 'nothing': sharp phase transitions for noiseless discrete channels.
32. Self-Regularity of Non-Negative Output Weights for Overparameterized Two-Layer Neural Networks.
33. Lattice-Based Methods Surpass Sum-of-Squares in Clustering.
34. High Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition.
35. Mixed-Integer Convex Representability.
36. Neural Networks and Polynomial Regression. Demystifying the Overparametrization Phenomena.
37. Group testing and local search: is there a computational-statistical gap?
38. Stationary Points of Shallow Neural Networks with Quadratic Activation Function.
39. The Landscape of the Planted Clique Problem: Dense subgraphs and the Overlap Gap Property.
40. Shapes and recession cones in mixed-integer convex representability.
41. Private Algorithms Can Always Be Extended.
42. Orthogonal Machine Learning: Power and Limitations.
43. Sparse high-dimensional linear regression. Estimating squared error and a phase transition.
44. Inference in High-Dimensional Linear Regression via Lattice Basis Reduction and Integer Relation Detection.
45. Improved bounds on Gaussian MAC and sparse regression via Gaussian inequalities.
46. A Simple Bound on the BER of the MAP Decoder for Massive MIMO Systems.
47. All-or-Nothing Phenomena: From Single-Letter to High Dimensions.
48. Revealing Network Structure, Confidentially: Improved Rates for Node-Private Graphon Estimation.
49. A Note on the Density of Rational Functions in A∞(Ω).
50. Padé approximants, density of rational functions in A∞(Ω) and smoothness of the integration operator.