274 results for "Andrea Montanari"
Search Results
2. Scaling Training Data with Lossy Image Compression.
3. A Friendly Tutorial on Mean-Field Spin Glass Techniques for Non-Physicists.
4. Towards a statistical theory of data selection under weak supervision.
5. Compressing Tabular Data via Latent Variable Estimation.
6. High-dimensional logistic regression with missing data: Imputation, regularization, and universality.
7. Scaling Training Data with Lossy Image Compression.
8. Scaling laws for learning with real and surrogate data.
9. On Smale's 17th problem over the reals.
10. Which exceptional low-dimensional projections of a Gaussian point cloud can be found in polynomial time?
11. Local algorithms for maximum cut and minimum bisection on locally treelike regular graphs of large degree.
12. Sampling from the Sherrington-Kirkpatrick Gibbs measure via algorithmic stochastic localization.
13. Universality of empirical risk minimization.
14. High-Dimensional Projection Pursuit: Outer Bounds and Applications to Interpolation in Neural Networks.
15. An Information-Theoretic View of Stochastic Localization.
16. Underspecification Presents Challenges for Credibility in Modern Machine Learning.
17. Six Lectures on Linearized Neural Networks.
18. Towards a statistical theory of data selection under weak supervision.
19. Learning time-scales in two-layers neural networks.
20. Compressing Tabular Data via Latent Variable Estimation.
21. Sampling, Diffusions, and Stochastic Localization.
22. Learning with invariances in random features and kernel models.
23. Streaming Belief Propagation for Community Detection.
24. The estimation error of general first order methods.
25. Deep learning: a statistical viewpoint.
26. Sampling from the Sherrington-Kirkpatrick Gibbs measure via algorithmic stochastic localization.
27. Overparametrized linear dimensionality reductions: From projection pursuit to two-layer neural networks.
28. Universality of empirical risk minimization.
29. Adversarial Examples in Random Neural Networks with General Activations.
30. Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit.
31. Optimization of the Sherrington-Kirkpatrick Hamiltonian.
32. Limitations of Lazy Training of Two-layers Neural Network.
33. On the Connection Between Learning Two-Layer Neural Networks and Tensor Decomposition.
34. An Instability in Variational Inference for Topic Models.
35. The threshold for SDP-refutation of random regular NAE-3SAT.
36. Fundamental Limits of Weak Recovery with Applications to Phase Retrieval.
37. Contextual Stochastic Block Models.
38. When Do Neural Networks Outperform Kernel Methods?
39. An Information-Theoretic View of Stochastic Localization.
40. Learning with invariances in random features and kernel models.
41. Minimum complexity interpolation in random features models.
42. Streaming Belief Propagation for Community Detection.
43. Local algorithms for Maximum Cut and Minimum Bisection on locally treelike regular graphs of large degree.
44. Deep learning: a statistical viewpoint.
45. Tractability from overparametrization: The example of the negative perceptron.
46. Fundamental Limits of Weak Recovery with Applications to Phase Retrieval.
47. Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality.
48. How well do local algorithms solve semidefinite programs?
49. Inference in Graphical Models via Semidefinite Programming Hierarchies.
50. Universality of the elastic net error.
Discovery Service for Jio Institute Digital Library