139 results for "Andrea Montanari"
Search Results
2. Scaling Training Data with Lossy Image Compression.
3. Scaling laws for learning with real and surrogate data.
4. On Smale's 17th problem over the reals.
5. Which exceptional low-dimensional projections of a Gaussian point cloud can be found in polynomial time?
6. Six Lectures on Linearized Neural Networks.
7. Towards a statistical theory of data selection under weak supervision.
8. Learning time-scales in two-layers neural networks.
9. Compressing Tabular Data via Latent Variable Estimation.
10. Sampling, Diffusions, and Stochastic Localization.
11. Sampling from the Sherrington-Kirkpatrick Gibbs measure via algorithmic stochastic localization.
12. Overparametrized linear dimensionality reductions: From projection pursuit to two-layer neural networks.
13. Universality of empirical risk minimization.
14. Adversarial Examples in Random Neural Networks with General Activations.
15. An Information-Theoretic View of Stochastic Localization.
16. Learning with invariances in random features and kernel models.
17. Minimum complexity interpolation in random features models.
18. Streaming Belief Propagation for Community Detection.
19. Local algorithms for Maximum Cut and Minimum Bisection on locally treelike regular graphs of large degree.
20. Deep learning: a statistical viewpoint.
21. Tractability from overparametrization: The example of the negative perceptron.
22. When Do Neural Networks Outperform Kernel Methods?
23. The Lasso with general Gaussian designs with applications to hypothesis testing.
24. The Interpolation Phase Transition in Neural Networks: Memorization and Generalization under Lazy Training.
25. The estimation error of general first order methods.
26. Underspecification Presents Challenges for Credibility in Modern Machine Learning.
27. Linearized two-layers neural networks in high dimension.
28. Surprises in High-Dimensional Ridgeless Least Squares Interpolation.
29. Analysis of a Two-Layer Neural Network via Displacement Convexity.
30. Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit.
31. On the computational tractability of statistical estimation on amenable graphs.
32. Limitations of Lazy Training of Two-layers Neural Networks.
33. On the Connection Between Learning Two-Layers Neural Networks and Tensor Decomposition.
34. Contextual Stochastic Block Models.
35. A Mean Field View of the Landscape of Two-Layers Neural Networks.
36. The threshold for SDP-refutation of random regular NAE-3SAT.
37. Adapting to Unknown Noise Distribution in Matrix Denoising.
38. Group Synchronization on Grids.
39. Learning Combinations of Sigmoids Through Gradient Estimation.
40. Inference in Graphical Models via Semidefinite Programming Hierarchies.
41. Fundamental Limits of Weak Recovery with Applications to Phase Retrieval.
42. Non-negative Matrix Factorization via Archetypal Analysis.
43. State Evolution for Approximate Message Passing with Non-Separable Functions.
44. Online Rules for Control of False Discovery Rate and False Discovery Exceedance.
45. How Well Do Local Algorithms Solve Semidefinite Programs?
46. Performance of a community detection algorithm based on semidefinite programming.
47. Spectral algorithms for tensor completion.
48. A Perspective on Future Research Directions in Information Theory.
49. Extremal Cuts of Sparse Random Graphs.
50. Finding One Community in a Sparse Graph.