Search

Your search for Author "Richtárik, Peter" returned 701 results.


Search Results

1. On the Convergence of DP-SGD with Adaptive Clipping

2. MARINA-P: Superior Performance in Non-smooth Federated Optimization with Adaptive Stepsizes

3. Differentially Private Random Block Coordinate Descent

4. Speeding up Stochastic Proximal Optimization in the High Hessian Dissimilarity Setting

5. Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization

6. Pushing the Limits of Large Language Model Quantization via the Linearity Theorem

7. Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum

8. Tighter Performance Theory of FedExProx

9. Unlocking FedNL: Self-Contained Compute-Optimized Implementation

10. Randomized Asymmetric Chain of LoRA: The First Meaningful Theoretical Framework for Low-Rank Adaptation

11. MindFlayer: Efficient Asynchronous Parallel SGD in the Presence of Heterogeneous and Random Worker Compute Times

12. On the Convergence of FedProx with Extrapolation and Inexact Prox

13. Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity

14. Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning

15. Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning

16. SPAM: Stochastic Proximal Point Method with Momentum Variance Reduction for Non-convex Cross-Device Federated Learning

17. A Simple Linear Convergence Analysis of the Point-SAGA Algorithm

18. Local Curvature Descent: Squeezing More Curvature out of Standard and Polyak Gradient Descent

19. On the Optimal Time Complexities in Decentralized Stochastic Asynchronous Optimization

20. A Unified Theory of Stochastic Proximal Point Methods without Smoothness

21. MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence

22. Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations

23. Stochastic Proximal Point Methods for Monotone Inclusions under Expected Similarity

24. PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression

25. The Power of Extrapolation in Federated Learning

26. FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity

27. FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models

28. Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction

29. LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression

30. Error Feedback Reloaded: From Quadratic to Arithmetic Mean of Smoothness Constants

31. Improving the Worst-Case Bidirectional Communication Complexity for Nonconvex Distributed Optimization under Function Similarity

32. Shadowheart SGD: Distributed Asynchronous SGD with Optimal Time Complexity Under Arbitrary Computation and Communication Heterogeneity

33. Correlated Quantization for Faster Nonconvex Distributed Optimization

34. Kimad: Adaptive Gradient Compression with Bandwidth Awareness

35. Federated Learning is Better with Non-Homomorphic Encryption

36. Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just Clip Gradient Differences

37. Consensus-Based Optimization with Truncated Noise

38. Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates

39. Variance Reduced Distributed Non-Convex Optimization Using Matrix Stepsizes

40. High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise

41. Towards a Better Theoretical Understanding of Independent Subnetwork Training

42. Understanding Progressive Training Through the Framework of Randomized Coordinate Descent

43. Improving Accelerated Federated Learning with Compression and Importance Sampling

44. Clip21: Error Feedback for Gradient Clipping

45. Global-QSGD: Practical Floatless Quantization for Distributed Learning with Theoretical Guarantees

46. A Guide Through the Zoo of Biased SGD

47. Error Feedback Shines when Features are Rare

48. Momentum Provably Improves Error Feedback!

49. Explicit Personalization and Local Training: Double Communication Acceleration in Federated Learning

50. Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization
