Your search for Author "Richtarik, Peter" returned 335 results.

Search Results

51. Catalyst Acceleration of Error Compensated Methods Leads to Better Communication Complexity

52. Convergence of First-Order Algorithms for Meta-Learning with Moreau Envelopes

53. Can 5th Generation Local Training Methods Support Client Sampling? Yes!

54. Adaptive Compression for Communication-Efficient Distributed Training

55. A Damped Newton Method Achieves Global $O\left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate

56. GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity

57. Provably Doubly Accelerated Federated Learning: The First Theoretically Successful Combination of Local Training and Communication Compression

58. Improved Stein Variational Gradient Descent with Importance Weights

59. EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression

60. Minibatch Stochastic Three Points Method for Unconstrained Smooth Minimization

61. Personalized Federated Learning with Communication Compression

62. Adaptive Learning Rates for Faster Stochastic Gradient Methods

63. RandProx: Primal-Dual Optimization Algorithms with Randomized Proximal Updates

64. Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning

65. Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with Inexact Prox

66. Shifted Compression Framework: Generalizations and Improvements

67. A Note on the Convergence of Mirrored Stein Variational Gradient Descent under $(L_0,L_1)-$Smoothness Condition

68. Federated Optimization Algorithms with Random Reshuffling and Gradient Compression

69. Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation

70. Certified Robustness in Federated Learning

71. Sharper Rates and Flexible Framework for Nonconvex SGD with Client and Data Sampling

72. Federated Learning with a Sampling Algorithm under Isoperimetry

73. Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top

74. Convergence of Stein Variational Gradient Descent under a Weaker Smoothness Condition

75. A Computation and Communication Efficient Method for Distributed Nonconvex Problems in the Partial Participation Setting

76. EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization

77. Federated Random Reshuffling with Compression and Variance Reduction

78. FedShuffle: Recipes for Better Use of Local Work in Federated Learning

79. ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!

80. FL_PyTorch: optimization research simulator for federated learning

81. Optimal Algorithms for Decentralized Stochastic Variational Inequalities

82. DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization

83. 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation

84. BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression

85. Server-Side Stepsizes and Sampling Without Replacement Provably Help in Federated Optimization

86. Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling

87. Faster Rates for Compressed Federated Learning with Client-Variance Reduction

88. FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning

89. Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning

90. Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees

91. Permutation Compressors for Provably Faster Distributed Nonconvex Optimization

92. EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback

93. Error Compensated Loopless SVRG, Quartz, and SDCA for Distributed Optimization

94. Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information

95. FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning

96. CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression

97. A Field Guide to Federated Optimization

98. EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback

99. Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks

100. Theoretically Better and Numerically Faster Distributed Optimization with Smoothness-Aware Quantization Techniques
