Search

Your search for Author "Richtárik, Peter" returned 703 results.


Search Results

301. Adaptive Learning Rates for Faster Stochastic Gradient Methods

302. Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning

303. Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with Inexact Prox

304. Federated Random Reshuffling with Compression and Variance Reduction

305. FedShuffle: Recipes for Better Use of Local Work in Federated Learning

306. ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!

307. Optimal Algorithms for Decentralized Stochastic Variational Inequalities

308. FL_PyTorch: optimization research simulator for federated learning

309. 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation

310. DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization

311. BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression

312. Server-Side Stepsizes and Sampling Without Replacement Provably Help in Federated Optimization

313. A Note on the Convergence of Mirrored Stein Variational Gradient Descent under $(L_0,L_1)-$Smoothness Condition

314. Federated Optimization Algorithms with Random Reshuffling and Gradient Compression

315. Sharper Rates and Flexible Framework for Nonconvex SGD with Client and Data Sampling

317. RandProx: Primal-Dual Optimization Algorithms with Randomized Proximal Updates

318. A Damped Newton Method Achieves Global $O\left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate

321. FL_PyTorch

322. Stochastic distributed learning with gradient quantization and double-variance reduction

323. Theoretically Better and Numerically Faster Distributed Optimization with Smoothness-Aware Quantization Techniques

324. A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1

325. MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization

328. Faster Rates for Compressed Federated Learning with Client-Variance Reduction

329. EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback

330. Distributed Second Order Methods with Fast Rates and Compressed Communication

331. Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling

332. Error Compensated Loopless SVRG, Quartz, and SDCA for Distributed Optimization

333. Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees

334. FedNL: Making Newton-Type Methods Applicable to Federated Learning

335. An Optimal Algorithm for Strongly Convex Minimization under Affine Constraints

336. FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning

337. Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning

338. Permutation Compressors for Provably Faster Distributed Nonconvex Optimization

339. Error Compensated Distributed SGD can be Accelerated

340. Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information

341. FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning

342. CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression

343. EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback

344. Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks

345. Random Reshuffling with Variance Reduction: New Analysis and Better Rates

346. ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computation

347. Hyperparameter Transfer Learning with Adaptive Complexity

348. AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods

349. IntSGD: Adaptive Floatless Compression of Stochastic Gradients

350. MARINA: Faster Non-Convex Distributed Learning with Compression
