59 results for "Dmitry Kovalev"
Search Results
2. Is Consensus Acceleration Possible in Decentralized Optimization over Slowly Time-Varying Networks?
3. Stochastic distributed learning with gradient quantization and double-variance reduction.
4. Decentralized convex optimization on time-varying networks with application to Wasserstein barycenters.
5. Decentralized saddle-point problems with different constants of strong convexity and strong concavity.
6. An Optimal Algorithm for Strongly Convex Minimization under Affine Constraints.
7. Near-Optimal Decentralized Algorithms for Saddle Point Problems over Time-Varying Networks.
8. Towards Accelerated Rates for Distributed Optimization over Time-Varying Networks.
9. Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks.
10. A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free!
11. Comparison of Data-Driven Approaches to Modeling Complex Behavior of 2D Liquid Simulator.
12. ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks.
13. Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop.
14. Revisiting Stochastic Extragradient.
15. From Local SGD to Local Fixed-Point Methods for Federated Learning.
16. Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization.
17. Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems.
18. IntSGD: Adaptive Floatless Compression of Stochastic Gradients.
19. Optimal Algorithms for Decentralized Stochastic Variational Inequalities.
20. Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling.
21. The First Optimal Acceleration of High-Order Methods in Smooth Convex Optimization.
22. Optimal Gradient Sliding and its Application to Optimal Distributed Optimization Under Similarity.
23. Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with an Inexact Prox.
24. The First Optimal Algorithm for Smooth and Strongly-Convex-Strongly-Concave Minimax Optimization.
25. RSN: Randomized Subspace Newton.
26. Stochastic Proximal Langevin Algorithm: Potential Splitting and Nonasymptotic Rates.
27. Similarity learning for wells based on logging data.
28. An Optimal Algorithm for Strongly Convex Min-min Optimization.
29. Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with Inexact Prox.
30. Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems - Survey.
31. The First Optimal Acceleration of High-Order Methods in Smooth Convex Optimization.
32. On Scaled Methods for Saddle Point Problems.
33. Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity.
34. The First Optimal Algorithm for Smooth and Strongly-Convex-Strongly-Concave Minimax Optimization.
35. Optimal Algorithms for Decentralized Stochastic Variational Inequalities.
36. Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs.
37. Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization.
38. Linearly Converging Error Compensated SGD.
39. Stochastic Spectral and Conjugate Descent Methods.
40. Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling.
41. Decentralized Distributed Optimization for Saddle Point Problems.
42. Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks.
43. ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks.
44. IntSGD: Floatless Compression of Stochastic Gradients.
45. Search for Gender Difference in Functional Connectivity of Resting State fMRI.
46. Estimation of Scientific Hypotheses Quality in Virtual Experiments in Data Intensive Domains.
47. Linearly Converging Error Compensated SGD.
48. Fast Linear Convergence of Randomized BFGS.
49. From Local SGD to Local Fixed Point Methods for Federated Learning.
50. Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization.