Search

Search for author "Loizou, Nicolas" returned 48 results.


Search Results

1. Multiplayer Federated Learning: Reaching Equilibrium with Less Communication

2. Stochastic Polyak Step-sizes and Momentum: Convergence Guarantees and Practical Performance

3. Dissipative Gradient Descent Ascent Method: A Control Theory Inspired Algorithm for Min-max Optimization

4. Stochastic Extragradient with Random Reshuffling: Improved Convergence for Variational Inequalities

5. Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad

6. Locally Adaptive Federated Learning

7. Communication-Efficient Gradient Descent-Accent Methods for Distributed Variational Inequalities: Unified Analysis and Local Updates

8. Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions

9. A Unified Approach to Reinforcement Learning, Quantal Response Equilibria, and Two-Player Zero-Sum Games

10. Dynamics of SGD with Stochastic Polyak Stepsizes: Truly Adaptive Variants and Convergence to Exact Solution

11. Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods

12. Stochastic Extragradient: General Analysis and Improved Rates

13. Stochastic Mirror Descent: Convergence Analysis and Adaptive Variants via the Mirror Stochastic Polyak Stepsize

14. Extragradient Method: $O(1/K)$ Last-Iterate Convergence for Monotone Variational Inequalities and Connections With Cocoercivity

16. On the Convergence of Stochastic Extragradient for Bilinear Games using Restarted Iteration Averaging

17. Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity

18. AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods

19. Stochastic Hamiltonian Gradient Methods for Smooth Games

20. Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization

21. SGD for Structured Nonconvex Functions: Learning Rates, Minibatching and Interpolation

22. A Unified Theory of Decentralized SGD with Changing Topology and Local Updates

23. Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence

24. Randomized Iterative Methods for Linear Systems: Momentum, Inexactness and Gossip

25. Revisiting Randomized Gossip Algorithms: General Framework, Convergence Rates and Novel Block and Accelerated Protocols

26. Convergence Analysis of Inexact Randomized Iterative Methods

27. A Privacy Preserving Randomized Gossip Algorithm via Controlled Noise Insertion

28. SGD: General Analysis and Improved Rates

29. Stochastic Gradient Push for Distributed Deep Learning

30. Provably Accelerated Randomized Gossip Algorithms

31. Accelerated Gossip via Stochastic Heavy Ball Method

32. Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods

33. Linearly convergent stochastic heavy ball method for minimizing generalization error

34. Privacy Preserving Randomized Gossip Algorithms

35. A New Perspective on Randomized Gossip Algorithms

36. Distributionally Robust Games with Risk-averse Players

37. Distributionally Robust Game Theory

40. Locally Adaptive Federated Learning via Stochastic Polyak Stepsizes
