Search

Showing 302 results

Search Constraints

You searched for:
Topic: computer science - distributed, parallel, and cluster computing
Topic: computer science - machine learning
Topic: mathematics - optimization and control
Publication Type: Reports

Search Results

1. Local Methods with Adaptivity via Scaling

2. The Privacy Power of Correlated Noise in Decentralized Learning

3. Estimation Network Design framework for efficient distributed optimization

4. FADAS: Towards Federated Adaptive Asynchronous Optimization

5. A New Theoretical Perspective on Data Heterogeneity in Federated Optimization

6. AGD: an Auto-switchable Optimizer using Stepwise Gradient Difference for Preconditioning Matrix

7. Online Distributed Learning with Quantized Finite-Time Coordination

8. Towards Dynamic Resource Allocation and Client Scheduling in Hierarchical Federated Learning: A Two-Phase Deep Reinforcement Learning Approach

9. Achieving Near-Optimal Convergence for Distributed Minimax Optimization with Adaptive Stepsizes

10. The Limits and Potentials of Local SGD for Distributed Heterogeneous Learning with Intermittent Communication

11. Distributed Fractional Bayesian Learning for Adaptive Optimization

12. Mobilizing Personalized Federated Learning in Infrastructure-Less and Heterogeneous Environments via Random Walk Stochastic ADMM

13. Communication Efficient Distributed Training with Distributed Lion

14. On Distributed Larger-Than-Memory Subset Selection With Pairwise Submodular Functions

15. Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities

16. Beyond spectral gap (extended): The role of the topology in decentralized learning

17. A Single-Loop Algorithm for Decentralized Bilevel Optimization

18. High-Performance Hybrid Algorithm for Minimum Sum-of-Squares Clustering of Infinitely Tall Data

19. Asynchronous SGD on Graphs: a Unified Framework for Asynchronous Decentralized and Federated Optimization

20. Robust Distributed Learning: Tight Error Bounds and Breakdown Point under Data Heterogeneity

21. FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data

22. Edge Generation Scheduling for DAG Tasks Using Deep Reinforcement Learning

23. Stochastic Controlled Averaging for Federated Learning with Communication Compression

24. UniAP: Unifying Inter- and Intra-Layer Automatic Parallelism by Mixed Integer Quadratic Programming

25. Federated Minimax Optimization: Improved Convergence Analyses and Algorithms

26. FIARSE: Model-Heterogeneous Federated Learning via Importance-Aware Submodel Extraction

27. CONGO: Compressive Online Gradient Optimization with Application to Microservices Management

28. Accelerating Distributed Optimization: A Primal-Dual Perspective on Local Steps

29. Decentralized Multi-Task Stochastic Optimization With Compressed Communications

30. Understanding How Consistency Works in Federated Learning via Stage-wise Relaxed Initialization

31. A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging

32. SimFBO: Towards Simple, Flexible and Communication-efficient Federated Bilevel Learning

33. Unbiased Compression Saves Communication in Distributed Optimization: When and How Much?

34. Towards More Suitable Personalization in Federated Learning via Decentralized Partial Model Training

35. Lower Bounds and Accelerated Algorithms in Distributed Stochastic Optimization with Communication Compression

36. Revisiting Gradient Clipping: Stochastic bias and tight convergence guarantees

37. FAST-PCA: A Fast and Exact Algorithm for Distributed Principal Component Analysis

38. Communication-efficient SGD: From Local SGD to One-Shot Averaging

39. AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks

40. Subspace based Federated Unlearning

41. FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy

42. Sparse-SignSGD with Majority Vote for Communication-Efficient Distributed Learning

43. FedDA: Faster Framework of Local Adaptive Gradient Methods via Restarted Dual Averaging

44. Decentralized Riemannian Algorithm for Nonconvex Minimax Problems

45. CEDAS: A Compressed Decentralized Stochastic Gradient Method with Improved Convergence

46. Distributed Principal Subspace Analysis for Partitioned Big Data: Algorithms, Analysis, and Implementation

47. Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity

48. Communication-efficient Vertical Federated Learning via Compressed Error Feedback

49. Decentralized Optimization in Time-Varying Networks with Arbitrary Delays

50. Decentralized Directed Collaboration for Personalized Federated Learning