FedFQ: Federated Learning with Fine-Grained Quantization
- Publication Year: 2024
Abstract
- Federated learning (FL) is a decentralized approach that enables multiple participants to collaboratively train a model while protecting data privacy. Transmitting updates from numerous edge clusters to the server creates a significant communication bottleneck in FL. Quantization is an effective compression technique with considerable potential to alleviate this bottleneck, but the non-IID nature of FL data makes training sensitive to quantization. Existing quantized FL frameworks fail to balance high compression ratios against convergence performance because they coarsely apply a uniform quantization bit-width on the client side. In this work, we propose FedFQ, a communication-efficient FL algorithm with a fine-grained adaptive quantization strategy. FedFQ addresses the trade-off between achieving high communication compression ratios and maintaining superior convergence performance by introducing parameter-level quantization. Specifically, we design a Constraint-Guided Simulated Annealing algorithm to determine the specific quantization schemes. We derive the convergence of FedFQ, demonstrating its superior convergence performance compared to existing quantized FL algorithms. Extensive experiments on multiple benchmarks show that, while maintaining lossless performance, FedFQ achieves compression ratios of 27x to 63x compared to the baseline.
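The abstract does not spell out how the Constraint-Guided Simulated Annealing search works, so the following is only a minimal Python sketch of the general idea it describes: unbiased stochastic quantization of model updates, plus a simulated-annealing search that assigns a per-group bit-width under a total communication budget. All names and choices here (`stochastic_quantize`, `anneal_bit_widths`, the 1-8 bit range, the geometric cooling schedule, the uniform 4-bit starting point) are illustrative assumptions, not the authors' actual method or parameter-level granularity.

```python
import numpy as np

def stochastic_quantize(x, bits, rng):
    """Uniform quantization to 2**bits levels with unbiased stochastic rounding."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = (x - lo) / scale
    floor = np.floor(q)
    q = floor + (rng.random(x.shape) < (q - floor))  # round up w.p. fractional part
    return lo + q * scale

def quantization_error(x, bits, rng):
    """Stochastic estimate of the MSE introduced by quantizing x at this bit-width."""
    return float(np.mean((x - stochastic_quantize(x, bits, rng)) ** 2))

def anneal_bit_widths(groups, budget_bits, steps=2000, t0=1.0, t_end=1e-3, seed=0):
    """Simulated annealing over per-group bit-widths under a total bit budget.

    Starts from a uniform 4-bit assignment (assumed to be within budget) and
    proposes +/-1-bit moves on a random group; over-budget schemes are rejected.
    """
    rng = np.random.default_rng(seed)

    def cost(b):
        total = sum(int(bi) * g.size for bi, g in zip(b, groups))
        if total > budget_bits:  # constraint: infeasible schemes get infinite cost
            return np.inf
        return sum(quantization_error(g, int(bi), rng) for g, bi in zip(groups, b))

    cur = np.full(len(groups), 4)
    cur_cost = cost(cur)
    best, best_cost = cur.copy(), cur_cost
    for step in range(steps):
        t = t0 * (t_end / t0) ** (step / steps)  # geometric cooling schedule
        cand = cur.copy()
        i = rng.integers(len(groups))
        cand[i] = np.clip(cand[i] + rng.choice([-1, 1]), 1, 8)
        c = cost(cand)
        # accept improvements always, worse moves with Metropolis probability
        if c < cur_cost or rng.random() < np.exp((cur_cost - c) / t):
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand.copy(), c
    return best

# Example: assign bit-widths to three synthetic parameter groups at ~3 bits/param.
rng = np.random.default_rng(1)
groups = [rng.standard_normal(n) for n in (1000, 5000, 2000)]
budget = 3 * sum(g.size for g in groups)
print(anneal_bit_widths(groups, budget))
```

In this sketch, groups with a wider value range tend to receive more bits because they contribute more quantization error; the budget constraint plays the role of the target compression ratio.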
- Subjects: Computer Science - Distributed, Parallel, and Cluster Computing
Details
- Database: arXiv
- Publication Type: Report
- Accession number: edsarx.2408.08977
- Document Type: Working Paper