1. Software-Defined GPU-CPU Empowered Efficient Wireless Federated Learning With Embedding Communication Coding for Beyond 5G
- Authors
Zihong Li, Yang Hong, Ali Kashif Bashir, Yasser D. Al-Otaibi, and Jun Wu
- Subjects
Internet of Things, wireless federated learning, LDPC, GPU-CPU, Telecommunication, TK5101-6720, Transportation and communications, HE1-9990
- Abstract
Currently, with the widespread deployment of the intelligent Internet of Things (IoT) in beyond-5G networks, wireless federated learning (WFL) has attracted considerable attention as a way to construct and share knowledge among a huge number of distributed edge devices. However, under unstable wireless channel conditions, existing WFL schemes face the following challenges. First, learning model parameters are disturbed by bit errors caused by interference and noise during wireless transmission, which degrades the training accuracy and the loss of the learning model. Second, traditional edge devices with CPU-only acceleration are inefficient because of their low computational throughput, especially when accelerating the encoding and decoding processes of wireless transmission. Third, current hardware-level GPU acceleration methods cannot optimize complex operations, such as the complex wireless coding required in the WFL environment. To address these challenges, we propose a software-defined GPU-CPU empowered efficient WFL architecture with embedded LDPC communication coding. Specifically, we embed wireless channel coding into the server-side weight aggregation and the client-side local training processes to resist interruptions of the learning process, and we design a GPU-CPU acceleration scheme for this architecture. Experimental results demonstrate the architecture's anti-interference and GPU-CPU acceleration capabilities during wireless transmission, achieving 10 times the error control capability of existing WFL schemes and a 100 times speedup over them.
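To make the idea of LDPC-protected weight exchange concrete, below is a minimal sketch of one client update being channel-coded before wireless transmission and decoded before server-side aggregation. It assumes the third-party pyldpc package (make_ldpc, encode, decode, get_message) and an AWGN channel at an assumed SNR; the helper functions and parameter values are illustrative, not the authors' implementation or their GPU-CPU acceleration scheme.

```python
import numpy as np
from pyldpc import make_ldpc, encode, decode, get_message

SNR_DB = 8                                      # assumed channel SNR for pyldpc's AWGN model
H, G = make_ldpc(60, 2, 3, systematic=True, sparse=True)
n, k = G.shape                                  # codeword length n, message length k

def weights_to_bits(w: np.ndarray) -> np.ndarray:
    """Serialize float32 weights into a flat bit array."""
    return np.unpackbits(np.frombuffer(w.astype(np.float32).tobytes(), dtype=np.uint8))

def bits_to_weights(bits: np.ndarray, shape) -> np.ndarray:
    """Inverse of weights_to_bits (assumes all bit errors were corrected)."""
    return np.frombuffer(np.packbits(bits).tobytes(), dtype=np.float32).reshape(shape)

def transmit_ldpc(w: np.ndarray) -> np.ndarray:
    """Encode weights block by block, pass through the noisy channel, decode."""
    bits = weights_to_bits(w)
    pad = (-len(bits)) % k                      # zero-pad to a multiple of the message length
    blocks = np.concatenate([bits, np.zeros(pad, dtype=np.uint8)]).reshape(-1, k)
    recovered = []
    for msg in blocks:
        y = encode(G, msg, SNR_DB)              # BPSK modulation + AWGN inside pyldpc
        d = decode(H, y, SNR_DB, maxiter=100)   # belief-propagation decoding
        recovered.append(get_message(G, d))
    out = np.concatenate(recovered)[:len(bits)]
    return bits_to_weights(out, w.shape)

# One client update protected end-to-end before it reaches weight aggregation.
client_update = np.random.randn(4, 8).astype(np.float32)
received = transmit_ldpc(client_update)
print("max reconstruction error:", np.abs(received - client_update).max())
```

In the same spirit, the reverse direction (server broadcasting aggregated weights back to clients) would reuse transmit_ldpc; the block-wise encode/decode loop is also the part a GPU implementation would parallelize.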
- Published
2023