
FedEAT: A Robustness Optimization Framework for Federated LLMs

Authors:
Pang, Yahao
Wu, Xingyuan
Zhang, Xiaojin
Chen, Wei
Jin, Hai
Publication Year: 2025

Abstract

Large Language Models (LLMs) have made significant advances in natural language understanding and automated content creation. However, they still face persistent problems, including substantial computational costs and limited availability of training data. Combining Federated Learning (FL) with LLMs (federated LLMs) offers a solution by leveraging distributed data while protecting privacy, which makes it well suited to sensitive domains. However, federated LLMs still suffer from robustness challenges, including data heterogeneity, malicious clients, and adversarial attacks, which greatly hinder their applications. We first introduce the robustness problems in federated LLMs; to address these challenges, we propose FedEAT (Federated Embedding space Adversarial Training), a novel framework that applies adversarial training in the embedding space of the client LLMs and employs a robust aggregation approach, specifically geometric median aggregation, to enhance the robustness of federated LLMs. Our experiments demonstrate that FedEAT effectively improves the robustness of federated LLMs with minimal performance loss.
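
The abstract names two concrete mechanisms: adversarial training in the embedding space of each client LLM, and geometric median aggregation on the server. The paper's exact algorithm is not given here, so the following is only a minimal sketch of those two ingredients. The single FGSM-style perturbation step, the step size epsilon, the iteration/tolerance settings, and the HuggingFace-style inputs_embeds interface are all assumptions for illustration.

    import torch

    def embedding_adversarial_loss(model, input_ids, labels, epsilon=1e-3):
        # One adversarial-training step in the embedding space of a client LLM.
        # Assumes a HuggingFace-style causal LM that accepts `inputs_embeds`;
        # the single FGSM-style step and epsilon are illustrative choices,
        # not necessarily the paper's recipe.
        embeds = model.get_input_embeddings()(input_ids).detach().requires_grad_(True)
        clean_loss = model(inputs_embeds=embeds, labels=labels).loss
        grad = torch.autograd.grad(clean_loss, embeds)[0]
        adv_embeds = embeds + epsilon * grad.sign()  # perturb embeddings, not tokens
        adv_loss = model(inputs_embeds=adv_embeds.detach(), labels=labels).loss
        return clean_loss + adv_loss  # train on clean and adversarial inputs

    def geometric_median(client_updates, iters=100, tol=1e-6):
        # Robust server-side aggregation via Weiszfeld's algorithm.
        # `client_updates` is an (n_clients, d) tensor of flattened model
        # deltas; the geometric median down-weights outlying (possibly
        # malicious) client contributions, unlike a plain FedAvg mean.
        z = client_updates.mean(dim=0)
        for _ in range(iters):
            dists = torch.norm(client_updates - z, dim=1).clamp_min(tol)
            weights = 1.0 / dists
            z_next = (weights[:, None] * client_updates).sum(0) / weights.sum()
            if torch.norm(z_next - z) < tol:
                break
            z = z_next
        return z

In a round of federated training, each client would minimize embedding_adversarial_loss locally, and the server would replace the usual averaging step with geometric_median over the clients' flattened updates.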

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2502.11863
Document Type: Working Paper