
Certifiably-Robust Federated Adversarial Learning via Randomized Smoothing

Authors :
Chen, Cheng
Kailkhura, Bhavya
Goldhahn, Ryan
Zhou, Yi
Publication Year :
2021

Abstract

Federated learning is an emerging data-private distributed learning framework that is, however, vulnerable to adversarial attacks. Although several heuristic defenses have been proposed to enhance the robustness of federated learning, they do not provide certifiable robustness guarantees. In this paper, we incorporate randomized smoothing techniques into federated adversarial training to enable data-private distributed learning with certifiable robustness to test-time adversarial perturbations. Our experiments show that this federated adversarial learning framework delivers models as robust as those trained by centralized training, and that it yields classifiers provably robust to $\ell_2$-bounded adversarial perturbations in a distributed setup. We also show that a one-point gradient estimation based training approach is $2$-$3\times$ faster than the popular stochastic estimator based approach, with no noticeable difference in certified robustness.

Comment: 9 pages, 12 figures
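The certification the abstract refers to follows the standard randomized smoothing recipe (Cohen et al., 2019): classify many Gaussian-perturbed copies of an input, take the majority class, and convert a lower confidence bound on its vote share into a certified $\ell_2$ radius. Below is a minimal sketch of that step; `model`, `certify_l2`, and all hyperparameter values are illustrative assumptions, not the paper's actual code.

    import torch
    from scipy.stats import beta, norm

    def certify_l2(model, x, sigma=0.25, n=1000, alpha=0.001, num_classes=10):
        # Classify n Gaussian-perturbed copies of x and tally the votes.
        counts = torch.zeros(num_classes, dtype=torch.long)
        with torch.no_grad():
            for _ in range(n):
                noisy = x + sigma * torch.randn_like(x)
                counts[model(noisy.unsqueeze(0)).argmax().item()] += 1
        c = counts.argmax().item()
        k = counts[c].item()
        # Clopper-Pearson lower confidence bound on the top-class probability.
        p_lower = beta.ppf(alpha, k, n - k + 1)
        if p_lower <= 0.5:
            return None, 0.0  # abstain: majority class not certified
        # Certified l2 radius from the randomized smoothing bound.
        return c, sigma * norm.ppf(p_lower)

As for the reported $2$-$3\times$ speedup, one common reading is a zeroth-order setup in which a one-point estimator needs a single loss evaluation per query while the stochastic two-point (difference) estimator needs two, which is consistent with roughly halved query cost. The contrast is sketched below under that assumption; `loss_fn` and the smoothing parameter `mu` are hypothetical names, and the paper's exact estimator may differ.

    def one_point_grad(loss_fn, x, mu=0.01):
        # One loss evaluation per gradient estimate.
        u = torch.randn_like(x)
        return (loss_fn(x + mu * u) / mu) * u

    def two_point_grad(loss_fn, x, mu=0.01):
        # Two loss evaluations per estimate (stochastic difference).
        u = torch.randn_like(x)
        return ((loss_fn(x + mu * u) - loss_fn(x - mu * u)) / (2 * mu)) * u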

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1269539083
Document Type :
Electronic Resource
Full Text :
https://doi.org/10.1109/MASS52906.2021.00032