
FedMed-GAN: Federated Domain Translation on Unsupervised Cross-Modality Brain Image Synthesis

Authors :
Wang, Jinbao
Xie, Guoyang
Huang, Yawen
Lyu, Jiayi
Zheng, Yefeng
Zheng, Feng
Jin, Yaochu
Publication Year :
2022

Abstract

Utilizing multi-modal neuroimaging data has proven effective for investigating human cognitive activities and certain pathologies. However, it is not practical to obtain the full set of paired neuroimaging data centrally, since data collection faces several constraints, e.g., high examination cost, long acquisition time, and image corruption. In addition, these data are dispersed across different medical institutions and thus cannot be aggregated for centralized training due to privacy concerns. There is a clear need for federated learning to facilitate the integration of dispersed data from different institutions. In this paper, we propose a new benchmark for federated domain translation on unsupervised brain image synthesis (termed FedMed-GAN) to bridge the gap between federated learning and medical GANs. FedMed-GAN mitigates mode collapse without sacrificing generator performance, and it adapts to varying proportions of unpaired and paired data. We treat the gradient penalties with a federated averaging algorithm and then leverage differentially private gradient descent to regularize the training dynamics. A comprehensive evaluation comparing FedMed-GAN with other centralized methods shows that FedMed-GAN achieves new state-of-the-art performance. Our code has been released at: https://github.com/M-3LAB/FedMed-GAN
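
The following is a minimal sketch, in PyTorch-style Python, of the two ingredients the abstract names: federated averaging of locally trained client models and a differentially private (clipped and noised) gradient step. All function names, shapes, and hyperparameters here are illustrative assumptions, not the released FedMed-GAN implementation; see the repository linked above for the authors' code.

import copy
import torch
import torch.nn as nn

def dp_sgd_step(model: nn.Module, loss: torch.Tensor,
                lr: float = 1e-4, clip_norm: float = 1.0, noise_std: float = 0.01):
    """One differentially private gradient step: clip the global gradient norm,
    add Gaussian noise, then update in place. Hyperparameters are placeholders."""
    model.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                noisy_grad = p.grad + noise_std * torch.randn_like(p.grad)
                p -= lr * noisy_grad

def fed_avg(client_models):
    """Federated averaging: element-wise mean of the clients' parameters,
    returned as a state dict that the server can broadcast back."""
    avg_state = copy.deepcopy(client_models[0].state_dict())
    for key in avg_state:
        stacked = torch.stack([m.state_dict()[key].float() for m in client_models])
        avg_state[key] = stacked.mean(dim=0)
    return avg_state

In a federated GAN setting, each institution would run local generator/discriminator updates with dp_sgd_step on its own unpaired or paired data, and the server would periodically call fed_avg over the clients' generators and load the averaged weights back into each client.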

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2201.08953
Document Type :
Working Paper