Privacy-Preserving Deep Learning NLP Models for Cancer Registries
- Author
Brent J. Mumphrey, Linda Coyle, Georgia D. Tourassi, Shang Gao, Mohammed Alawad, Eric B. Durbin, Jong Cheol Jeong, David Rust, Lynne Penberthy, Xiao-Cheng Wu, Hong-Jun Yoon, and Isaac Hands
- Subjects
Vocabulary, Computer science, Deep learning, Population, Convolutional neural network, Article, Computer Science Applications, Data modeling, Human-Computer Interaction, Data sharing, Information extraction, Computer Science (miscellaneous), Artificial intelligence, Transfer learning, Natural language processing, Information Systems
- Abstract
Population cancer registries can benefit from deep learning (DL) to automatically extract cancer characteristics from the high volume of unstructured pathology text reports they process annually. The success of DL on this and other real-world problems depends on the availability of large labeled datasets for model training. Although collaboration among cancer registries is essential to fully exploit the promise of DL, privacy and confidentiality concerns are the main obstacles to data sharing across registries. Moreover, DL for natural language processing (NLP) requires sharing a vocabulary dictionary for the embedding layer, which may contain patient identifiers; thus, even distributing trained models across cancer registries raises privacy concerns. In this article, we propose distributing DL NLP models via privacy-preserving transfer learning approaches that do not share sensitive data. These approaches are used to distribute a multitask convolutional neural network (MT-CNN) NLP model among cancer registries. The model is trained to extract six key cancer characteristics (tumor site, subsite, laterality, behavior, histology, and grade) from cancer pathology reports. Using 410,064 pathology documents from two cancer registries, we compare our proposed approach to conventional transfer learning without privacy preservation, single-registry models, and a model trained on centrally hosted data. The results show that transfer learning approaches, including data sharing and model distribution, significantly outperform the single-registry model. In addition, the best-performing privacy-preserving model distribution approach achieves average micro- and macro-F1 scores (0.823, 0.580) that are statistically indistinguishable from those of the centralized model (0.827, 0.585) across all extraction tasks.
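As a concrete illustration of the model family the abstract describes, below is a minimal PyTorch sketch of a multitask CNN (MT-CNN) text classifier with one output head per extraction task. All layer sizes, filter widths, and per-task class counts are illustrative assumptions rather than the authors' configuration, and the closing weight-sharing snippet shows one plausible way to distribute a trained model while withholding the vocabulary-tied embedding layer; the paper's actual privacy-preserving mechanism may differ.

import torch
import torch.nn as nn

class MTCNN(nn.Module):
    """Minimal multitask CNN sketch; hyperparameters are placeholders."""
    def __init__(self, vocab_size, embed_dim=300,
                 filter_widths=(3, 4, 5), n_filters=100,
                 task_classes=(70, 320, 7, 4, 600, 9)):
        super().__init__()
        # Embedding rows are tied to a registry's private vocabulary,
        # which is why sharing this layer raises privacy concerns.
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, n_filters, w) for w in filter_widths)
        # One head per task (site, subsite, laterality, behavior,
        # histology, grade); the class counts above are placeholders.
        self.heads = nn.ModuleList(
            nn.Linear(n_filters * len(filter_widths), c)
            for c in task_classes)

    def forward(self, token_ids):
        x = self.embedding(token_ids).transpose(1, 2)   # (B, E, L)
        # Convolve, then max-pool over the document length.
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        doc = torch.cat(feats, dim=1)                   # shared representation
        return [head(doc) for head in self.heads]       # logits per task

model = MTCNN(vocab_size=50000)
logits = model(torch.randint(1, 50000, (2, 1500)))      # 2 documents

# One way to distribute the model without exposing the vocabulary-tied
# embedding layer (an illustration of the idea, not the paper's exact
# procedure): share only the non-embedding weights.
shared = {k: v for k, v in model.state_dict().items()
          if not k.startswith("embedding")}

Withholding the embedding keeps every vocabulary-dependent parameter local, so a receiving registry would fit a fresh embedding over its own token dictionary before fine-tuning the shared layers.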
- Published
2022