Privacy-Preserving Machine Learning (PPML) Inference for Clinically Actionable Models
- Author
- Baris Balaban, Seyma Selcan Magara, Caglar Yilgor, Altug Yucekul, Ibrahim Obeid, Javier Pizones, Frank Kleinstueck, Francisco Javier Sanchez Perez-Grueso, Ferran Pellise, Ahmet Alanay, Erkay Savas, Cetin Bagci, and Osman Ugur Sezerman
- Subjects
- Homomorphic Encryption, Privacy-Preserving Machine Learning, XGBoost, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Machine learning (ML) refers to algorithms (often models) that are learned directly from data representing past experience. As algorithms continue to evolve with the exponential growth of computing power and the vast amounts of data being generated, the privacy of both algorithms and data becomes critically important due to regulations and intellectual property (IP) rights. It is therefore vital to address the privacy and security concerns of both data and model, alongside other performance metrics, when commercializing machine learning models. Our aim is to show that privacy-preserving machine learning inference methods can safeguard the intellectual property of models and prevent plaintext models from disclosing information about the sensitive data used in their training. Additionally, these methods protect the confidentiality of model users' sensitive patient data. We accomplish this by performing a security analysis to determine an appropriate query limit for each user, using the European Spine Study Group's (ESSG) adult spinal deformity dataset. We implement privacy-preserving tree-based machine learning inference and run two security scenarios (scenario A and scenario B), each containing four parts in which the number of synthetic data points, used to improve the accuracy of the attacker's substitute model, is progressively increased. In each scenario, a target model is trained on data from particular operation site(s), and substitute models are built with nine-time threefold cross-validation using the XGBoost algorithm on the remaining sites' data to assess the security of the target model. First, we create box plots of the test sets' accuracy, sensitivity, precision, and F-score to compare the substitute models' performance with that of the target model. Second, we compare the gain values of the target and substitute models' features. Third, we provide an in-depth analysis, visualized as a heatmap, of whether the target model's split points appear in the substitute models. Finally, we compare the outputs of the public and privacy-preserving models and report intermediate timing results. In both scenarios, the privacy-preserving XGBoost model's results are identical to those of the original plaintext model in terms of prediction accuracy. The differences between the performance metrics of the best-performing substitute models and the target models are 0.27, 0.18, 0.25, and 0.26 for scenario A, and 0.04, 0, 0.04, and 0.03 for scenario B, for accuracy, sensitivity, precision, and F-score, respectively. The differences between the target model's accuracy and the mean accuracy of the models in each scenario on the substitute models' test dataset are 0.38 for scenario A and 0.14 for scenario B. Based on our findings, we conclude that machine learning models (i.e., our target models) may contribute to advances in the field in which they are deployed. Ensuring the security of both the model and the user data protects the intellectual property of ML models and prevents the leakage of the sensitive information used in training as well as of model users' data.
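The substitute-model security analysis summarized above can be illustrated with a short sketch. The code below is a minimal, hypothetical reconstruction, not the authors' implementation: the data are randomly generated stand-ins for the ESSG dataset, the hyperparameters, feature count, and train/test split are assumptions, and the homomorphically encrypted inference step is omitted (a plain XGBoost model stands in for the black-box target). It uses scikit-learn's RepeatedStratifiedKFold for the nine-time threefold cross-validation and XGBoost's gain importances and trees_to_dataframe() to inspect the split points that the paper compares via a heatmap.

```python
import numpy as np
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, train_test_split
from sklearn.metrics import accuracy_score, recall_score, precision_score, f1_score

rng = np.random.default_rng(0)

# Illustrative stand-in data (NOT the ESSG dataset): one "target" operation site,
# the attacker's own data from the remaining sites, and synthetic query points.
n_features = 10
X_site = rng.normal(size=(400, n_features))
y_site = (X_site[:, 0] + X_site[:, 1] > 0).astype(int)
X_other_sites = rng.normal(size=(600, n_features))
X_synthetic = rng.normal(size=(200, n_features))

# Hold out part of the target site's data as a common test set.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_site, y_site, test_size=0.3, random_state=0, stratify=y_site)

# Target model, trained only on the target site's training data.
target_model = XGBClassifier(n_estimators=50, max_depth=3)
target_model.fit(X_tr, y_tr)

# The attacker labels its data by querying the black-box target model;
# synthetic points are added to improve the substitute model's coverage.
X_attack = np.vstack([X_other_sites, X_synthetic])
y_attack = target_model.predict(X_attack)

# Substitute models: nine repetitions of threefold cross-validation.
cv = RepeatedStratifiedKFold(n_splits=3, n_repeats=9, random_state=0)
rows = []
for train_idx, _ in cv.split(X_attack, y_attack):
    sub = XGBClassifier(n_estimators=50, max_depth=3)
    sub.fit(X_attack[train_idx], y_attack[train_idx])
    y_pred = sub.predict(X_te)
    rows.append({
        "accuracy": accuracy_score(y_te, y_pred),
        "sensitivity": recall_score(y_te, y_pred),
        "precision": precision_score(y_te, y_pred),
        "f_score": f1_score(y_te, y_pred),
    })
substitute_metrics = pd.DataFrame(rows)

# Target-model metrics on the same test set, for comparison with the box plots.
y_pred_target = target_model.predict(X_te)
print("target accuracy:", accuracy_score(y_te, y_pred_target))
print("substitute metrics (mean):\n", substitute_metrics.mean())

# Gain values and split points of the target model, the quantities the paper
# compares against those of the substitute models.
booster = target_model.get_booster()
print("gain importances:", booster.get_score(importance_type="gain"))
splits = booster.trees_to_dataframe()
print(splits.loc[splits["Feature"] != "Leaf", ["Feature", "Split"]].head())
```

Note that the attacker in this sketch needs only label queries to the target model, which is why the paper's mitigation is a per-user query limit derived from how quickly the substitute models' metrics, gain values, and split points converge to those of the target.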
- Published
- 2025
- Full Text
- View/download PDF