Does Differentially Private Synthetic Data Lead to Synthetic Discoveries?
- Source :
- Methods of information in medicine [Methods Inf Med] 2024 May; Vol. 63 (1-02), pp. 35-51. Date of Electronic Publication: 2024 Aug 13.
- Publication Year :
- 2024
Abstract
- Background: Synthetic data have been proposed as a solution for sharing anonymized versions of sensitive biomedical datasets. Ideally, synthetic data should preserve the structure and statistical properties of the original data while protecting the privacy of the individual subjects. Differential Privacy (DP) is currently considered the gold standard approach for balancing this trade-off.
Objectives: The aim of this study is to investigate how trustworthy the group differences discovered by independent sample tests from DP-synthetic data are. The evaluation is carried out in terms of the tests' Type I and Type II errors. The former quantifies the tests' validity, i.e., whether the probability of false discoveries is indeed below the significance level, and the latter indicates the tests' power in making real discoveries.
Methods: We evaluate the Mann-Whitney U test, Student's t-test, chi-squared test, and median test on DP-synthetic data. The private synthetic datasets are generated from real-world data, including a prostate cancer dataset (n = 500) and a cardiovascular dataset (n = 70,000), as well as from bivariate and multivariate simulated data. Five different DP-synthetic data generation methods are evaluated, including two basic DP histogram release methods and the MWEM, Private-PGM, and DP GAN algorithms.
Conclusion: A large portion of the evaluation results showed dramatically inflated Type I errors, especially at levels of ϵ ≤ 1. This result calls for caution when releasing and analyzing DP-synthetic data: low p-values may be obtained in statistical tests simply as a byproduct of the noise added to protect privacy. A DP Smoothed Histogram-based synthetic data generation method was shown to produce valid Type I errors at all privacy levels tested, but it required a large original dataset and a modest privacy budget (ϵ ≥ 5) to achieve reasonable Type II error levels.
Competing Interests: None declared.
(© The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution License, permitting unrestricted use, distribution, and reproduction so long as the original work is properly cited. https://creativecommons.org/licenses/by/4.0/)
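The abstract's central caveat, that noise added for privacy can itself produce spuriously low p-values, can be reproduced with a short simulation. The sketch below is illustrative only and is not the authors' pipeline: it implements one basic ϵ-DP perturbed-histogram release (Laplace noise of scale 1/ϵ on bin counts, since one record changes one count by at most 1), resamples synthetic data from the noisy histogram, and measures the empirical Type I error of the Mann-Whitney U test when both groups actually come from the same distribution. The bin count, ϵ, sample sizes, and repetition count are all assumed values.

```
# Minimal sketch: empirical Type I error of the Mann-Whitney U test on
# synthetic data drawn from an epsilon-DP perturbed histogram.
# Illustrative assumptions: 20 bins, epsilon = 1.0, n = 250 per group.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

def dp_histogram_synth(data, bins, epsilon, n_synth, rng):
    """Release a Laplace-noised histogram (sensitivity 1, so noise scale
    1/epsilon) and resample n_synth synthetic records from it."""
    counts, edges = np.histogram(data, bins=bins)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 0, None)            # negative counts are invalid
    probs = noisy / noisy.sum()
    idx = rng.choice(len(probs), size=n_synth, p=probs)
    # Draw each synthetic value uniformly within its chosen bin.
    return rng.uniform(edges[idx], edges[idx + 1])

n, epsilon, reps, alpha = 250, 1.0, 1000, 0.05
rejections = 0
for _ in range(reps):
    # Both groups come from the SAME distribution, so every rejection
    # below alpha is a false discovery (Type I error).
    a = rng.normal(0.0, 1.0, size=n)
    b = rng.normal(0.0, 1.0, size=n)
    syn_a = dp_histogram_synth(a, bins=20, epsilon=epsilon, n_synth=n, rng=rng)
    syn_b = dp_histogram_synth(b, bins=20, epsilon=epsilon, n_synth=n, rng=rng)
    _, p = mannwhitneyu(syn_a, syn_b)
    rejections += p < alpha

print(f"Empirical Type I error at eps={epsilon}: {rejections / reps:.3f}")
# A valid test should stay near alpha = 0.05; rates well above that at
# small epsilon mirror the paper's "synthetic discoveries" warning.
```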
- Subjects :
- Humans
Privacy
Prostatic Neoplasms
Male
Algorithms
Details
- Language :
- English
- ISSN :
- 2511-705X
- Volume :
- 63
- Issue :
- 1-02
- Database :
- MEDLINE
- Journal :
- Methods of information in medicine
- Publication Type :
- Academic Journal
- Accession number :
- 39137913
- Full Text :
- https://doi.org/10.1055/a-2385-1355