Rudolf N. Cardinal [0000-0002-8751-5167], Natalia Viani [0000-0003-2205-2322], Stephen Puntis [0000-0003-4397-2435], Julia Ive [0000-0002-3931-3392], Sumithra Velupillai [0000-0002-4178-2980], Joyce Kam, Robert Stewart [0000-0002-4435-6397], Lucia Yin [0000-0002-0814-5197], Angus Roberts [0000-0002-4570-9801], and Somain Verma [0000-0003-4756-8675]
Funder: EPSRC Healtex Feasibility Funding (grant EP/N027280/1): "Towards Shareable Data in Clinical Natural Language Processing: Generating Synthetic Electronic Health Records"
Funder: National Institute for Health Research (NIHR) Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King's College London
Funder: National Institute for Health Research Post-Doctoral Fellowship award (grant number PDF-2017-10-029)
Funder: Health Data Research UK
Funder: Swedish Research Council (2015-00359) / the Marie Skłodowska-Curie Actions

A serious obstacle to the development of Natural Language Processing (NLP) methods in the clinical domain is the limited accessibility of textual data. The mental health domain is particularly challenging, partly because clinical documentation relies heavily on free text that is difficult to de-identify completely. This problem could be tackled by using artificial medical data. In this work, we present an approach to generating artificial clinical documents. We apply this approach to discharge summaries from a large mental healthcare provider and to discharge summaries from an intensive care unit. We perform an extensive intrinsic evaluation in which we (1) apply several measures of text preservation; (2) measure how much the model memorises training data; and (3) estimate the clinical validity of the generated text through a human evaluation task. Furthermore, we perform an extrinsic evaluation by studying the impact of using artificial text in a downstream NLP text classification task. We found that using the artificial data as training data can lead to classification results comparable to those obtained with the original data. Additionally, the generation of artificial data can be successfully conditioned on only a small amount of information from the original data, which holds promise for reducing the risk that the artificial data retain rare information from the original data. This is an important finding for our long-term goal of being able to generate artificial clinical data that can be released to the wider research community and accelerate advances in developing computational methods that use healthcare data.
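
As a rough illustration of the extrinsic evaluation described above (and not the authors' actual implementation, task, or data), the following Python sketch trains the same simple classifier once on original text and once on artificial text, then compares performance on a shared held-out test set. The TF-IDF plus logistic regression pipeline and the toy corpora are assumptions for illustration only.

```python
# Minimal sketch of the extrinsic evaluation idea: train the same classifier
# once on original text and once on artificial text, then compare scores on a
# shared held-out test set. The pipeline and the toy corpora below are
# illustrative assumptions, not the paper's actual task, model, or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline


def train_and_score(train_texts, train_labels, test_texts, test_labels):
    """Fit a TF-IDF + logistic regression classifier; return macro-F1 on the test set."""
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(train_texts, train_labels)
    return f1_score(test_labels, clf.predict(test_texts), average="macro")


# Hypothetical placeholder corpora (label 1 = readmitted, 0 = not, for example).
original_train = ["patient discharged home in stable condition",
                  "patient readmitted after deterioration in mood"]
artificial_train = ["synthetic note: patient stable and discharged home",
                    "synthetic note: patient readmitted following relapse"]
train_labels = [0, 1]
test_texts = ["discharged home, condition stable",
              "readmitted due to relapse of symptoms"]
test_labels = [0, 1]

f1_original = train_and_score(original_train, train_labels, test_texts, test_labels)
f1_artificial = train_and_score(artificial_train, train_labels, test_texts, test_labels)
print(f"Macro-F1, trained on original text:   {f1_original:.3f}")
print(f"Macro-F1, trained on artificial text: {f1_artificial:.3f}")
```

In this setup, "comparable classification results" would correspond to the two scores being close when computed on realistically sized corpora for the downstream task.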