1. Data Quality of Longitudinally Collected Patient-Reported Outcomes After Thoracic Surgery: Comparison of Paper- and Web-Based Assessments
- Authors
Yuxian Nie, Qiuling Shi, Xing Wei, Wei Dai, Qingsong Yu, Wei Xu, Hongfan Yu, and Yang Pu
- Subjects
patient-reported outcome (PRO), data quality, MDASI-LC, symptoms, postoperative care, thoracic surgery, lung cancer, quality of life, incidence, randomized controlled trial, observational study, generalized estimating equation, Health Informatics, clinical research, data accuracy, Internet, Humans
- Abstract
Background: High-frequency patient-reported outcome (PRO) assessments are used to measure patients' symptoms after surgery in surgical research; however, the quality of such longitudinal PRO data has seldom been discussed.

Objective: The aim of this study was to identify factors influencing data quality and to profile error trajectories of data collected longitudinally via paper-and-pencil (P&P) or web-based assessment (electronic PRO [ePRO]) after thoracic surgery.

Methods: We extracted longitudinal PRO data from 678 patients scheduled for lung surgery, drawn from an observational study (n=512) and a randomized clinical trial (n=166) evaluating different perioperative care strategies. PROs were assessed with the MD Anderson Symptom Inventory Lung Cancer Module (MDASI-LC) and the single-item Quality of Life Scale before surgery and then daily after surgery until discharge or up to 14 days of hospitalization. Patient compliance and data errors were identified and compared between P&P and ePRO. A generalized estimating equation (GEE) model and a 2-piecewise model were used to describe trajectories of error incidence over time and to identify risk factors.

Results: Of the 678 patients, 629 had at least 2 PRO assessments: 440 completed 3347 P&P assessments and 189 completed 1291 ePRO assessments. In total, 49.4% of patients had at least one error, comprising (1) missing items (64.69%, 1070/1654), (2) modifications without signatures (27.99%, 463/1654), (3) selection of multiple options (3.02%, 50/1654), (4) missing patient signatures (2.54%, 42/1654), (5) missing researcher signatures (1.45%, 24/1654), and (6) missing completion dates (0.30%, 5/1654). Patients who completed ePRO had fewer errors than those who completed P&P assessments (ePRO: 30.2% [57/189] vs P&P: 57.7% [254/440]; P

Conclusions: Compared with P&P, ePRO can improve the data quality of longitudinally collected PROs. However, ePRO-related sampling bias needs to be considered when designing clinical research that uses longitudinal PROs as major outcomes.
- Published
- 2021