Evaluating the influence of data collector training for predictive risk of death models: an observational study
- Author
Latesh Poojara, Debbie Odlum, Ashwin Subramaniam, Anders Aneman, Jinghang Luo, Anthony S. McLean, Michele Thomson, Stephen Huang, Adam Howard, Arvind Rajamani, Ramanathan Lakshmanan, Gordon Flynn, Thodur Vinodh Madapusi, Andrew Simpson, J. J. Gatward, and Raju Pusapati
- Subjects
Data collection, Health Policy, Multilevel model, Australia, Bayes Theorem, Training effect, Intensive Care Units, Survey methodology, Surveys and Questionnaires, Intensive care, Humans, Medical physics, Observational study, Raw data, Quality assurance
- Abstract
Background: Severity-of-illness scoring systems are widely used for quality assurance and research. Although validated by trained data collectors, there are few data on the accuracy of real-world data collection practices.
Objective: To evaluate the influence of formal data collection training on the accuracy of scoring system data in intensive care units (ICUs).
Study design and methods: Quality assurance audit conducted using survey methodology principles. Between June and December 2018, an electronic document with details of three fictitious ICU patients was emailed to staff from 19 Australian ICUs, who voluntarily submitted data on a web-based data entry form. Their entries were used to generate severity-of-illness scores and risks of death (RoDs) for four scoring systems. The primary outcome was the variation of severity-of-illness scores and RoDs from a reference standard.
Results: 50 of 83 staff (60.2%) submitted data. Using Bayesian multilevel analysis, severity-of-illness scores and RoDs were found to be significantly higher for untrained staff than for trained staff. The mean (95% highest-density interval) overestimation in RoD due to the training effect was 0.24 (0.16, 0.31), 0.19 (0.09, 0.29) and 0.24 (0.10, 0.38) for patients 1, 2 and 3, respectively (Bayes factor >300, decisive evidence). Both groups (trained and untrained) showed coefficients of variation of up to 38.1%, indicating wide variability. Untrained staff made more errors in interpreting scoring system definitions.
Interpretation: In a fictitious patient dataset, data collection staff without formal training significantly overestimated severity-of-illness scores and RoDs compared with trained staff. Both groups exhibited wide variability. Strategies to improve practice may include providing adequate training for all data collection staff, refresher training for previously trained staff, and auditing of the raw data submitted by individual ICUs. The results of this simulated study require validation in real patients.
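The statistical comparison reported in the Results can be illustrated with a minimal sketch. The Python example below (using the PyMC and ArviZ libraries) is a hypothetical, simplified two-group version of the study's Bayesian multilevel analysis, with invented RoD values standing in for the submitted data; it shows how a posterior training effect and its 95% highest-density interval could be estimated, along with per-group coefficients of variation.

```python
import numpy as np
import pymc as pm
import arviz as az

# Hypothetical risk-of-death (RoD) submissions for one fictitious patient.
# These values are invented for illustration, not the study's data.
rng = np.random.default_rng(42)
rod_trained = rng.normal(0.30, 0.05, size=20).clip(0, 1)
rod_untrained = rng.normal(0.54, 0.08, size=30).clip(0, 1)

# Coefficient of variation (%), as reported per group in the Results.
def cv(x):
    return 100 * x.std(ddof=1) / x.mean()

print(f"CV trained: {cv(rod_trained):.1f}%, untrained: {cv(rod_untrained):.1f}%")

# Simplified two-group Bayesian model. The study used a multilevel model
# with ICU-level structure; that hierarchy is omitted here for brevity.
with pm.Model():
    mu_trained = pm.Normal("mu_trained", mu=0.5, sigma=0.5)
    mu_untrained = pm.Normal("mu_untrained", mu=0.5, sigma=0.5)
    sigma = pm.HalfNormal("sigma", sigma=0.2)
    pm.Normal("obs_trained", mu=mu_trained, sigma=sigma, observed=rod_trained)
    pm.Normal("obs_untrained", mu=mu_untrained, sigma=sigma,
              observed=rod_untrained)
    # Posterior overestimation in RoD attributable to lack of training.
    pm.Deterministic("training_effect", mu_untrained - mu_trained)
    idata = pm.sample(2000, tune=1000, chains=4, random_seed=42)

# Posterior mean and 95% highest-density interval of the training effect.
print(az.summary(idata, var_names=["training_effect"], hdi_prob=0.95))
```

A full reproduction would add varying intercepts per ICU and per data collector, and compute Bayes factors for the group difference; the sketch above only conveys the shape of the two-group comparison.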
- Published
- 2020