Association of Data Integration Technologies With Intensive Care Clinician Performance

Authors :
Lin, Ying Ling
Trbovich, Patricia
Kolodzey, Lauren
Nickel, Cheri
Guerguerian, Anne-Marie
Source :
JAMA Network Open
Publication Year :
2019
Publisher :
American Medical Association, 2019.

Abstract

Key Points

Question: Do data integration and visualization technologies alleviate clinicians’ cognitive workload and alter decision-making performance?

Findings: In this systematic review and meta-analysis of 20 studies, data integration and visualization technologies were associated with improvements in self-reported performance, mental and temporal demand, and effort compared with paper-based recording systems, but no specific type was superior to the others. Only 10% of the studies evaluated these technologies in clinical settings.

Meaning: Data integration and visualization technologies offer promising features to improve decision making by clinicians in the intensive care setting, but standardized test protocols are needed to generate clinician-centered evaluations and to accelerate screening of technologies that support data-driven decision making.

This systematic review and meta-analysis summarizes the published evidence on the association of user-centered data integration and visualization technologies with intensive care clinician performance.

Importance: Sources of data in the intensive care setting are increasing exponentially, but the benefits of displaying multiparametric, high-frequency data are unknown. Decision making may not benefit from this technology if clinicians remain cognitively overburdened by poorly designed data integration and visualization technologies (DIVTs).

Objective: To systematically review and summarize the published evidence on the association of user-centered DIVTs with intensive care clinician performance.

Data Sources: MEDLINE, Embase, Cochrane Central Register of Controlled Trials, PsycINFO, and Web of Science were searched in May 2014 and January 2018.

Study Selection: Studies had to meet 3 requirements: (1) the study tested a viable DIVT, (2) the participants were intensive care clinicians, and (3) the study reported quantitative results associated with decision making in an intensive care setting.

Data Extraction and Synthesis: Of 252 records screened, 20 studies, published from 2004 to 2016, were included. A human factors framework for assessing health technologies was applied to measure study completeness, and the Quality Assessment Instrument was used to assess study quality. PRISMA guidelines were adapted to conduct the systematic review and meta-analysis.

Main Outcomes and Measures: Study completeness and quality; clinician performance; physical, mental, and temporal demand; effort; frustration; time to decision; and decision accuracy.

Results: Of the 20 included studies, 16 were experimental studies with 410 intensive care clinician participants and 4 were survey-based studies with 1511 respondents. Scores for study completeness ranged from 27 to 43 (maximum score, 47), and scores for study quality ranged from 46 to 79 (maximum score, 90). DIVTs were evaluated in clinical settings in 2 of the 20 studies (10%); time to decision was measured in 14 studies (70%); and decision accuracy was measured in 11 studies (55%). Measures of cognitive workload pooled in the meta-analysis suggested that any DIVT was an improvement over paper-based data in terms of self-reported performance, mental and temporal demand, and effort. With a maximum score of 22, median (IQR) mental demand scores were 10 (7-13) for electronic displays, 8 (6.0-11.5) for tabular displays, and 8 (6-12) for novel visualizations, compared with 17 (14-19) for paper. Median (IQR) temporal demand scores were likewise lower for all electronic visualizations than for paper: 8 (6-11) for electronic displays, 7 (6-11) for tabular and bar displays, and 7 (5-11) for novel visualizations, compared with 16 (14.3-19.0) for paper. Median (IQR) performance scores improved for all electronic visualizations compared with paper (a lower score indicates better self-reported performance): 6 (3-11) for electronic displays, 6 (4-11) for tabular and bar displays, and 6 (4-11) for novel visualizations, compared with 14 (11-16) for paper. The frustration and physical demand domains of cognitive workload did not change, and differences between electronic displays were not significant.

Conclusions and Relevance: This review suggests that DIVTs are associated with increased integration and consistency of data. Much work remains to identify which visualizations effectively reduce cognitive workload to enhance decision making based on intensive care data. Standardizing human factors testing by developing a repository of open-access benchmarked test protocols, with a shared set of outcome measures, scenarios, and data sets, may accelerate the design and selection of the most appropriate DIVT.
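For readers who want the pooled workload scores side by side, the following minimal Python sketch simply tabulates the median (IQR) values transcribed from the Results above. The display-type labels are the review's own groupings; no new data or analysis is introduced, and the script is purely illustrative.

# Tabulate the median (IQR) cognitive workload scores reported in the
# Results section (maximum score of 22; lower scores indicate less
# demand, and for the performance domain, better self-reported
# performance). Values are copied verbatim from the abstract.
scores = {
    "mental demand": {
        "electronic display":  (10, (7, 13)),
        "tabular display":     (8, (6.0, 11.5)),
        "novel visualization": (8, (6, 12)),
        "paper":               (17, (14, 19)),
    },
    "temporal demand": {
        "electronic display":  (8, (6, 11)),
        "tabular/bar display": (7, (6, 11)),
        "novel visualization": (7, (5, 11)),
        "paper":               (16, (14.3, 19.0)),
    },
    "performance (lower is better)": {
        "electronic display":  (6, (3, 11)),
        "tabular/bar display": (6, (4, 11)),
        "novel visualization": (6, (4, 11)),
        "paper":               (14, (11, 16)),
    },
}

for domain, displays in scores.items():
    print(f"\n{domain}")
    for display, (median, (q1, q3)) in displays.items():
        print(f"  {display:<30} median {median:>4}   IQR {q1}-{q3}")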

Details

Language :
English
ISSN :
2574-3805
Volume :
2
Issue :
5
Database :
OpenAIRE
Journal :
JAMA Network Open
Accession number :
edsair.pmid..........064f3b6cd9c38d7aa5b49e78ea59402e