
The reliability of assessing the appropriateness of requested diagnostic tests

Authors :
Rianne Bindels
Jan W. J. van Wersch
P Pop
Ron Winkens
Arie Hasman
Source :
Medical Decision Making, 23(1), 31-37. SAGE Publications
Publication Year :
2003

Abstract

Despite its poor reliability, peer assessment is the traditional method for assessing the appropriateness of health care activities. This article describes the reliability of human assessment of the appropriateness of diagnostic test requests. The authors used a random selection of 1217 tests from 253 request forms submitted by general practitioners in the Maastricht region of the Netherlands. Three reviewers independently assessed the appropriateness of each requested test. Interrater kappa values ranged from 0.33 to 0.42, and intrarater kappa values ranged from 0.48 to 0.68. The joint reliability coefficient of the 3 reviewers was 0.66. This reliability is sufficient to review test ordering over a series of cases but not to make case-by-case assessments. Sixteen reviewers would be needed to obtain a joint reliability of 0.95. The authors conclude that there is substantial variation in judgments of what constitutes an appropriately requested diagnostic test and that this feedback method is not reliable enough for case-by-case assessment. Computer support may help make the peer review process more uniform.
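
The pooled figures in the abstract can be illustrated with the Spearman-Brown prophecy formula, a standard way to project how reliability grows when the judgments of k independent reviewers are averaged. The sketch below is only an assumption about the arithmetic: the single-rater value r = 0.40 is taken from the reported interrater kappa range (0.33 to 0.42), and the paper does not state which pooling coefficient it used, so this sketch need not reproduce its exact figure of 16 reviewers.

    import math

    def joint_reliability(r: float, k: int) -> float:
        """Projected reliability of the averaged judgment of k reviewers,
        each with single-rater reliability r (Spearman-Brown)."""
        return k * r / (1 + (k - 1) * r)

    def reviewers_needed(r: float, target: float) -> int:
        """Smallest panel size whose pooled judgment reaches the target
        reliability, obtained by inverting the Spearman-Brown formula."""
        k = target * (1 - r) / (r * (1 - target))
        return math.ceil(k)

    if __name__ == "__main__":
        r = 0.40  # assumed single-rater reliability, within the reported kappa range
        print(round(joint_reliability(r, 3), 2))  # ~0.67, close to the reported 0.66
        print(reviewers_needed(r, 0.95))          # panel size projected for 0.95

Under this assumption a 3-reviewer panel lands near the reported joint reliability of 0.66, while the projected panel size for 0.95 comes out larger than 16, suggesting the authors derived their reviewer count from a different (likely higher) per-rater coefficient, such as the intrarater values.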

Details

Language :
English
ISSN :
0272-989X
Volume :
23
Issue :
1
Database :
OpenAIRE
Journal :
Medical Decision Making
Accession number :
edsair.doi.dedup.....8210c79a86e3841db1f98d52109428e8