Inter-rater reliability in the Paediatric Observation Priority Score (POPS).
- Source :
- Archives of Disease in Childhood; May 2018, Vol. 103 Issue 5, p458-462, 5p
- Publication Year :
- 2018
Abstract
- Objective: The primary objective of this study was to determine the level of inter-rater reliability between nursing staff for the Paediatric Observation Priority Score (POPS).
- Design: Retrospective observational study.
- Setting: Single-centre paediatric emergency department.
- Participants: 12 participants from a convenience sample of 21 nursing staff.
- Interventions: Participants were shown video footage of three pre-recorded paediatric assessments and asked to record their own POPS for each child. The participants were blinded to the original, in-person POPS. Further data were gathered via a questionnaire to determine each participant's level of training and experience with the POPS prior to undertaking this study.
- Main Outcome Measures: Inter-rater reliability among participants' scoring of the POPS.
- Results: The overall kappa value was 0.74 (95% CI 0.605 to 0.865) for case 1, 1 (perfect agreement) for case 2, and 0.66 (95% CI 0.58 to 0.744) for case 3.
- Conclusion: This study suggests there is good inter-rater reliability between different nurses' use of POPS in assessing sick children in the emergency department. [ABSTRACT FROM AUTHOR]
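For readers unfamiliar with multi-rater agreement statistics, the sketch below shows one common way an overall kappa across many raters can be computed: Fleiss' kappa over a subjects-by-categories count matrix. This is a generic illustration only; the paper does not specify its exact kappa variant or software, and the example data here are invented, not the study's ratings.

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for a subjects x categories count matrix.

    ratings[i, j] = number of raters who assigned subject i to category j.
    Every row must sum to the same number of raters.
    """
    ratings = np.asarray(ratings, dtype=float)
    n_subjects = ratings.shape[0]
    n_raters = ratings[0].sum()
    # Proportion of all assignments falling in each category
    p_j = ratings.sum(axis=0) / (n_subjects * n_raters)
    # Observed agreement for each subject, then averaged
    P_i = (np.square(ratings).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar = P_i.mean()
    # Chance agreement expected from the category proportions
    P_e = np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 5 raters score 3 children into 2 priority bands.
# Unanimous agreement on every child yields kappa = 1, matching the
# "perfect agreement" reported for case 2 in studies of this kind.
print(fleiss_kappa([[5, 0], [0, 5], [5, 0]]))  # → 1.0
```

Values below 1 indicate agreement only partially above chance; negative values indicate agreement worse than chance.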
Details
- Language :
- English
- ISSN :
- 0003-9888
- Volume :
- 103
- Issue :
- 5
- Database :
- Complementary Index
- Journal :
- Archives of Disease in Childhood
- Publication Type :
- Academic Journal
- Accession number :
- 129319258
- Full Text :
- https://doi.org/10.1136/archdischild-2017-314165