1. Assessing inter- and intra-rater reliability of movement scores and the effects of body-shape using a custom visualisation tool: an exploratory study
- Authors
Gwyneth B. Ross, Xiong Zhao, Nikolaus F. Troje, Steven L. Fischer, and Ryan B. Graham
- Subjects
Cohen’s kappa, Bodyweight bias, Movement screens, Motion and shape capture from sparse markers (MoSh), Sports medicine, RC1200-1245
- Abstract
Background: The literature shows conflicting results regarding inter- and intra-rater reliability, even for the same movement screen. The purpose of this study was to assess the inter- and intra-rater reliability of movement scores within and between sessions of expert assessors, and the effects of body shape on reliability during a movement screen, using custom online visualisation software.
Methods: Kinematic data from 542 athletes performing seven movement tasks were used to create animations (i.e., avatar representations) using motion and shape capture from sparse markers (MoSh). For each task, assessors viewed a total of 90 animations. Using a custom-developed visualisation tool, expert assessors completed two identical sessions in which they rated each animation on a scale of 1–10. The arithmetic mean of the weighted Cohen’s kappa for each task and day was calculated to test reliability.
Results: Across tasks, inter-rater reliability ranged from slight to fair agreement; intra-rater reliability was slightly better, with slight to moderate agreement. The average kappa values for intra-rater reliability within session with body manipulation, within session without body manipulation, and between sessions were 0.45, 0.37, and 0.35, respectively.
Conclusions: Based on these results, supplementary or alternative methods should be explored and are likely required to increase scoring objectivity and reliability, even among expert assessors. To help future researchers and practitioners, the custom visualisation software has been made publicly available.
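The weighted Cohen’s kappa used in the study penalises rater disagreements in proportion to their distance on the 1–10 rating scale, rather than treating all disagreements equally. As a minimal sketch (not the authors’ code; the rating data below are hypothetical), a pure-Python implementation with linear or quadratic weighting might look like:

```python
def weighted_kappa(r1, r2, k=10, weighting="linear"):
    """Weighted Cohen's kappa for two sets of ratings on an integer 1..k scale."""
    n = len(r1)
    # Observed rating-pair count matrix.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a - 1][b - 1] += 1
    # Row/column marginals give the chance-expected matrix.
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    exp = [[row[i] * col[j] / n for j in range(k)] for i in range(k)]
    # Disagreement weight: |i - j| / (k - 1), squared for quadratic weighting.
    def w(i, j):
        d = abs(i - j) / (k - 1)
        return d * d if weighting == "quadratic" else d
    num = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    den = sum(w(i, j) * exp[i][j] for i in range(k) for j in range(k))
    return 1.0 - num / den

# Hypothetical example: one assessor's scores for six animations in two sessions.
session1 = [7, 4, 9, 2, 6, 5]
session2 = [6, 4, 8, 3, 6, 5]
print(weighted_kappa(session1, session2))  # → 0.775
```

The verbal benchmarks in the results (“slight”, “fair”, “moderate” agreement) correspond to the conventional Landis–Koch interpretation bands for kappa values.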
- Published
- 2024