Haberfehlner, Helga (author), van de Ven, Shankara S. (author), van der Burg, Sven A. (author), Huber, Florian (author), Georgievska, Sonja (author), Aleo, Ignazio (author), Harlaar, J. (author), Bonouvrié, Laura A. (author), van der Krogt, Marjolein M. (author), and Buizer, Annemieke I. (author)
Introduction: Video-based clinical rating plays an important role in assessing dystonia and monitoring the effect of treatment in dyskinetic cerebral palsy (CP). However, evaluation by clinicians is time-consuming, and the quality of rating depends on experience. The aim of the current study is to provide a proof of concept for a machine learning approach that automatically scores dystonia from 2D stick figures extracted from videos. Model performance was compared to human performance.
Methods: A total of 187 video sequences of 34 individuals with dyskinetic CP (8–23 years, all non-ambulatory) were filmed at rest during lying and supported sitting. Videos were scored by three raters according to the Dyskinesia Impairment Scale (DIS) for arm and leg dystonia (scores normalized to range from 0 to 1). Pixel coordinates of the left and right wrist, elbow, shoulder, hip, knee, and ankle were extracted using DeepLabCut, an open-source toolbox built on a pose estimation algorithm. Within a subset, tracking accuracy was assessed for a pretrained human model and for models trained with an increasing number of manually labeled frames. The mean absolute error (MAE) between DeepLabCut's predicted body-point positions and the manual labels was calculated. Subsequently, movement and position features were computed from the extracted body-point coordinates and fed into a Random Forest Regressor to train a model to predict the clinical scores. The performance of models trained on data from one rater, evaluated by MAEs (model vs. rater), was compared to inter-rater accuracy.
Results: A tracking accuracy of 4.5 pixels (approximately 1.5 cm) could be achieved by adding 15–20 manually labeled frames per video. The MAEs of the trained models were 0.21 ± 0.15 for arm dystonia and 0.14 ± 0.10 for leg dystonia (normalized DIS scores); the corresponding inter-rater MAEs were 0.21 ± 0.22 and 0.16 ± 0.20.
Conclusion: This proof-of-concept study shows that machine learning-based scoring of dystonia from video-derived body-point coordinates is feasible in dyskinetic CP, with model accuracy approaching inter-rater accuracy.
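The pipeline described above (tracking accuracy as pixel-level MAE between predicted and manually labeled body points, then movement/position features fed to a Random Forest Regressor predicting normalized DIS scores) can be sketched as follows. This is a minimal illustration only: the keypoint array shapes, the summary-statistic features, and the random stand-in data are assumptions, not the study's actual feature set or code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

def tracking_mae(pred, labels):
    """MAE in pixels between predicted and manually labeled body points.
    pred, labels: (n_frames, n_keypoints, 2) arrays of x/y pixel coordinates."""
    return float(np.mean(np.abs(pred - labels)))

def simple_features(coords):
    """Illustrative movement/position features from one video's trajectories.
    coords: (n_frames, n_keypoints, 2); the study's real features may differ."""
    vel = np.diff(coords, axis=0)              # frame-to-frame displacement
    speed = np.linalg.norm(vel, axis=-1)       # (n_frames - 1, n_keypoints)
    return np.concatenate([
        speed.mean(axis=0),                    # mean speed per keypoint
        speed.std(axis=0),                     # speed variability per keypoint
        coords.std(axis=(0, 2)),               # positional spread per keypoint
    ])

# Stand-in data: 20 "videos" of 100 frames x 12 body points (wrists, elbows,
# shoulders, hips, knees, ankles, left and right) with random trajectories.
rng = np.random.default_rng(0)
videos = [rng.normal(size=(100, 12, 2)) for _ in range(20)]
X = np.stack([simple_features(v) for v in videos])
y = rng.uniform(0.0, 1.0, size=20)             # stand-in normalized DIS scores

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(mean_absolute_error(y, model.predict(X)))  # model-vs-rater style MAE
```

In the study itself, the model-vs-rater MAE would be computed on held-out videos against one rater's DIS scores and compared with the MAE between pairs of human raters.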