
Improved Digital Therapy for Developmental Pediatrics Using Domain-Specific Artificial Intelligence: Machine Learning Study

Authors :
Washington, Peter
Kalantarian, Haik
Kent, John
Husic, Arman
Kline, Aaron
Leblanc, Emilie
Hou, Cathy
Mutlu, Onur Cezmi
Dunlap, Kaitlyn
Penev, Yordan
Varma, Maya
Stockham, Nate Tyler
Chrisman, Brianna
Paskov, Kelley
Sun, Min Woo
Jung, Jae-Yoon
Voss, Catalin
Haber, Nick
Wall, Dennis Paul
Source :
JMIR Pediatrics and Parenting 5.2 (2022): e26760
Publication Year :
2020

Abstract

Background: Automated emotion classification could aid those who struggle to recognize emotions, including children with developmental behavioral conditions such as autism. However, most computer vision emotion recognition models are trained on adult emotional expressions and therefore underperform when applied to child faces.

Objective: We designed a strategy to gamify the collection and labeling of child emotion-enriched images to boost the performance of automatic child emotion recognition models to a level closer to what will be needed for digital health care approaches.

Methods: We leveraged our prototype therapeutic smartphone game, GuessWhat, which was designed in large part for children with developmental and behavioral conditions, to gamify the secure collection of video data of children expressing a variety of emotions prompted by the game. Independently, we created a secure web interface, called HollywoodSquares, to gamify the human labeling effort, tailored for use by any qualified labeler. We gathered and labeled 2155 videos, 39,968 emotion frames, and 106,001 labels on all images. With this drastically expanded pediatric emotion-centric database (>30 times larger than existing public pediatric emotion data sets), we trained a convolutional neural network (CNN) computer vision classifier of happy, sad, surprised, fearful, angry, disgust, and neutral expressions evoked by children.

Results: The classifier achieved a 66.9% balanced accuracy and 67.4% F1-score on the entirety of the Child Affective Facial Expression (CAFE) data set, as well as a 79.1% balanced accuracy and 78% F1-score on CAFE Subset A, a subset containing at least 60% human agreement on emotion labels. This performance is at least 10% higher than that of all previously developed classifiers evaluated against CAFE, the best of which reached a 56% balanced accuracy even when combining "anger" and "disgust" into a single class.
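The results above are reported as balanced accuracy and F1-score, metrics chosen because emotion classes in facial-expression data sets such as CAFE are typically imbalanced. As a rough illustration of what those metrics measure (a minimal pure-Python sketch, not code from the study), balanced accuracy is the unweighted mean of per-class recall, and macro F1 is the unweighted mean of per-class F1:

```python
from collections import defaultdict


def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall: each class contributes equally,
    regardless of how many examples it has."""
    per_class = defaultdict(lambda: [0, 0])  # class -> [correct, total]
    for t, p in zip(y_true, y_pred):
        per_class[t][1] += 1
        if t == p:
            per_class[t][0] += 1
    recalls = [correct / total for correct, total in per_class.values()]
    return sum(recalls) / len(recalls)


def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    labels = set(y_true) | set(y_pred)
    f1s = []
    for lbl in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lbl and p == lbl)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != lbl and p == lbl)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == lbl and p != lbl)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```

For example, with true labels `["happy", "happy", "sad", "sad"]` and predictions `["happy", "sad", "sad", "sad"]`, per-class recall is 0.5 for "happy" and 1.0 for "sad", giving a balanced accuracy of 0.75 even though raw accuracy is also 0.75 here; the two diverge as class sizes become unequal. (Whether the paper used macro or weighted F1 averaging is not stated in this abstract.)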

Details

Database :
arXiv
Journal :
JMIR Pediatrics and Parenting 5.2 (2022): e26760
Publication Type :
Report
Accession number :
edsarx.2012.08678
Document Type :
Working Paper