
Transformer Models and Convolutional Networks with Different Activation Functions for Swallow Classification Using Depth Video Data.

Authors :
Lai, Derek Ka-Hei
Cheng, Ethan Shiu-Wang
So, Bryan Pak-Hei
Mao, Ye-Jiao
Cheung, Sophia Ming-Yan
Cheung, Daphne Sze Ki
Wong, Duo Wai-Chi
Cheung, James Chung-Wai
Source :
Mathematics (2227-7390). Jul 2023, Vol. 11 Issue 14, p3081. 22p.
Publication Year :
2023

Abstract

Dysphagia is a common geriatric syndrome that might induce serious complications and death. Standard diagnostics using the Videofluoroscopic Swallowing Study (VFSS) or Fiberoptic Endoscopic Evaluation of Swallowing (FEES) are expensive and expose patients to risks, while bedside screening is subjective and might lack reliability. An affordable and accessible instrumented screening method is necessary. This study aimed to evaluate the classification performance of Transformer models and convolutional networks in identifying swallowing and non-swallowing tasks from depth video data. Different activation functions (ReLU, LeakyReLU, GELU, ELU, SiLU, and GLU) were then evaluated on the best-performing model. Sixty-five healthy participants (n = 65) were invited to perform swallowing tasks (eating a cracker and drinking water) and non-swallowing tasks (taking a deep breath and pronouncing the vowels "/eɪ/", "/iː/", "/aɪ/", "/oʊ/", and "/uː/"). Swallowing and non-swallowing were classified by Transformer models (TimeSFormer and Video Vision Transformer (ViViT)) and by convolutional neural networks (SlowFast, X3D, and R(2+1)D). In general, the convolutional neural networks outperformed the Transformer models. X3D was the best model, with good-to-excellent performance (F1-score: 0.920; adjusted F1-score: 0.885) in classifying swallowing and non-swallowing conditions. Moreover, X3D with its default activation function (ReLU) produced the best results, although LeakyReLU performed better in the deep-breathing and "/aɪ/"-pronunciation tasks. Future studies should consider collecting more data for pretraining and developing a hyperparameter-tuning strategy for activation functions, as well as addressing the high dimensionality of video data for Transformer models. [ABSTRACT FROM AUTHOR]
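For reference, the activation functions compared in the study (ReLU, LeakyReLU, GELU, ELU, SiLU, and GLU) have standard definitions. The sketch below gives scalar reference implementations of those formulas; it is illustrative only and is not the authors' code (deep-learning frameworks provide vectorized equivalents, and the exact variants used in the paper may differ).

```python
import math

def sigmoid(x):
    """Logistic sigmoid, used by SiLU and GLU below."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """ReLU: max(0, x)."""
    return max(0.0, x)

def leaky_relu(x, negative_slope=0.01):
    """LeakyReLU: x for x >= 0, else a small negative slope times x."""
    return x if x >= 0 else negative_slope * x

def elu(x, alpha=1.0):
    """ELU: x for x >= 0, else alpha * (exp(x) - 1)."""
    return x if x >= 0 else alpha * (math.exp(x) - 1.0)

def silu(x):
    """SiLU (also called swish): x * sigmoid(x)."""
    return x * sigmoid(x)

def gelu(x):
    """Exact GELU using the Gaussian CDF: x * Phi(x)."""
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def glu(a, b):
    """GLU gates one input channel by the sigmoid of another: a * sigmoid(b)."""
    return a * sigmoid(b)
```

Note that, unlike the others, GLU is a gating mechanism over a pair of channels rather than a pointwise function of a single input, which is why it takes two arguments here.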

Details

Language :
English
ISSN :
22277390
Volume :
11
Issue :
14
Database :
Academic Search Index
Journal :
Mathematics (2227-7390)
Publication Type :
Academic Journal
Accession number :
169713114
Full Text :
https://doi.org/10.3390/math11143081