An automatic method to detect and track the glottal gap from high speed videoendoscopic images.
- Source :
- Biomedical engineering online [Biomed Eng Online] 2015 Oct 29; Vol. 14, pp. 100. Date of Electronic Publication: 2015 Oct 29.
- Publication Year :
- 2015
Abstract
- Background: Image-based analysis of vocal fold vibration plays an important role in the diagnosis of voice disorders. The analysis relies not only on direct observation of the video sequences, but also on an objective characterization of the phonation process by means of features extracted from the recorded images. Such analysis, however, requires a prior, accurate identification of the glottal gap, which is the most challenging step for any further automatic assessment of vocal fold vibration.
Methods: In this work, a complete framework to automatically segment and track the glottal area (or glottal gap) is proposed. The algorithm identifies a region of interest that is adapted over time and combines active contours with the watershed transform for the final delineation of the glottis; an automatic procedure for synthesizing different videokymograms is also proposed.
Results: Thanks to the ROI implementation, the technique is robust to camera shifting, and objective tests demonstrated the effectiveness and performance of the approach in the most challenging scenario, namely when the vocal folds close incompletely.
Conclusions: The novelty of the proposed algorithm lies in the use of temporal information to identify an adaptive ROI and the use of watershed merging combined with active contours for glottis delimitation. Additionally, an automatic procedure for synthesizing multiline VKGs by identifying the main glottal axis is developed.
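The pipeline the abstract describes can be illustrated with a minimal NumPy sketch on a synthetic high-speed sequence: a temporal-variance map stands in for the paper's adaptive ROI (the glottal gap opens and closes cyclically, so pixels inside it vary strongly over time), a simple dark-region threshold stands in for the watershed-plus-active-contour delineation, and one scan line per frame is stacked into a single-line kymogram. All function names, parameters, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def glottal_roi(frames, keep_frac=0.1):
    """Coarse bounding box from per-pixel temporal variance.

    Stand-in for the paper's adaptive ROI: pixels crossed by the
    oscillating glottal gap have high variance over time.
    """
    var = frames.var(axis=0)
    ys, xs = np.nonzero(var >= keep_frac * var.max())
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def segment_gap(frame, roi, thresh=0.5):
    """Threshold dark pixels inside the ROI.

    The paper instead merges watershed regions refined with active
    contours; a fixed threshold is used here only for illustration.
    """
    y0, y1, x0, x1 = roi
    sub = frame[y0:y1, x0:x1]
    return sub < thresh * sub.mean()

def kymogram(frames, roi, line_frac=0.5):
    """Stack one scan line per frame into a single-line videokymogram."""
    y0, y1, x0, x1 = roi
    row = y0 + int((y1 - y0) * line_frac)
    return np.stack([f[row, x0:x1] for f in frames])

# Synthetic "high-speed" sequence: a dark elliptical gap whose width
# oscillates over time on a bright background.
T, H, W = 32, 64, 64
yy, xx = np.mgrid[0:H, 0:W]
frames = np.ones((T, H, W))
for t in range(T):
    w = 3 + 5 * abs(np.sin(2 * np.pi * t / T))  # oscillating half-width
    gap = ((xx - W / 2) / w) ** 2 + ((yy - H / 2) / 18) ** 2 < 1
    frames[t][gap] = 0.1

roi = glottal_roi(frames)
mask = segment_gap(frames[8], roi)  # gap mask near maximum opening
vkg = kymogram(frames, roi)         # (time, position) kymogram
```

In the resulting kymogram, the dark band at the central scan line widens and narrows with the oscillation, which is the cue the multiline VKG synthesis in the paper exploits along the glottal main axis.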
Details
- Language :
- English
- ISSN :
- 1475-925X
- Volume :
- 14
- Database :
- MEDLINE
- Journal :
- Biomedical engineering online
- Publication Type :
- Academic Journal
- Accession number :
- 26510707
- Full Text :
- https://doi.org/10.1186/s12938-015-0096-3