Visual Speech Recognition with Lightweight Psychologically Motivated Gabor Features
- Authors
Xuejie Zhang, Chengxiang Gao, Amir Hussain, Andrew Abel, Roger Watt, Yan Xu, and Leslie S. Smith
- Subjects
Computer science, speech recognition, feature extraction, image processing, lip reading, Gabor features, discrete cosine transform, explainable, Grid corpus, curse of dimensionality
- Abstract
Extraction of relevant lip features is of continuing interest in the visual speech domain. End-to-end feature extraction can produce good results, but at the cost of features that are difficult for humans to comprehend and relate to. We present a new, lightweight feature extraction approach, motivated by human-centric, glimpse-based psychological research into facial barcodes, and demonstrate that these simple, easy-to-extract 3D geometric features (produced using Gabor-based image patches) can successfully be used for speech recognition with LSTM-based machine learning. The approach extracts low-dimensionality lip parameters with minimal processing. One key difference between these Gabor-based features and alternatives such as the traditional DCT, or the currently fashionable CNN features, is that they are human-centric: they can be visualised and analysed by humans, which makes the results easier to explain and visualise. They can also be used for reliable speech recognition, as demonstrated on the Grid corpus. For overlapping speakers, our lightweight system achieved a recognition rate of over 82%, which compares well to less explainable features in the literature.
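The abstract does not give the authors' exact Gabor configuration, so the sketch below is only a minimal illustration of the underlying technique: it builds a real-valued 2D Gabor kernel (a cosine carrier under a Gaussian envelope) with NumPy and correlates it with a synthetic striped image patch. All parameter values and function names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lambd=6.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: cosine carrier modulated by a Gaussian envelope.
    Illustrative parameter defaults, not the paper's configuration."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the carrier varies along orientation theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier

def filter_patch(patch, kernel):
    """'Valid'-mode 2D correlation of an image patch with the kernel."""
    kh, kw = kernel.shape
    ph, pw = patch.shape
    out = np.zeros((ph - kh + 1, pw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(patch[i:i + kh, j:j + kw] * kernel)
    return out

# A patch of horizontal bars whose spacing roughly matches the carrier wavelength.
patch = np.zeros((32, 32))
patch[::6, :] = 1.0

# A horizontally tuned filter (theta = pi/2) responds strongly to horizontal bars;
# a vertically tuned one (theta = 0) responds weakly.
response = filter_patch(patch, gabor_kernel(theta=np.pi / 2))
print(response.shape)  # (18, 18)
```

An oriented bank of such kernels, applied at different orientations and scales, yields the kind of low-dimensional, directly visualisable responses the abstract attributes to Gabor-based features, in contrast to learned CNN filters.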
- Published
- 2020