
Hierarchical transformer speech depression detection model research based on Dynamic window and Attention merge.

Authors :
Yue, Xiaoping
Zhang, Chunna
Wang, Zhijian
Yu, Yang
Cong, Shengqiang
Shen, Yuming
Zhao, Jinchi
Source :
PeerJ Computer Science; Sep2024, p1-20, 20p
Publication Year :
2024

Abstract

Speech-based depression detection is widely applied because speech is easy to acquire and rich in emotional cues. However, effectively segmenting and integrating depressed speech segments remains challenging, and repeated merging can blur the original information; these problems diminish the effectiveness of existing models. This article proposes a hierarchical Transformer model for speech depression detection based on a dynamic window and attention merge, abbreviated as DWAM-Former. DWAM-Former uses a Learnable Speech Split module (LSSM) to effectively separate the phonemes and words within an entire speech segment. An Adaptive Attention Merge module (AAM) is then introduced to generate a representative feature for each phoneme and word in the sentence. DWAM-Former also associates the original feature information with the merged features through a Variable-Length Residual module (VL-RM), reducing the feature loss caused by multiple merges. DWAM-Former achieves highly competitive results on the depression detection dataset DAIC-WOZ, obtaining an MF1 score of 0.788, a 7.5% improvement over previous research. [ABSTRACT FROM AUTHOR]
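The abstract's "attention merge" idea (condensing a variable-length run of frame features into one representative vector) can be illustrated with a minimal attention-pooling sketch. This is not the authors' actual AAM implementation, which is described only in the full paper; the function and variable names here are hypothetical, and a single learnable scoring vector `w` stands in for whatever scoring network the module uses:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_merge(frames, w):
    """Merge a variable-length segment of frame features (T, d) into one
    representative vector (d,) via attention-weighted pooling.
    `w` (d,) plays the role of a learnable scoring vector (an assumption,
    not the paper's exact design)."""
    scores = frames @ w        # one relevance score per frame, shape (T,)
    alpha = softmax(scores)    # attention weights, sum to 1
    return alpha @ frames      # weighted sum of frames, shape (d,)

rng = np.random.default_rng(0)
segment = rng.normal(size=(5, 8))   # e.g. 5 frames of 8-dim features
w = rng.normal(size=8)
merged = attention_merge(segment, w)
print(merged.shape)
```

Because the weights are computed from the content of each frame, informative frames contribute more to the merged representation than a plain average would allow, which is the motivation for attention-based merging over mean pooling.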

Details

Language :
English
ISSN :
2376-5992
Database :
Complementary Index
Journal :
PeerJ Computer Science
Publication Type :
Academic Journal
Accession number :
180255714
Full Text :
https://doi.org/10.7717/peerj-cs.2348