
Multi-query multi-head attention pooling and Inter-topK penalty for speaker verification

Authors :
Zhao, Miao
Ma, Yufeng
Ding, Yiwei
Zheng, Yu
Liu, Min
Xu, Minqiang
Publication Year :
2021

Abstract

This paper describes the multi-query multi-head attention (MQMHA) pooling and inter-topK penalty methods, which were first proposed in our submitted system description for the VoxCeleb Speaker Recognition Challenge (VoxSRC) 2021. Most multi-head attention pooling mechanisms either attend to the whole feature through multiple heads or attend to several split parts of the whole feature. Our proposed MQMHA combines both mechanisms and gains more diversified information. Margin-based softmax loss functions are commonly adopted to obtain discriminative speaker representations. To further enhance inter-class discriminability, we propose a method that adds an extra inter-topK penalty on the most easily confused speakers. By adopting both MQMHA and the inter-topK penalty, we achieved state-of-the-art performance on all of the public VoxCeleb test sets.

Comment: submitted to ICASSP 2022
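The two ideas in the abstract can be illustrated with a small numeric sketch. This is not the authors' implementation: the function names, the random "learned" queries, and the specific margin values below are assumptions made for illustration only. The first function splits the feature dimension into heads and attends to each part with several independent queries (combining the "multiple heads over the whole feature" and "split parts" mechanisms); the second adds an extra margin to the top-K most confusable non-target classes on top of the usual additive angular margin on the target class.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mqmha_pool(feats, n_heads=4, n_queries=2, rng=None):
    """Multi-query multi-head attention pooling (illustrative sketch).

    feats: (T, D) frame-level features. The feature dimension is split
    into n_heads parts; each part is attended over time by n_queries
    independent queries (random here -- in a trained model these are
    learned parameters), and the weighted means are concatenated.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    T, D = feats.shape
    assert D % n_heads == 0
    d = D // n_heads
    pooled = []
    for h in range(n_heads):
        part = feats[:, h * d:(h + 1) * d]      # (T, d) split part
        queries = rng.standard_normal((n_queries, d))
        scores = part @ queries.T               # (T, Q)
        weights = softmax(scores, axis=0)       # attention over time
        for q in range(n_queries):
            pooled.append(weights[:, q] @ part) # weighted mean, (d,)
    return np.concatenate(pooled)               # (n_heads * n_queries * d,)

def inter_topk_logits(cosines, label, m=0.2, m_inter=0.06, k=5):
    """Inter-topK penalty on top of an additive angular margin (sketch).

    cosines: (C,) cosine similarities to all class centers. The target
    class gets the usual angular margin m; the k highest-scoring
    non-target classes (the most confusable speakers) get an extra
    positive margin m_inter, increasing the loss they contribute.
    Margin values here are placeholders, not the paper's settings.
    """
    logits = cosines.copy()
    logits[label] = np.cos(np.arccos(np.clip(cosines[label], -1.0, 1.0)) + m)
    non_target = np.delete(np.arange(len(cosines)), label)
    topk = non_target[np.argsort(cosines[non_target])[-k:]]
    logits[topk] = cosines[topk] + m_inter      # penalize confusable classes
    return logits
```

For example, pooling a (50, 16) feature map with 4 heads and 2 queries yields a 4 × 2 × 4 = 32-dimensional embedding, and in `inter_topk_logits` the target logit shrinks while the top-K non-target logits grow, which is what sharpens the inter-class decision boundary.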

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2110.05042
Document Type :
Working Paper