1. Contrastive Adversarial Domain Adaptation Networks for Speaker Recognition
- Author
- Man-Wai Mak, Jen-Tzung Chien, and Longxin Li
- Subjects
- Computer science, Speech recognition, Speaker recognition, Feature extraction, Feature (machine learning), Posterior probability, Recognition (psychology), Artificial intelligence, Neural networks (computer), Learning, Computer networks and communications, Computer science applications, Software
- Abstract
Domain adaptation aims to reduce the mismatch between the source and target domains. A domain adversarial network (DAN) has recently been proposed to incorporate adversarial learning into deep neural networks to create a domain-invariant space. However, a major drawback of DAN is that it is difficult to find the domain-invariant space using a single feature extractor. In this article, we propose to split the feature extractor into two contrastive branches, with one branch responsible for class-dependence in the latent space and the other focusing on domain-invariance. The feature extractor achieves these contrastive goals by sharing the first and last hidden layers but possessing decoupled branches in the middle hidden layers. To encourage the feature extractor to produce class-discriminative embedded features, the label predictor is adversarially trained to produce equal posterior probabilities across all of its outputs instead of one-hot outputs. We refer to the resulting domain adaptation network as the "contrastive adversarial domain adaptation network (CADAN)." We evaluated the domain-invariance of the embedded features via a series of speaker identification experiments under both clean and noisy conditions. Results demonstrate that the embedded features produced by CADAN lead to a 33% improvement in speaker identification accuracy compared with the conventional DAN.
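As an illustration of the architecture described in the abstract, the snippet below is a minimal PyTorch-style sketch of a feature extractor with shared first and last hidden layers, two decoupled middle branches, and a uniform-posterior target for adversarially training the label predictor. All layer sizes, module names, and the way the two branches are merged back into the shared output layer are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a dual-branch (contrastive) feature extractor and the
# uniform-posterior adversarial loss. PyTorch is assumed; dimensions and the
# concatenate-then-project merge of the two branches are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContrastiveFeatureExtractor(nn.Module):
    """Shared first/last hidden layers with two decoupled middle branches."""

    def __init__(self, in_dim=600, hid_dim=512, emb_dim=200):
        super().__init__()
        self.shared_in = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        # Branch A: encouraged to retain class (speaker) information.
        self.class_branch = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU())
        # Branch B: encouraged to be domain-invariant.
        self.domain_branch = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU())
        # Shared last hidden layer; merging by concatenation is an assumption.
        self.shared_out = nn.Linear(2 * hid_dim, emb_dim)

    def forward(self, x):
        h = self.shared_in(x)
        z = torch.cat([self.class_branch(h), self.domain_branch(h)], dim=-1)
        return self.shared_out(z)


def uniform_posterior_loss(logits):
    """Adversarial target for the label predictor: push its posterior toward
    equal probabilities over all classes instead of a one-hot output."""
    log_p = F.log_softmax(logits, dim=-1)
    n_classes = logits.size(-1)
    uniform = torch.full_like(log_p, 1.0 / n_classes)
    # Cross-entropy against the uniform distribution; equals KL(uniform || p)
    # up to a constant and is minimised when the posterior is uniform.
    return -(uniform * log_p).sum(dim=-1).mean()
```

In a training loop, this loss would be applied to the label predictor's outputs on the adversarial step, while the usual cross-entropy with one-hot speaker labels drives the class-discriminative objective.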
- Published
- 2022