
Robust Federated Learning Against Adversarial Attacks for Speech Emotion Recognition

Authors :
Chang, Yi
Laridi, Sofiane
Ren, Zhao
Palmer, Gregory
Schuller, Björn W.
Fisichella, Marco
Publication Year :
2022

Abstract

With advances in machine learning and speech processing, speech emotion recognition has become a popular research topic in recent years. However, speech data cannot be protected when it is uploaded to and processed on servers in internet-of-things applications of speech emotion recognition. Furthermore, deep neural networks have proven vulnerable to human-indistinguishable adversarial perturbations; adversarial attacks generated from such perturbations may cause deep neural networks to predict emotional states incorrectly. We propose a novel federated adversarial learning framework that protects both the data and the deep neural networks. The proposed framework consists of (i) federated learning for data privacy, and (ii) adversarial training at the training stage and randomisation at the testing stage for model robustness. Experiments show that the proposed framework effectively protects speech data locally and improves model robustness against a series of adversarial attacks.

Comment: 11 pages, 6 figures, 3 tables
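The abstract's three ingredients can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes a logistic-regression "client model", FGSM-style adversarial training locally, FedAvg-style aggregation on the server, and Gaussian input randomisation at test time. All function names and hyperparameters here are hypothetical.

```python
import numpy as np

def fgsm_perturb(X, grad_x, eps=0.1):
    """FGSM-style perturbation: step along the sign of the input gradient.
    (Assumed attack; the paper may use different attack generators.)"""
    return X + eps * np.sign(grad_x)

def local_adversarial_train(w, X, y, lr=0.1, eps=0.1, steps=20):
    """One client's update: train a logistic model on clean plus
    adversarial examples. Raw data X never leaves the client."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad_x = np.outer(p - y, w)          # loss gradient w.r.t. inputs
        X_adv = fgsm_perturb(X, grad_x, eps)
        Xa = np.vstack([X, X_adv])
        ya = np.concatenate([y, y])
        pa = 1.0 / (1.0 + np.exp(-Xa @ w))
        w = w - lr * Xa.T @ (pa - ya) / len(ya)
    return w

def fed_avg(client_weights, client_sizes):
    """Server step: aggregate client models weighted by local data size,
    so only model parameters (not speech data) are communicated."""
    return np.average(client_weights, axis=0, weights=np.asarray(client_sizes, float))

def predict_randomised(w, x, sigma=0.05, n=16, rng=None):
    """Test-time randomisation: vote over noisy copies of the input to
    blunt adversarial perturbations (assumed Gaussian noise)."""
    rng = rng or np.random.default_rng(0)
    xs = x + sigma * rng.standard_normal((n, x.size))
    return float((xs @ w > 0).mean() > 0.5)
```

A round would run `local_adversarial_train` on each client, send only the resulting weight vectors to `fed_avg`, and serve predictions through `predict_randomised`; the division of labour (privacy from federation, robustness from adversarial training plus randomisation) mirrors the framework described above.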

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2203.04696
Document Type :
Working Paper