
Cross-Modal Federated Human Activity Recognition via Modality-Agnostic and Modality-Specific Representation Learning

Authors:
Xiaoshan Yang
Baochen Xiong
Yi Huang
Changsheng Xu
Source:
Proceedings of the AAAI Conference on Artificial Intelligence, 36:3063-3071
Publication Year:
2022
Publisher:
Association for the Advancement of Artificial Intelligence (AAAI), 2022.

Abstract

In this paper, we propose a new task of cross-modal federated human activity recognition (CMF-HAR), which is conducive to promoting the large-scale use of HAR models on more local devices. To address this new task, we propose a feature-disentangled activity recognition network (FDARN), which comprises five modules: an altruistic encoder, an egocentric encoder, a shared activity classifier, a private activity classifier, and a modality discriminator. The altruistic encoder collaboratively embeds local instances on different clients into a modality-agnostic feature subspace. The egocentric encoder produces modality-specific features that cannot be shared across clients with different modalities. The modality discriminator adversarially guides the parameter learning of the altruistic and egocentric encoders. Through decentralized optimization with a spherical modality discriminative loss, our model not only generalizes well across different clients by leveraging the modality-agnostic features but also captures the modality-specific discriminative characteristics of each client. Extensive experimental results on four datasets demonstrate the effectiveness of our method.
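To make the described architecture concrete, the following is a minimal PyTorch sketch of how the five modules could fit together on a single client. The layer sizes, the class name FDARNClient, the fusion of the two classifier outputs, and the unit-normalization of features (suggested by the spherical loss) are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FDARNClient(nn.Module):
    """One client's network. Only the five-module layout follows the
    abstract; all sizes and the fused prediction are assumptions."""
    def __init__(self, input_dim, feat_dim, num_classes, num_modalities):
        super().__init__()
        # Altruistic encoder: modality-agnostic features, shared across clients.
        self.altruistic = nn.Sequential(
            nn.Linear(input_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim))
        # Egocentric encoder: modality-specific features, kept local.
        self.egocentric = nn.Sequential(
            nn.Linear(input_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim))
        # Shared and private activity classifiers.
        self.shared_clf = nn.Linear(feat_dim, num_classes)
        self.private_clf = nn.Linear(feat_dim, num_classes)
        # Modality discriminator: adversarially pushes the altruistic
        # features to be indistinguishable across modalities.
        self.modality_disc = nn.Linear(feat_dim, num_modalities)

    def forward(self, x):
        # Unit-normalization places features on a hypersphere, loosely
        # matching the "spherical modality discriminative loss" (assumed).
        z_a = F.normalize(self.altruistic(x), dim=-1)
        z_e = F.normalize(self.egocentric(x), dim=-1)
        # Summing the two classifier heads is one plausible fusion choice.
        logits = self.shared_clf(z_a) + self.private_clf(z_e)
        return z_a, z_e, logits

Consistent with the abstract's description, in each federated round only the altruistic encoder and the shared classifier would plausibly be aggregated by the server, while the egocentric encoder and private classifier stay local to each client; this parameter split is likewise an assumption for illustration.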

Subjects

Subjects:
General Medicine

Details

ISSN:
2374-3468 and 2159-5399
Volume:
36
Database:
OpenAIRE
Journal:
Proceedings of the AAAI Conference on Artificial Intelligence
Accession number:
edsair.doi...........7b7803bfde04383eb3ceda4ea769243d