
Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning

Authors :
Maziar Gomrokchi
Susan Amin
Hossein Aboutalebi
Alexander Wong
Doina Precup
Source :
IEEE Access, Vol. 11, pp. 42796-42808 (2023)
Publication Year :
2023
Publisher :
IEEE, 2023.

Abstract

While significant research advances have been made in deep reinforcement learning, no concrete adversarial attack strategies tailored to studying the vulnerability of deep reinforcement learning algorithms to membership inference attacks have been proposed in the literature. In such attacks, the adversary targets the set of collected input data on which the deep reinforcement learning algorithm has been trained. To address this gap, we propose an adversarial attack framework designed to test the vulnerability of a state-of-the-art deep reinforcement learning algorithm to membership inference attacks. In particular, we design a series of experiments to investigate the impact of temporal correlation, which naturally exists in reinforcement learning training data, on the probability of information leakage. Moreover, we compare the performance of collective and individual membership inference attacks against the deep reinforcement learning algorithm. Experimental results show that the proposed attack framework is surprisingly effective, inferring membership with accuracy exceeding 84% in individual mode and 97% in collective mode across three continuous-control MuJoCo tasks, which raises serious privacy concerns. Finally, we show that the learning state of the reinforcement learning algorithm significantly influences the severity of the privacy breach.
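To make the attack setting concrete, below is a minimal, hypothetical sketch of a membership inference attack on reinforcement learning trajectories, not the authors' implementation: an adversary summarizes each trajectory as a fixed-length feature vector and trains a binary classifier to distinguish trajectories that were in the target's training set ("members") from held-out ones ("non-members"). The feature choice, data shapes, and classifier are all illustrative assumptions; placeholder random data stands in for real trajectory features.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def trajectory_features(states, actions):
    # Summarize one trajectory (T x state_dim and T x action_dim arrays)
    # as a fixed-length vector; a hypothetical, simple feature choice.
    return np.concatenate([states.mean(axis=0), states.std(axis=0),
                           actions.mean(axis=0), actions.std(axis=0)])

# Assume the adversary holds feature vectors for trajectories known to be
# members (used in training) and non-members (held out), e.g. obtained via
# shadow training. Placeholder arrays stand in for trajectory_features()
# output here, purely to make the sketch runnable.
members     = rng.normal(0.5, 1.0, size=(200, 8))
non_members = rng.normal(0.0, 1.0, size=(200, 8))

X = np.vstack([members, non_members])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = member, 0 = non-member

attack = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Individual mode: decide membership for a single trajectory.
print("individual prediction:", attack.predict(non_members[:1])[0])

# Collective mode: aggregate per-trajectory membership probabilities over a
# batch of trajectories and threshold the mean score.
batch_score = attack.predict_proba(members[:32])[:, 1].mean()
print("collective membership score:", batch_score)

In this toy setup, the collective mode averages scores over a batch, which illustrates why the abstract reports higher accuracy for collective than for individual attacks: aggregation reduces per-trajectory noise.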

Details

Language :
English
ISSN :
2169-3536
Volume :
11
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.58af4099b024a45a16da109243a9b68
Document Type :
Article
Full Text :
https://doi.org/10.1109/ACCESS.2023.3270860