2022
DOI: 10.21203/rs.3.rs-2261000/v1
Preprint
Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning

Abstract: While significant research advances have been made in the field of deep reinforcement learning, there have been no concrete adversarial attack strategies in the literature tailored for studying the vulnerability of deep reinforcement learning algorithms to membership inference attacks. In such attacking systems, the adversary targets the set of collected input data on which the deep reinforcement learning algorithm has been trained. To address this gap, we propose an adversarial attack framework designed for testing t…


Cited by 3 publications (6 citation statements). References 39 publications.
“…Membership Inference Attack against RL [46], [23], [22]. Several membership inference attacks exist against DRL, which seem to address the problem studied in this paper.…”
Section: B. Existing Solutions
confidence: 99%
“…Competitors. Recalling Section III-B, existing methods [46], [23], [22] are designed for online reinforcement learning settings, assuming that the auditor can continuously interact with the environment to obtain new data as non-member examples. Based on the behavioral difference of the model between member and non-member examples, they build a membership inference method to detect whether an example was used to train the suspect model.…”
Section: A. Experimental Setup
confidence: 99%
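The behavioral-difference idea in the quote above can be sketched in a few lines: a policy tends to be more confident on states it was trained on, so the auditor calibrates a score threshold on known non-member data. This is a minimal illustrative sketch; the toy policy, the score function, and the threshold rule are assumptions for exposition, not the cited papers' method.

```python
# Hedged sketch: threshold-based membership inference from the behavioral
# difference between member and non-member examples. All names and data
# here are hypothetical illustrations.

def behavior_score(policy, state, action):
    """Probability the suspect policy assigns to the recorded action."""
    return policy(state)[action]

def infer_membership(policy, examples, threshold):
    """Flag (state, action) pairs whose score exceeds a threshold
    calibrated on known non-member examples."""
    return [behavior_score(policy, s, a) > threshold for s, a in examples]

# Toy suspect policy: overconfident on the state it was "trained" on.
def toy_policy(state):
    return {0: [0.95, 0.05], 1: [0.5, 0.5]}[state]

members = [(0, 0)]      # seen during training -> high score
non_members = [(1, 0)]  # fresh interaction data -> near-chance score

# Calibrate the threshold from non-member scores (max plus a margin).
threshold = max(behavior_score(toy_policy, s, a) for s, a in non_members) + 0.1

print(infer_membership(toy_policy, members + non_members, threshold))
# -> [True, False]
```

The key assumption, matching the quoted setting, is online access: the auditor must keep interacting with the environment to harvest fresh non-member examples for calibration.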
“…By using a binary classifier, we train labeling agents only for the environment map with which the MIA is faced. In another work, the authors of [18] develop an MIA that infers the membership of a batch-constrained deep Q-learning agent's roll-out trajectories stored in its replay buffer. We do not follow the above work's methodology because we do not restrict the algorithm that is used to train the reinforcement learning agents.…”
Section: Connection With The Existing MIAs
confidence: 99%
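The classifier-based attack on roll-out trajectories described above can be sketched as a binary classifier over simple trajectory features. This is a hedged sketch under stated assumptions: the reward-statistics features and the nearest-centroid classifier are illustrative stand-ins, not the approach of [18].

```python
# Hedged sketch of a classifier-based MIA on roll-out trajectories.
# Features, toy data, and the centroid classifier are assumptions only.

def trajectory_features(traj):
    """Summarise a trajectory (list of rewards) into simple statistics."""
    return (sum(traj) / len(traj), max(traj) - min(traj))

def train_attack(member_trajs, non_member_trajs):
    """Fit a nearest-centroid binary classifier over trajectory features."""
    def centroid(trajs):
        feats = [trajectory_features(t) for t in trajs]
        return tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))
    return centroid(member_trajs), centroid(non_member_trajs)

def is_member(attack, traj):
    """Predict membership by distance to the member vs non-member centroid."""
    m, n = attack
    f = trajectory_features(traj)
    dist = lambda c: sum((f[i] - c[i]) ** 2 for i in range(2))
    return dist(m) < dist(n)

# Toy data: member roll-outs earn higher, less varied reward.
members = [[1.0, 1.0, 0.9], [0.9, 1.0, 1.0]]
non_members = [[0.1, 0.5, 0.2], [0.3, 0.0, 0.4]]

attack = train_attack(members, non_members)
print(is_member(attack, [1.0, 0.9, 1.0]))   # behaves like training data
print(is_member(attack, [0.2, 0.1, 0.3]))   # behaves like fresh data
```

Note that this sketch, like the quoted remark, does not restrict how the agent was trained: it only consumes trajectories, which is why the authors above contrast it with the batch-constrained deep Q-learning setting of [18].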