2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01485
Towards Data-Free Model Stealing in a Hard Label Setting

Cited by 46 publications (19 citation statements)
References 18 publications
“…Based on Adversarial DFKD, recent works MAZE (Kariyappa, Prakash, and Qureshi 2021) and DFME (Truong et al. 2021) utilized gradient estimation techniques to achieve DFMS, which require the target model to return soft labels. DFMS-HL (Sanyal, Addepalli, and Babu 2022) therefore extended the problem to the hard-label setting. However, it still needs a proxy dataset or a synthetic dataset of random shapes generated on colored backgrounds, which breaks the truly data-free setting.…”
Section: Related Work
confidence: 99%
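The soft-label requirement mentioned above can be made concrete: zeroth-order (finite-difference) gradient estimation perturbs the input and differences a loss computed from the target's probability outputs, so it needs continuous scores. Below is a minimal illustrative sketch, not the actual MAZE/DFME implementation; the `query` and `estimate_gradient` names are assumptions for illustration. With a hard-label API, the loss would be piecewise constant and every finite difference would be zero almost everywhere.

```python
import numpy as np

def estimate_gradient(query, x, eps=1e-3, n_dirs=10):
    """Zeroth-order gradient estimate of a scalar loss on query outputs.

    `query(x)` is assumed to return a soft-label probability vector.
    If it returned only an argmax class, loss(x + eps*u) == loss(x)
    almost everywhere and the estimate would collapse to zero.
    """
    def loss(v):
        return -np.max(query(v))  # needs continuous probabilities

    grad = np.zeros_like(x)
    for _ in range(n_dirs):
        u = np.random.randn(*x.shape)
        u /= np.linalg.norm(u)
        # Forward finite difference along a random direction
        grad += (loss(x + eps * u) - loss(x)) / eps * u
    return grad / n_dirs

# Toy soft-label target: softmax over a linear model
w = np.array([[1.0, -1.0], [-0.5, 2.0]])
def soft_query(x):
    z = w @ x
    e = np.exp(z - z.max())
    return e / e.sum()

g = estimate_gradient(soft_query, np.array([0.3, -0.2]))
```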
“…Unlike previous work (Sanyal, Addepalli, and Babu 2022; Beetham et al. 2023), which uses M_T as a discriminator and plays a min-max game, we decouple the training of the generator and the training of M_C into two stages: 1) Substitute Data Generation and 2) Clone Model Training. In the first stage, we train a generator to synthesize substitute data that approximates the distribution of the target data and store the samples in a memory bank.…”
Section: Overview
confidence: 99%
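The two-stage decoupling described in this statement can be sketched as two independent loops joined only by a memory bank. This is a toy sketch under stated assumptions, not the authors' code: `ToyGenerator`, `ToyClone`, and `target_model` are made-up stand-ins.

```python
import random

class ToyGenerator:
    """Stand-in generator; a real one would take gradient steps in update()."""
    def sample(self, n=8):
        return [random.uniform(-1, 1) for _ in range(n)]
    def update(self, batch, labels):
        pass  # push synthesized data toward the target distribution

class ToyClone:
    def __init__(self):
        self.seen = 0
    def fit_step(self, x, y):
        self.seen += 1  # a real clone would minimize loss on (x, y)

def target_model(x):
    return int(x > 0)  # hard label only: a class index, no probabilities

memory_bank = []

# Stage 1: Substitute Data Generation (clone plays no role here)
gen = ToyGenerator()
for _ in range(5):
    batch = gen.sample()
    labels = [target_model(x) for x in batch]
    gen.update(batch, labels)
    memory_bank.extend(zip(batch, labels))

# Stage 2: Clone Model Training, fit on the stored bank alone
clone = ToyClone()
for _ in range(3):
    for x, y in random.sample(memory_bank, k=min(16, len(memory_bank))):
        clone.fit_step(x, y)
```

The design point is that, unlike the min-max formulations, neither loop needs the other running concurrently: the bank is filled once and reused.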
“…Additionally, not all models deployed on MLaaS systems possess the capability to furnish comprehensive predictive results. Most MLaaS systems [57,78] merely offer categorical labels for samples (e.g., the image depicts a cat or a dog, rather than the probabilities of the image belonging to various categories). These challenges force us to urgently study more practical model stealing attacks for graph classification.…”
Section: Introduction
confidence: 99%
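The hard-label restriction described in this statement is simply the difference between an API that returns a full probability vector and one that returns only the top class. A toy illustration, assuming made-up class names and logits (not any specific MLaaS API):

```python
import numpy as np

CLASSES = ["cat", "dog"]

def full_prediction(logits):
    """Soft labels: a probability for every class (rarely exposed)."""
    e = np.exp(logits - logits.max())
    return dict(zip(CLASSES, e / e.sum()))

def hard_label(logits):
    """What most MLaaS systems return: just the predicted class."""
    return CLASSES[int(np.argmax(logits))]

logits = np.array([2.0, 0.5])
probs = full_prediction(logits)  # per-class probabilities
label = hard_label(logits)       # a single categorical label
```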