2021
DOI: 10.1016/j.patcog.2021.107830
Copycat CNN: Are random non-Labeled data enough to steal knowledge from black-box models?

Cited by 12 publications (7 citation statements)
References 20 publications
“…We train models with different α for different datasets like the experiments above and set them up as blackbox victim models. Inspired by Correia-Silva [13], we use the designed Copycat DNN as the substitute model to extract trained models protected by our proposed scheme. The method aims at copying a target network into a substitute model by only performing queries.…”
Section: D) Model Extraction Attacks
confidence: 99%
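The excerpt above describes the core of a Copycat-style extraction: query a black-box victim with random, non-labeled inputs, record its predicted labels, and train a substitute on those (input, stolen-label) pairs. A minimal sketch of that loop, assuming a toy linear victim model and a least-squares substitute (both stand-ins for the real networks; all names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box victim: a linear classifier over 2-D inputs.
# The attacker never sees W_victim, only the hard-label predictions.
W_victim = rng.normal(size=(2, 3))  # 2 features -> 3 classes

def victim_predict(x):
    """Black-box oracle: returns only the predicted class label."""
    return np.argmax(x @ W_victim, axis=1)

# Copycat-style extraction: probe with random, non-labeled data and
# keep the oracle's answers as training labels for the substitute.
queries = rng.normal(size=(5000, 2))     # random unlabeled probe data
stolen_labels = victim_predict(queries)  # answers from the oracle

# Train the substitute by one-hot least squares (a minimal stand-in
# for gradient-based training of a copycat network).
onehot = np.eye(3)[stolen_labels]
W_sub, *_ = np.linalg.lstsq(queries, onehot, rcond=None)

def substitute_predict(x):
    return np.argmax(x @ W_sub, axis=1)

# Measure how often the substitute agrees with the victim on fresh data.
test_inputs = rng.normal(size=(1000, 2))
agreement = np.mean(substitute_predict(test_inputs) == victim_predict(test_inputs))
print(f"substitute/victim agreement: {agreement:.2f}")
```

The point of the sketch is the data flow, not the models: the attacker only ever calls `victim_predict`, exactly the black-box query access the citing papers assume.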
“…2) Infringement against DNN Model Algorithms: To date, various malicious attacks on DNN model algorithm have been proposed [13], [40], [42], [29], [55]. In these model illegitimate reproducing attacks, the attacker A aims to steal a DNN model F V of a victim set V or extract the targeted model parameters by making a series of unauthorized queries Q to F V and obtaining corresponding predictions F V (Q) in a black-box setting.…”
Section: B) Infringement on DNN Services
confidence: 99%
“…Although the CNNs and their variants have been successfully applied in numerous tasks, they contribute little to the understanding of the principle of neuronal computation because they are black-box models [29,30]. It is worth mentioning that the impressive results of CNNs are highly dependent on intensive pools of data, which indicates that their major strength is contributed by the availability of massive datasets [31]. However, the cost of accessing and annotating data is undoubtedly high due to the need for a large amount of data to increase the reliability of the computational results.…”
Section: Introduction
confidence: 99%