Black-Box Face Recovery from Identity Features (2020)
DOI: 10.1007/978-3-030-68238-5_34

Cited by 19 publications (23 citation statements)
References 21 publications
Citation types: 0 supporting, 22 mentioning, 0 contrasting
Year published: 2020–2024
“…We now discuss each of these and illustrate the feasibility of KD-based attacks. (1) Restricting Access to Training Data: While developers try their best to protect such IP assets, as discussed by [19], there remain numerous reasons for concern: (i) the developers might have bought the data from a vendor who can potentially sell it to others; (ii) intentionally or not, there is a distinct possibility of data leaks; (iii) many datasets are either similar to or subsets of large-scale public datasets (ImageNet [32], BDD100k [38], and so on), which can be effectively used as proxies; or (iv) model inversion techniques can be used to recover training data from a pre-trained model [37,43,16,30,3] in both white-box and black-box settings. Such methods [16], in fact, do not even require the soft outputs; the hard predicted label from the model suffices.…”
Section: Feasibility of KD-Based Stealing
Citation type: mentioning
confidence: 99%
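Point (iv) in the excerpt above can be made concrete with a small sketch. Below is a minimal, hypothetical label-only inversion loop in the spirit of the hard-label methods cited: random search that keeps only perturbations preserving the target label and prefers candidates that are robustly classified. The function `query_hard_label`, the image shape, and every hyperparameter are illustrative placeholders, not the implementation of any cited paper.

```python
# Hypothetical sketch of label-only (hard-label) model inversion.
# `query_hard_label` stands in for the victim's black-box API: it returns
# only the predicted class index, never confidence scores.
import numpy as np

def query_hard_label(image: np.ndarray) -> int:
    """Placeholder for the black-box model; replace with real API calls."""
    raise NotImplementedError

def label_only_inversion(target_class: int, shape=(64, 64, 3),
                         iters=10_000, step=0.05, probes=8, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random(shape)  # start from uniform noise in [0, 1]
    best_score = -1.0
    for _ in range(iters):
        # Propose a small random perturbation of the current image.
        cand = np.clip(x + step * rng.standard_normal(shape), 0.0, 1.0)
        if query_hard_label(cand) != target_class:
            continue  # reject proposals that leave the target class
        # Robustness proxy: fraction of noisy probes that keep the label.
        hits = sum(
            query_hard_label(
                np.clip(cand + step * rng.standard_normal(shape), 0.0, 1.0)
            ) == target_class
            for _ in range(probes)
        )
        score = hits / probes
        if score >= best_score:  # drift toward the interior of the class region
            x, best_score = cand, score
    return x
```

In practice, published hard-label attacks usually add an image prior or search in the latent space of a generative model so that candidates stay on the natural-image manifold; the plain pixel-space search above is only meant to show the query pattern.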
“…One obvious threat to face feature vector privacy comes from so-called "feature vector reconstruction" (FVR) techniques. FVR transforms the feature vector v back into the image x [10], [11], [12], [13], [14]. Prior work on inverse biometrics has concluded that reconstruction alone is a severe attack [15], [16].…”
Section: Introduction
Citation type: mentioning
confidence: 99%
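The v-to-x mapping this excerpt describes is what a parametric reconstruction looks like in code. The sketch below, under assumed names, trains a decoder R so that R(F(x)) ≈ x, where F is any frozen face-embedding model producing 512-dimensional vectors; the MLP decoder, feature size, and image resolution are illustrative choices, not a specific published FVR architecture.

```python
# Minimal sketch of parametric feature-vector reconstruction (FVR):
# learn a decoder R with R(F(x)) ≈ x on an attacker-controlled image set.
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Maps a 512-d identity feature back to a 3x64x64 image."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 4096), nn.ReLU(),
            nn.Linear(4096, 3 * 64 * 64), nn.Sigmoid(),
        )

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        return self.net(v).view(-1, 3, 64, 64)

def train_fvr(F, image_batches, epochs: int = 10, lr: float = 1e-3) -> Decoder:
    """F: frozen embedding model; image_batches: iterable of [B,3,64,64] tensors."""
    R = Decoder()
    opt = torch.optim.Adam(R.parameters(), lr=lr)
    for _ in range(epochs):
        for x in image_batches:
            with torch.no_grad():
                v = F(x)  # only forward queries to F are needed
            loss = nn.functional.mse_loss(R(v), x)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return R
```

Note that training R needs only forward access to F, which is why parametric reconstruction can also operate in black-box settings.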
“…Parametric reconstruction methods rely on a reconstruction model R trained specifically to invert vectors produced by a model F [13], [10], [20]. Nonparametric reconstruction methods use an iterative optimization process R to reconstruct images from F 's feature vectors [14], [21].…”
Section: Introduction
Citation type: mentioning
confidence: 99%
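The nonparametric variant can be sketched as direct pixel optimization against a target feature vector. One simplification to flag: this form backpropagates through F, so it is a white-box rendering of the iterative process the excerpt describes; black-box versions replace the true gradient with zeroth-order estimates. All names and hyperparameters below are illustrative.

```python
# Sketch of nonparametric reconstruction: optimize pixels so that F(x)
# matches a target feature vector v_target (cosine-similarity objective).
import torch

def optimize_reconstruction(F, v_target, shape=(1, 3, 64, 64),
                            steps: int = 2000, lr: float = 0.05):
    x = torch.rand(shape, requires_grad=True)  # start from noise
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        v = F(x.clamp(0.0, 1.0))
        # Minimize (1 - cosine similarity) between current and target features.
        loss = 1.0 - torch.nn.functional.cosine_similarity(v, v_target).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach().clamp(0.0, 1.0)
```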
“…The synthesis is implemented as a gradient ascent algorithm. By contrast, existing black-box attacks [2,20] are based on training an attack network that predicts the sensitive feature from the input confidence scores. Despite the exclusive focus on these two threat models, in practice ML models are often packed into a black box that only produces hard labels when queried.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
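The attack-network approach contrasted in this excerpt can also be sketched briefly: a small classifier is trained to map the victim model's confidence scores to a sensitive attribute. The score dimension, attribute cardinality, and data pipeline below are assumptions for illustration, not the setup of the cited attacks.

```python
# Sketch of a confidence-score attack network: predict a sensitive
# attribute from the victim model's output probabilities.
import torch
import torch.nn as nn

def train_attack_net(pairs, n_classes: int = 10, n_attr: int = 2,
                     epochs: int = 20, lr: float = 1e-3) -> nn.Module:
    """pairs: iterable of (scores [B, n_classes], attribute labels [B]) batches."""
    net = nn.Sequential(
        nn.Linear(n_classes, 128), nn.ReLU(),
        nn.Linear(128, n_attr),
    )
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        for scores, attr in pairs:
            loss = nn.functional.cross_entropy(net(scores), attr)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return net
```

This is exactly the interface the excerpt's last sentence questions: once a deployed model returns only hard labels, the confidence-score input to such a network disappears, which motivates the hard-label threat model.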