Advances in Neural Information Processing Systems 19 2007
DOI: 10.7551/mitpress/7503.003.0012
Efficient Methods for Privacy Preserving Face Detection

Abstract: Bob offers a face-detection web service where clients can submit their images for analysis. Alice would very much like to use the service, but is reluctant to reveal the content of her images to Bob. Bob, for his part, is reluctant to release his face detector, as he spent a lot of time, energy and money constructing it. Secure Multi-Party computations use cryptographic tools to solve this problem without leaking any information. Unfortunately, these methods are slow to compute and we introduce a couple of mac…


Cited by 21 publications (10 citation statements)
References 11 publications
“…We examine fairness performance through three different perspectives. Previous research [ 20 ] has indicated that differentially private machine learning models tend to perform worse on minority groups. To this point we evaluate the decay in accuracy for the different subgroups in the protected attribute.…”
Section: Results
Mentioning confidence: 99%
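The excerpt above describes measuring how much accuracy each protected-attribute subgroup loses when a differentially private model replaces a non-private baseline. A minimal sketch of that per-subgroup comparison follows; the function name, labels, and toy data are illustrative assumptions, not taken from the cited work.

```python
# Hypothetical sketch: per-subgroup accuracy decay between a baseline
# model and a differentially private (DP) model. All names and data
# below are illustrative, not from the cited paper.

def subgroup_accuracy_decay(y_true, y_base, y_dp, groups):
    """Return {group: baseline_accuracy - dp_accuracy} per subgroup."""
    decay = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        base_acc = sum(y_true[i] == y_base[i] for i in idx) / len(idx)
        dp_acc = sum(y_true[i] == y_dp[i] for i in idx) / len(idx)
        decay[g] = base_acc - dp_acc
    return decay

# Toy example: the smaller group "B" loses more accuracy under DP.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_base = [1, 0, 1, 1, 0, 1, 0, 1]   # baseline: all correct
y_dp   = [1, 0, 1, 0, 0, 1, 0, 0]   # DP model: extra errors
groups = ["A", "A", "A", "A", "A", "A", "B", "B"]
print(subgroup_accuracy_decay(y_true, y_base, y_dp, groups))
```

A larger decay for the minority subgroup is exactly the disparate-impact pattern the citing papers report for differentially private training.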
“…By performance, we mean not only the utility of the model (its accuracy, for example) but also how well the model performs for different subgroups of the dataset — the fairness of the model. The impact of machine learning models on minority subgroups is an active area of research, and several works have investigated the trade-offs among model accuracy, bias, and privacy [19][20][21][22]. However, only recently has bias caused by the use of synthetic data in downstream classification received attention [9,23,24].…”
Section: Introduction
Mentioning confidence: 99%
“…The performance of the PoolDiv mechanism is on par with traditional DP approaches for low-dimensional data regression and excels in high-dimensional data synthesis. [21, 22, 23]…”
Section: Discussion
Mentioning confidence: 99%
“…Deep learning models optimize an objective function over a set of arguments, meaning that any decisions taken in preprocessing and model construction can affect the capabilities of the system as a whole, and propagate subjective choices throughout ostensibly objective models (Hooker, 2021). For instance, several studies have examined algorithmic biases against underrepresented and/or marginalised groups (Bagdasaryan et al, 2019; Buolamwini & Gebru, 2018; Diakopoulos, 2015). Aside from domain‐specific benefits to code sharing, the larger scientific community has recently shifted towards open science frameworks, with several high‐profile journals requiring methodological transparency (Eglen et al, 2017; Stodden, 2011; Nature editorial policies, 2021; Science editorial policies, 2021).…”
Section: Discussion
Mentioning confidence: 99%