Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence 2022
DOI: 10.24963/ijcai.2022/419
Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks

Abstract: Membership inference attacks (MIAs) aim to determine whether a specific sample was used to train a predictive model. Knowing this may indeed lead to a privacy breach. Most MIAs, however, make use of the model's prediction scores - the probability of each output given some input - following the intuition that the trained model tends to behave differently on its training data. We argue that this is a fallacy for many modern deep network architectures. Consequently, MIAs will fail miserably since overconfidence l…
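To make the abstract's intuition concrete, here is a minimal sketch of the kind of score-based attack it describes, assuming a scikit-learn-style classifier exposing predict_proba; the function name, threshold value, and interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def score_based_mia(model, x, tau=0.9):
    """Guess that x was a training member if the model's top
    prediction score exceeds the threshold tau.

    Assumes a scikit-learn-style classifier with predict_proba;
    tau and the interface are illustrative choices, not taken
    from the paper.
    """
    scores = model.predict_proba(np.atleast_2d(x))  # shape (1, n_classes)
    return float(scores.max()) >= tau
```

The abstract's argument is that this score gap largely vanishes for many modern deep networks, which are highly confident on members and non-members alike, so a threshold test of this kind stops discriminating.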

Cited by 4 publications (5 citation statements)
References: 69 publications
“…Specifically, determining the conditions under which robust learning is possible, and to what extent, is one of the major problems in the field. In this regard, [128] have made several important contributions. They showed that robust learning is impossible in the distribution-free setting even when the adversary is restricted to perturbing just a single bit of the input.…”
Section: Hardness Results
confidence: 99%
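For context, a common way to formalise the robust-learning objective in this line of work is the "exact in the ball" robust risk over the Boolean hypercube; the display below is our reconstruction of that standard definition, not text quoted from the cited paper.

```latex
% Robust risk of a hypothesis h with respect to a target c under a
% Hamming-ball adversary with budget \rho:
\[
  \mathrm{R}_{\rho}(h,c) \;=\; \Pr_{x \sim D}\bigl[\, \exists\, z \in B_{\rho}(x) : h(z) \neq c(z) \,\bigr],
  \qquad
  B_{\rho}(x) = \{\, z \in \{0,1\}^n : d_H(x,z) \le \rho \,\}.
\]
```

Under this notion, the excerpt's hardness result says that even a budget of ρ = 1, i.e. an adversary that flips a single bit, rules out sample-efficient robust learning in the distribution-free setting.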
“…In this section, we show that the amount of data needed to ρ-robustly learn conjunctions under the uniform distribution has an exponential dependence on the adversary's budget ρ when the learner only has access to the EX and LMQ oracles. Here, the lower bound on the sample drawn from the example oracle is 2^ρ, which is the same as the lower bound for monotone conjunctions derived in Gourdeau et al. (2022), and the local membership query lower bound is 2^{ρ−1}. The result relies on showing that there exists a family of conjunctions that remain indistinguishable from each other on any sample of size 2^ρ and any sequence of 2^{ρ−1} LMQs with constant probability.…”
Section: A Local Membership Query Lower Bound For Conjunctions
confidence: 85%
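In display form, the two lower bounds quoted above read as follows, where m_EX and m_LMQ (our labels) denote the number of examples drawn from EX and the number of local membership queries, respectively:

```latex
\[
  m_{\mathrm{EX}} \;\ge\; 2^{\rho}
  \qquad \text{and} \qquad
  m_{\mathrm{LMQ}} \;\ge\; 2^{\rho - 1},
\]
```

that is, the cost of ρ-robustly learning conjunctions under the uniform distribution is exponential in the adversary's budget.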
“…We use the term robustness threshold from Gourdeau et al. (2021) to denote an adversarial budget function ρ : N → R of the input dimension n such that, if the adversary is allowed perturbations of magnitude ρ(n), then there exists a sample-efficient ρ(n)-robust learning algorithm, and if the adversary's budget is ω(ρ(n)), then there does not exist such an algorithm. Robustness thresholds are distribution-dependent when the learner only has access to the example oracle EX, as seen in Gourdeau et al. (2021, 2022). Now, since the local membership query lower bound above has an exponential dependence on ρ, any perturbation budget ω(log n) will require a sample and query complexity that is superpolynomial in n, giving the following corollary.…”
Section: A Local Membership Query Lower Bound For Conjunctions
confidence: 98%
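Spelling out the counting step behind the corollary (our notation): combining the robustness-threshold definition with the 2^ρ lower bound above gives

```latex
\[
  \rho(n) = \omega(\log n)
  \;\Longrightarrow\;
  2^{\rho(n)} = 2^{\omega(\log n)} = n^{\omega(1)},
\]
```

so any budget growing faster than log n forces a sample and query complexity that is superpolynomial in the input dimension n, matching the excerpt's conclusion.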
“…A natural question then arises as to whether one can learn a model that is guaranteed to be robust. Building on the positive and negative theoretical results on robust learning against evasion attacks [58], [59], [60], [61], it would be interesting to generalise these results to neural network models and to develop implementable frameworks that can provide provable guarantees on robustness.…”
Section: Future Challenges
confidence: 99%