2023
DOI: 10.1007/978-3-031-26409-2_13

Adversarially Robust Decision Tree Relabeling

Cited by 4 publications (4 citation statements)
References 11 publications

“…The first is a pre-processing procedure that partitions the features between the trees in the ensemble in such a way that it becomes impossible to ever trick the majority of the trees (Calzavara et al, 2021). The second is a post-processing procedure that relabels the leaves of the ensemble to make it more difficult to find neighboring leaves that predict different classes (Vos and Verwer, 2023). Chen et al (2019a), Verwer (2021), and Chen et al (2021) propose changes to the splitting procedure.…”
Section: Improving Robustness
confidence: 99%
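
The relabeling idea quoted above can be made concrete for a single tree. The sketch below is a simplified illustration, not Vos and Verwer's actual algorithm: it reads each leaf's axis-aligned region out of a fitted scikit-learn tree, looks for pairs of opposite-class leaves within L-infinity distance eps of one another, and relabels the leaf with less training support to match its neighbor. The function names, the eps parameter, and the greedy fewest-samples heuristic are all assumptions made for illustration.

```python
# Hypothetical sketch of post-hoc leaf relabeling; not the published algorithm.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.tree import DecisionTreeClassifier

def leaf_boxes(clf):
    """Return {leaf_id: (lo, hi)} axis-aligned boxes implied by the split paths."""
    t = clf.tree_
    boxes = {}
    def walk(node, lo, hi):
        if t.children_left[node] == -1:          # node is a leaf
            boxes[node] = (lo, hi)
            return
        f, thr = t.feature[node], t.threshold[node]
        hi_left = hi.copy(); hi_left[f] = min(hi[f], thr)
        lo_right = lo.copy(); lo_right[f] = max(lo[f], thr)
        walk(t.children_left[node], lo, hi_left)
        walk(t.children_right[node], lo_right, hi)
    walk(0, np.full(t.n_features, -np.inf), np.full(t.n_features, np.inf))
    return boxes

def linf_box_distance(box_a, box_b):
    """L-infinity distance between two boxes (0 if they overlap or touch)."""
    (alo, ahi), (blo, bhi) = box_a, box_b
    gaps = np.maximum(blo - ahi, alo - bhi)
    return max(0.0, float(gaps.max()))

def relabel_for_robustness(clf, eps=0.1):
    """Greedy heuristic: whenever two opposite-class leaves lie within eps of
    each other, relabel the leaf with fewer training samples to match the
    stronger one, writing the new class back into the sklearn tree."""
    t = clf.tree_
    boxes = leaf_boxes(clf)
    leaves = list(boxes)
    label = {l: int(np.argmax(t.value[l])) for l in leaves}
    for i, a in enumerate(leaves):
        for b in leaves[i + 1:]:
            if label[a] == label[b]:
                continue
            if linf_box_distance(boxes[a], boxes[b]) >= eps:
                continue
            weak, strong = sorted((a, b), key=lambda l: t.n_node_samples[l])
            label[weak] = label[strong]
            t.value[weak][0, :] = 0.0            # overwrite leaf distribution
            t.value[weak][0, label[strong]] = 1.0

X, y = make_moons(300, noise=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
relabel_for_robustness(clf, eps=0.1)             # predict() now uses the new labels
```

Overwriting `tree_.value` in place changes subsequent `predict` calls without retraining. Note the greedy pass is order-dependent and ignores accuracy on clean data, which is part of what a principled relabeling method has to account for.
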
“…They provide intuitive insights into how a model arrives at its conclusions, making them valuable for both analysis and application. However, DT models are susceptible to adversarial examples (Vos and Verwer 2021; Chen et al 2019), where small, carefully crafted perturbations can lead to significant misclassifications. Moreover, DT models can be exploited in both white-box and black-box settings, where the attacker has either full knowledge or no knowledge of the model's parameters (Chen et al 2019).…”
Section: Introduction
confidence: 99%
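
To make the quoted vulnerability concrete, here is a minimal white-box sketch, assuming a single scikit-learn tree rather than an ensemble (and not the attack of Chen et al 2019): every leaf of a decision tree covers an axis-aligned box, so the smallest L-infinity perturbation that changes the prediction is the distance to the nearest box whose leaf predicts another class. The helper names and the margin parameter are illustrative assumptions.

```python
# Hypothetical white-box evasion sketch for a single decision tree.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.tree import DecisionTreeClassifier

def leaf_regions(clf):
    """Yield (lo, hi, label) for every leaf region of a fitted tree."""
    t = clf.tree_
    stack = [(0, np.full(t.n_features, -np.inf), np.full(t.n_features, np.inf))]
    while stack:
        node, lo, hi = stack.pop()
        if t.children_left[node] == -1:          # leaf
            yield lo, hi, clf.classes_[int(np.argmax(t.value[node]))]
            continue
        f, thr = t.feature[node], t.threshold[node]
        hi_l = hi.copy(); hi_l[f] = min(hi[f], thr)
        lo_r = lo.copy(); lo_r[f] = max(lo[f], thr)
        stack.append((t.children_left[node], lo, hi_l))
        stack.append((t.children_right[node], lo_r, hi))

def minimal_evasion(clf, x, margin=1e-6):
    """Smallest L-infinity perturbation of x that flips the prediction.
    `margin` nudges the point strictly past split thresholds; it assumes
    leaf regions are wider than 2 * margin."""
    pred = clf.predict(x.reshape(1, -1))[0]
    best, best_dist = None, np.inf
    for lo, hi, label in leaf_regions(clf):
        if label == pred:
            continue
        x_adv = np.clip(x, lo + margin, hi - margin)  # project into the region
        dist = float(np.abs(x_adv - x).max())
        if dist < best_dist:
            best, best_dist = x_adv, dist
    return best, best_dist

X, y = make_moons(200, noise=0.2, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
x_adv, radius = minimal_evasion(clf, X[0])
print(radius, clf.predict(x_adv.reshape(1, -1))[0])  # flipped prediction
```

For a single tree this enumeration is exact and cheap; for an ensemble the relevant regions are intersections of leaves across trees and multiply combinatorially, which is what makes ensemble attacks and verification hard.
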
“…Although many works have suggested ways of making DT models robust against adversarial attacks (Vos and Verwer 2022b; Ranzato and Zanella 2021; Calzavara et al 2020; Yang et al 2020; Guo et al 2022; Vos and Verwer 2022a), all of these solutions modify the model or the training process. This is a disadvantage for several reasons: (1) changing model parameters, or the nodes' threshold values in the case of DTs, often harms a model's performance on clean data (Andriushchenko and Hein 2019; Chen et al 2019; Vos and Verwer 2021); (2) these defences cannot be applied to deployed models; and (3) some of these methods cannot be applied to every kind of DT, which limits a developer's options. This holds particular significance for mission-critical systems.…”
Section: Introduction
confidence: 99%
“…First, verification techniques attempt to ascertain how robust a learned ensemble is to adversarial examples [9,12,34] by empirically determining how much an example would have to be perturbed (according to some norm) for its predicted label to change. Second, the problem can be addressed at training time by trying to learn a more robust model: adding adversarial examples to the training set [23], pruning the training data [49], changing aspects of the learner such as the splitting criteria [1,7,8,44] or the objective [21], relabeling the values in the leaves [45], using constraint solvers to learn optimal trees [46], or interleaving learning and verification [35].…”
Section: Introduction
confidence: 99%
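
One of the training-time options listed above, adding adversarial examples to the training set, is simple enough to sketch. The version below is a generic illustration, not the method of reference [23]: it uses naive random search inside an L-infinity eps-ball for perturbations that flip the current tree's prediction, then retrains on the augmented data. The eps value, trial count, and random-search strategy are all assumptions.

```python
# Hypothetical adversarial-training sketch; not the method of reference [23].
import numpy as np
from sklearn.datasets import make_moons
from sklearn.tree import DecisionTreeClassifier

def random_search_adversarial(clf, X, y, eps=0.15, n_trials=20, seed=0):
    """For each point, randomly search the L-infinity eps-ball for a
    perturbation that flips the tree's prediction; keep the original
    point when no trial succeeds."""
    rng = np.random.default_rng(seed)
    X_adv = X.copy()
    for i in range(len(X)):
        for _ in range(n_trials):
            cand = X[i] + rng.uniform(-eps, eps, size=X.shape[1])
            if clf.predict(cand.reshape(1, -1))[0] != y[i]:
                X_adv[i] = cand
                break
    return X_adv

X, y = make_moons(300, noise=0.2, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
for _ in range(3):                               # augment-and-retrain rounds
    X_adv = random_search_adversarial(clf, X, y)
    clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(
        np.vstack([X, X_adv]), np.concatenate([y, y]))
```

Each round pushes the splits away from regions where small perturbations cross a decision boundary; stronger attacks than random search would give a tighter robustness signal at higher cost.
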