2020
DOI: 10.48550/arxiv.2012.10438
Preprint
Efficient Training of Robust Decision Trees Against Adversarial Examples

Cited by 1 publication (5 citation statements)
References 0 publications
“…Then to compare the practical performance we run the algorithms on eight popular datasets (Chen et al 2019;Vos and Verwer 2020) and varying perturbation radii ( ). All of our experiments ran on 15 Intel Xeon CPU cores and 72 GB of RAM total, where each algorithm ran on a single core.…”
Section: Results
confidence: 99%
“…This flexibility comes at a cost in run-time, as it uses an iterative solver to optimize this score for each split it learns. GROOT (Vos and Verwer 2020) improves the greedy procedure by efficiently computing the worst-case Gini impurity and allowing users to specify box-shaped attacker perturbation limits. In this paper, we compare against GROOT and TREANT as these greedy methods achieve state-of-the-art scores.…”
Section: Robust Decision Trees
confidence: 99%
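The citation statement above refers to computing the worst-case Gini impurity of a split when an attacker can perturb feature values within a box. As an illustration only (this is not the authors' code, and the function names `gini` and `worst_case_gini` are invented here), the idea can be sketched for a one-dimensional split with perturbation radius eps: points within eps of the threshold can be pushed to either side, and the attacker assigns them to maximize the impurity of the split.

```python
# Hedged sketch of worst-case Gini impurity under an L-inf attacker
# with radius eps on a single feature. Not the GROOT implementation:
# GROOT derives the adversarial assignment analytically, while this
# sketch enumerates assignments by brute force for clarity.

def gini(n0, n1):
    """Gini impurity of a node with n0 class-0 and n1 class-1 samples."""
    n = n0 + n1
    if n == 0:
        return 0.0
    p0, p1 = n0 / n, n1 / n
    return 1.0 - p0 * p0 - p1 * p1

def worst_case_gini(xs, ys, threshold, eps):
    """Worst-case weighted Gini of the split x <= threshold,
    when each x may be shifted by at most eps by an attacker."""
    left = [0, 0]     # class counts fixed on the left side
    right = [0, 0]    # class counts fixed on the right side
    movable = [0, 0]  # class counts the attacker can place freely
    for x, y in zip(xs, ys):
        if x <= threshold - eps:
            left[y] += 1
        elif x > threshold + eps:
            right[y] += 1
        else:
            movable[y] += 1
    # Attacker chooses how many movable points of each class go left,
    # maximizing the weighted impurity of the resulting split.
    worst = 0.0
    for m0 in range(movable[0] + 1):
        for m1 in range(movable[1] + 1):
            l0, l1 = left[0] + m0, left[1] + m1
            r0 = right[0] + movable[0] - m0
            r1 = right[1] + movable[1] - m1
            n = l0 + l1 + r0 + r1
            score = ((l0 + l1) * gini(l0, l1)
                     + (r0 + r1) * gini(r0, r1)) / n
            worst = max(worst, score)
    return worst
```

A robust tree learner would then pick the split minimizing this worst-case score rather than the ordinary Gini, which is why an efficient (closed-form) computation of the adversarial assignment, as in GROOT, matters in practice.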