2021
DOI: 10.48550/arxiv.2101.00157
Preprint
Active Learning Under Malicious Mislabeling and Poisoning Attacks

Cited by 2 publications (2 citation statements)
References 24 publications
“…In our further study, we may apply our proposed consistency check to local model weights for defending against such data poisoning attacks. More precisely, we may follow the consistency check framework proposed in [19].…”
Section: Discussion
Confidence: 99%
“…For recent treatments of this topic and further references, see [36,31]. A few recent works have considered data poisoning in the active learning setting [26,39], with defenses focusing on modifying the setting rather than the algorithm. Further, some existing works in the active learning regime [24,14] consider the presence of label noise, out-of-distribution examples, and redundancy in the dataset.…”
Section: Related Work
Confidence: 99%