2021
DOI: 10.48550/arxiv.2102.04291
Preprint

A Real-time Defense against Website Fingerprinting Attacks

Abstract: Anonymity systems like Tor are vulnerable to Website Fingerprinting (WF) attacks, in which a local passive eavesdropper infers the victim's activity. Current WF attacks based on deep learning classifiers have successfully overcome numerous proposed defenses. While recent defenses leveraging adversarial examples offer promise, these adversarial examples can only be computed after the network session has concluded, and thus offer users little protection in practical settings. We propose Dolos, a system that modifies use…

Cited by 2 publications (2 citation statements)
References 66 publications (114 reference statements)
“…The former generates adversarial textured patches with reinforcement learning, while the latter manipulates real stickers' positions and rotation angles for physical attacks on face recognition. Adversarial patch attacks have also been proposed for other task DNN models, such as object detection [28], [39], [71], semantic segmentation [50], and network traffic analysis [53]. Shapeshifter [8] introduces a physical-world attack on Faster R-CNN object detector [47] by perturbing stop signs.…”
Section: A. Adversarial Patch Attacks
confidence: 99%
“…There have been adversarial patch attacks proposed in other domains such as object detection [39], semantic segmentation [40], and network traffic analysis [41]. In this paper, we focus on test-time attacks against image classification models.…”
Section: A. Adversarial Patch Attacks
confidence: 99%