2020 International Conference on Networking and Network Applications (NaNA) 2020
DOI: 10.1109/nana51271.2020.00020
CPWF: Cross-Platform Website Fingerprinting Based on Multi-Similarity Loss

Cited by 2 publications (2 citation statements) · References 1 publication
“…Dahanayaka et al. [36], [37] analyzed CNNs for WF attacks and found that they focus mainly on the transitions between uploads and downloads at the front of a trace, exhibit few-shot learning capabilities, and outperform RNNs owing to their resilience to random shifts in the data. Shihao Wang et al. [38] presented a cross-platform website fingerprinting (CPWF) attack based on the Multi-Similarity Loss from deep metric learning, which guides the deep learning model to extract an effective feature set. Ramezani et al. [39] developed an LSTM-based multi-label classifier that predicts the websites a user visited within a given period; the classifier uses as features the server names that appear, in chronological order, in TLSv1.2 and TLSv1.3 Client Hello packets.…”
Section: A. Approaches
confidence: 99%
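The Multi-Similarity Loss mentioned above is a general deep-metric-learning objective (Wang et al., CVPR 2019) that pulls same-class embeddings together and pushes different-class embeddings apart. The following NumPy sketch shows the idea on plain embedding vectors; the hyper-parameter values are illustrative defaults, not the settings used by CPWF.

```python
import numpy as np

def multi_similarity_loss(embeddings, labels, alpha=2.0, beta=50.0, lam=0.5):
    """Sketch of the Multi-Similarity (MS) loss used in deep metric learning.
    alpha/beta/lam are illustrative hyper-parameters, not CPWF's settings."""
    # Cosine similarity between every pair of L2-normalized embeddings.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = e @ e.T
    n = len(labels)
    loss = 0.0
    for i in range(n):
        pos = (labels == labels[i]) & (np.arange(n) != i)  # same-class pairs
        neg = labels != labels[i]                          # different-class pairs
        if not pos.any() or not neg.any():
            continue
        # Positive term: penalizes same-class pairs whose similarity is low ...
        pos_term = np.log1p(np.sum(np.exp(-alpha * (sim[i, pos] - lam)))) / alpha
        # ... negative term: penalizes different-class pairs whose similarity is high.
        neg_term = np.log1p(np.sum(np.exp(beta * (sim[i, neg] - lam)))) / beta
        loss += pos_term + neg_term
    return loss / n
```

Minimizing this loss over traffic-trace embeddings is what lets a metric-learning WF model cluster traces of the same website across platforms.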
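The multi-label classifier of Ramezani et al. consumes a chronological sequence of TLS Client Hello server names (SNI) and emits one probability per candidate website. The toy forward pass below sketches that data flow with random, untrained weights in pure NumPy; all dimensions and names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_multilabel_forward(name_ids, vocab, hidden=16, n_sites=5):
    """Toy forward pass of an LSTM multi-label classifier over a sequence of
    SNI values (encoded as integer ids).  Weights are random: this sketches
    the data flow only, not a trained model."""
    emb = rng.standard_normal((vocab, 8))                 # server-name embedding table
    Wx = rng.standard_normal((4 * hidden, 8)) * 0.1       # input-to-gates weights
    Wh = rng.standard_normal((4 * hidden, hidden)) * 0.1  # hidden-to-gates weights
    Wo = rng.standard_normal((n_sites, hidden)) * 0.1     # output projection
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h, c = np.zeros(hidden), np.zeros(hidden)
    for t in name_ids:                                    # one step per observed SNI
        gates = Wx @ emb[t] + Wh @ h
        i, f, g, o = np.split(gates, 4)                   # input/forget/cell/output gates
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    # Independent sigmoids (not softmax): several sites can be predicted at once,
    # which is what makes the classifier multi-label.
    return sigmoid(Wo @ h)

probs = lstm_multilabel_forward([3, 1, 4, 1], vocab=10)
```

The key design point is the sigmoid output layer: each website gets an independent probability, so a single observation window can be attributed to multiple visited sites.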
“…When the size of the world was significantly increased, its performance degraded significantly, to 30% precision and 70% recall.

Attack [reference]: DNN architecture(s)
[26]: SDAE, CNN, LSTM
DF [27]: CNN
GRU and ResNet [28]: GRU, ResNet-50
p-FP [29]: MLP, CNN
Cache-based WF [30], [31]: CNN, LSTM
Var-CNN [32]: ResNet-18
Tik-Tok [34]: DF
2ch-TCN [35]: CNN
Realistic WF [109]: CNN, LSTM
Multi-session WF [39]: LSTM
Side-channel information-based WF [41]: CNN, LSTM
BurNet [43]: CNN
DNNF [44]: CNN
GAP-WF [45]: GNN
Cross-trace WF [46]: DF
DNN with Blind adversarial training [2]: DF
DNN with Tripod data augmentation [47]: DF, Var-CNN, ResNet-18, ResNet-34, VGG-16, VGG-19
DNN with HDA data augmentation [48]: Var-CNN, ResNet-34
Microarchitecture-based WF [49]: 1-D CNN
BAPM [110]: CNN, Self-attention
FDF [52]: CNN, FC, Self-attention
snWF [53]: CNN
WFD [111]: 1-D ResNets
DNN with Minipatch adversarial training [54]: DF
DNN with Bionic data augmentation [55]: Var-CNN
Semi-supervised learning: GANDaLF [40]: GAN; PAS [114]: DCNN, DF, AWF
Transfer learning: AF [42]: Domain adversarial network; TLFA [51]: CNN, MLP
Metric learning: TF [33]: Triplet networks; CPWF [38]: CNN; CNN-BiLSTM-based Siamese networks [50]: Siamese networks, CNN, LSTM; Online WF [56]: TF
Meta-learning: MBL [57]: CNN…”
Section: Performance
confidence: 99%