2022
DOI: 10.1109/taslp.2021.3130970
Efficient Combinatorial Optimization for Word-Level Adversarial Textual Attack

Cited by 22 publications (9 citation statements)
References 33 publications
“…Zang et al 13 adopt a particle swarm optimization algorithm to search for possible perturbations, which are obtained by substituting words with synonyms that share the same sememes as the original word. Liu et al 14 utilize a local search algorithm to search for possible perturbations, which are generated by substituting words with synonyms from WordNet, HowNet, or a word embedding space. Jin et al 33 propose a strong baseline that ranks words by the logit output of the target model on the gold label and replaces them with synonyms from a semantically enhanced embedding 50 in a greedy way.…”
Section: Related Work (mentioning)
confidence: 99%
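The greedy word-ranking attack summarized in this statement (rank words by how much they affect the target model's score on the gold label, then substitute synonyms greedily) can be illustrated with a short sketch. This is a minimal illustration only, assuming a black-box classifier that exposes class probabilities; `model_proba` and `get_synonyms` are hypothetical stand-ins, not APIs from the cited papers or from the reviewed method.

```python
# Minimal sketch of a greedy word-substitution attack as described above:
# rank words by their effect on the gold-label score, then replace them
# with synonyms one position at a time. `model_proba` and `get_synonyms`
# are hypothetical stand-ins, not functions from the cited papers.

from typing import Callable, List

def greedy_word_attack(
    words: List[str],
    gold_label: int,
    model_proba: Callable[[List[str]], List[float]],  # text -> class probabilities
    get_synonyms: Callable[[str], List[str]],          # word -> candidate substitutes
) -> List[str]:
    base = model_proba(words)[gold_label]

    # 1) Word importance: drop in the gold-label score when a word is masked.
    importance = []
    for i in range(len(words)):
        masked = words[:i] + ["[UNK]"] + words[i + 1:]
        importance.append((base - model_proba(masked)[gold_label], i))

    # 2) Greedy substitution in order of decreasing importance.
    adv = list(words)
    for _, i in sorted(importance, reverse=True):
        best_word, best_score = adv[i], model_proba(adv)[gold_label]
        for cand in get_synonyms(adv[i]):
            trial = adv[:i] + [cand] + adv[i + 1:]
            score = model_proba(trial)[gold_label]
            if score < best_score:
                best_word, best_score = cand, score
        adv[i] = best_word
        # Stop as soon as the prediction flips away from the gold label.
        probs = model_proba(adv)
        if max(range(len(probs)), key=probs.__getitem__) != gold_label:
            break
    return adv
```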
“…The discrete nature of text makes adversarial attacks on text more challenging than those on images and speech. Existing textual adversarial attacks can be roughly divided into three types according to the attack granularity, namely char-level, [6][7][8][9][10][11] word-level, [12][13][14][15][16][17][18][19][20][21][22][23] and sentence-level. [24][25][26][27][28][29][30] The methods belonging to each type can be further divided into two categories: white-box and black-box.…”
Section: Introduction (mentioning)
confidence: 99%
“…Score-based attacks leverage the prediction probabilities of the target model to guide the search over the space of adversarial examples. Specifically, they are mostly based on word-level substitution [7]- [12], [20], [21], which crafts adversarial examples by replacing words in the original input and can be formulated as a combinatorial optimization problem. Previous studies have used various methods to solve this problem, including population-based search algorithms [7], [10], greedy algorithms [8], [9], [11], [12], and Metropolis-Hastings sampling [20].…”
Section: Related Work (mentioning)
confidence: 99%
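The combinatorial-optimization view mentioned in this statement can be made concrete with a schematic sketch: each position in the input has a list of candidate substitutes, and the attacker searches for the combination that minimizes the target model's probability on the gold label. The simple population-based search below is only an illustration of a score-based, black-box search under these assumptions, not the specific algorithm of the indexed paper or of any cited work; `model_proba` and the candidate lists are hypothetical inputs.

```python
# Schematic framing of word-level substitution as combinatorial optimization:
# pick one candidate per position so that the target model's score on the
# gold label is minimized. The random population search below is only an
# illustration of a score-based (black-box) search, not any cited algorithm.

import random
from typing import Callable, List, Sequence

def population_search(
    candidates: Sequence[Sequence[str]],              # candidates[i] = substitutes for position i (original word included)
    gold_label: int,
    model_proba: Callable[[List[str]], List[float]],  # text -> class probabilities
    pop_size: int = 20,
    iters: int = 50,
) -> List[str]:
    def score(sol: List[str]) -> float:
        # Lower gold-label probability = better adversarial candidate.
        return model_proba(sol)[gold_label]

    def random_solution() -> List[str]:
        return [random.choice(c) for c in candidates]

    pop = [random_solution() for _ in range(pop_size)]
    for _ in range(iters):
        pop.sort(key=score)                 # best (lowest score) first
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            # Uniform crossover plus a single-position mutation.
            child = [random.choice(pair) for pair in zip(a, b)]
            j = random.randrange(len(child))
            child[j] = random.choice(candidates[j])
            children.append(child)
        pop = parents + children
    return min(pop, key=score)
```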
“…Deep neural networks (DNNs) have achieved significant progress in a wide range of applications, such as image classification [9], face recognition [27], object detection [28], speech recognition [14] and machine translation [3]. Despite their success, deep learning models have exhibited vulnerability to adversarial attacks [12,20,21,36]. Crafted by adding small perturbations to benign inputs, adversarial examples (AEs) can fool DNNs into making wrong predictions, which poses a critical threat, especially in security-sensitive scenarios such as autonomous driving [34].…”
Section: Introduction (mentioning)
confidence: 99%