2023
DOI: 10.1016/j.eswa.2022.119170
Efficiently generating sentence-level textual adversarial examples with Seq2seq Stacked Auto-Encoder

Cited by 6 publications (2 citation statements)
References 78 publications
“…(2) Physical attacks 16–19 generate adversarial perturbations on physically existing objects. The most prominent of these, patch attacks, 16,20 paste adversarial patches or disguises onto the target object. Another type of physical attack 17,19 projects or emits light or a laser onto the target object.…”
Section: Introduction
confidence: 99%
“…The autoencoder [18], introduced by Hinton and Salakhutdinov in 2006, is a neural network that can be stacked to form a deep architecture for nonlinear dimensionality reduction of raw data. It has since been applied successfully in many areas, such as images [19], text [20], and audio [21]. Autoencoders can also learn more useful and complex representations when trained with a local denoising criterion [22,23].…”
Section: Introduction
confidence: 99%