2020
DOI: 10.1002/int.22260
Adversarial attacks on text classification models using layer‐wise relevance propagation

Cited by 10 publications (7 citation statements); references 22 publications.
“…As automatic TC is of practical significance for the efficient management and effective utilization of text information, it has become an active research topic in a variety of domains, such as information retrieval, data mining, and statistical learning. During the past few years, a great number of statistical and machine learning methods have been proposed to address this problem. 71,72 …”
Section: Results (mentioning)
confidence: 99%
“…During the past few years, a great number of statistical and machine learning methods have been proposed to address this problem. 71,72 In this section, the proposed InBLMM SLB is applied as a classifier for the TC task, and experimental results are reported on two publicly available data sets, namely WebKB and 20Newsgroups. WebKB is composed of 4199 web pages from four categories: course, faculty, project, and student.…”
Section: Text Categorization (mentioning)
confidence: 99%
“…Prior research shows that ML-based classifiers are vulnerable to evasion attacks. 12–19 Such attacks have been extensively studied in image recognition and malware detection, but little has been done in anti-phishing. This is potentially due to a new challenge in this domain: unlike adversarial images, which need only preserve appearance, and adversarial malware, which needs only preserve functionality, adversarial phishing websites have to preserve appearance and functionality simultaneously to be effective for web phishing.…”
Section: Introduction (mentioning)
confidence: 99%
“…1 It has become a fundamental component of many computer vision tasks. However, most CNNs are susceptible to adversarial examples, 2–4 which lead to high-confidence misclassification resulting from small, crafted perturbations of the original image. The effect of adversarial examples shows that CNNs have serious security issues despite their excellent performance.…”
(mentioning)
confidence: 99%
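The mechanism described in the quote above — a small, crafted perturbation flipping a confident prediction — can be sketched with the classic fast gradient sign method (FGSM). The toy logistic-regression classifier, its weights, and the step size below are illustrative assumptions, not taken from the cited paper or any of the citing works:

```python
import numpy as np

# A minimal FGSM-style evasion sketch against a toy logistic-regression
# classifier. All weights, inputs, and the step size are illustrative
# assumptions, not from the cited works.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Fast Gradient Sign Method: move each feature by eps in the
    direction that increases the model's loss."""
    p = sigmoid(w @ x + b)           # model's confidence for class 1
    grad_x = (p - y_true) * w        # gradient of logistic loss w.r.t. x
    return x + eps * np.sign(grad_x)

w = np.linspace(-1.0, 1.0, 64)       # toy weight vector (no zero entries)
b = 0.0
x = 2.0 * w / np.linalg.norm(w)      # an input classified as class 1 with high confidence
y = 1.0

p_clean = sigmoid(w @ x + b)         # confident, correct prediction
x_adv = fgsm_perturb(x, w, b, y, eps=0.3)
p_adv = sigmoid(w @ x_adv + b)       # label flips under a perturbation
                                     # bounded per-feature by eps (L-infinity)
```

Taking the sign of the gradient, rather than the gradient itself, bounds every feature's change by eps while still letting the small per-feature shifts accumulate across many dimensions — which is why the classifier's confidence collapses even though no single input feature moves far.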