2022
DOI: 10.1016/j.heliyon.2022.e11209
Defending against adversarial attacks on Covid-19 classifier: A denoiser-based approach

Cited by 10 publications (3 citation statements). References 16 publications.
“…In general, adversarial training improves deep neural networks' (DNNs') intrinsic robustness without adding any extraneous components, while preserving their ability to make accurate inferences on valid data. Other protection strategies instead focus on pre-processing the data (both clean and adversarial instances) without modifying the downstream computer-aided analysis networks, rather than strengthening the network's intrinsic resistance to adversarial perturbations [20,46]. In simple terms, such pre-processing aims to leave clean inputs essentially unchanged while converting adversarial samples into benign equivalents for subsequent inference.…”
Section: Medical Image Analysis
confidence: 99%
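The pre-processing defense this statement describes can be made concrete with a short sketch. The PyTorch fragment below is a minimal illustration, not the method of [20] or [46]: the `Denoiser` architecture, the residual (noise-subtracting) formulation, and the `PurifiedClassifier` wrapper are all assumptions chosen for brevity.

```python
# Minimal sketch of an input-purification defense (illustrative only):
# a small denoiser runs in front of a frozen, already-trained
# classifier, so the downstream analysis network is left untouched.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Tiny convolutional denoiser (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Residual formulation: predict the perturbation and subtract
        # it, so clean inputs pass through nearly unchanged.
        return x - self.net(x)

class PurifiedClassifier(nn.Module):
    """Denoiser + frozen downstream classifier, applied at inference."""
    def __init__(self, denoiser, classifier):
        super().__init__()
        self.denoiser = denoiser
        self.classifier = classifier
        for p in self.classifier.parameters():
            p.requires_grad = False  # the analysis network stays fixed

    def forward(self, x):
        return self.classifier(self.denoiser(x))
```

At inference, `PurifiedClassifier(denoiser, classifier)(x)` is a drop-in replacement for `classifier(x)`, which is what lets this family of defenses avoid touching the computer-aided analysis network itself.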
“…Furthermore, a comprehensive metric is proposed to determine the confidence level of a MedRDF diagnosis, aiding healthcare professionals in their clinical practice. Kansal et al. [46] extended the High-level representation Guided Denoiser (HGD) [118] to defend medical imaging applications against adversarial examples in both white-box and black-box scenarios, rather than focusing solely on pixel-level denoising. Incorporating high-level information improves removal of the adversarial effect at the image level, leading to a more accurate final diagnosis without visible disruption.…”
Section: Image Level Preprocessing
confidence: 99%
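The key idea in HGD, as described here, is to train the denoiser against a loss defined on high-level representations rather than on pixels. The sketch below loosely follows that formulation and is not taken from Kansal et al. [46]; `feature_extractor` (a fixed slice of the target model, e.g. up to its logits) and the L1 distance are assumptions.

```python
# Sketch of an HGD-style training objective: the denoised adversarial
# image should match the *clean* image in a high-level feature space,
# not in pixel space.
import torch
import torch.nn.functional as F

def hgd_loss(denoiser, feature_extractor, x_clean, x_adv):
    """L1 distance between high-level features of the denoised
    adversarial input and the clean input. `feature_extractor` is a
    frozen part of the target classifier; gradients still flow
    through it into the denoiser."""
    with torch.no_grad():
        target_feat = feature_extractor(x_clean)
    denoised = denoiser(x_adv)
    return F.l1_loss(feature_extractor(denoised), target_feat)
```

Because the supervision lives in feature space, residual pixel noise that does not alter the high-level representation is tolerated, which is how the adversarial effect can be removed without requiring a visually perfect reconstruction.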
“…While several researchers have focused on image-based adversarial examples (AEs), audio- and speech-based AEs are gaining popularity. In [11], the adversarial robustness of such Covid-19 classifiers is evaluated under common adversarial attacks, such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD).…”
Section: Introduction
confidence: 99%
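Since this statement names FGSM and PGD, textbook sketches of both attacks may be useful; these are the standard formulations, not the exact attack configurations used in [11]. The `eps` (L-infinity budget), `alpha` (step size), `steps` count, and the [0, 1] pixel range are assumptions.

```python
# Standard FGSM and PGD under an L-infinity constraint: both perturb
# the input to maximize the classification loss.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step attack: x_adv = clip(x + eps * sign(grad_x loss))."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps, alpha, steps):
    """Iterative FGSM with projection back onto the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()   # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)    # project
            x_adv = x_adv.clamp(0, 1).detach()          # valid pixels
    return x_adv
```

PGD is essentially FGSM iterated with a projection step, which is why the two are commonly evaluated together when probing a classifier's robustness.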