2020
DOI: 10.48550/arxiv.2010.03671
Preprint
Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems

Cited by 4 publications (3 citation statements)
References 0 publications
“…Newaz et al. [61] investigated adversarial attacks on machine learning-based smart healthcare systems monitoring 10 vital signs, e.g., EEG, ECG, SpO2, respiration, blood pressure, blood glucose, and blood hemoglobin. They performed both targeted and non-targeted attacks, and both poisoning and evasion attacks.…”
Section: B. Adversarial Attacks In Health Informatics
Confidence: 99%
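The targeted/evasion distinction quoted above can be illustrated with a minimal, purely hypothetical sketch: a linear "anomaly" classifier over a few vital-sign features, and a perturbation that follows the (hand-computed) gradient of the linear score to slip a flagged reading past detection. The model weights, feature meanings, and epsilon below are invented for illustration and are not taken from the cited paper.

```python
# Minimal sketch of an evasion attack on a linear "health status" classifier.
# All numbers and feature names are illustrative, not from the cited paper.

def score(w, b, x):
    """Linear decision score: positive => reading flagged as anomalous."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def evade(w, x, eps):
    """Shift each sensor reading by eps against the sign of its weight
    (the gradient of a linear score), lowering the anomaly score."""
    return [xi - eps * (1 if wi > 0 else -1 if wi < 0 else 0)
            for wi, xi in zip(w, x)]

# Hypothetical model over three vital-sign-derived features.
w, b = [0.8, -0.5, 0.3], -1.0
x = [2.0, 1.0, 1.5]              # tampered reading, initially flagged

x_adv = evade(w, x, eps=0.4)
print(score(w, b, x) > 0)        # True: original reading is detected
print(score(w, b, x_adv) > 0)    # False: perturbed reading evades detection
```

For a nonlinear model the same idea applies with an actual gradient (as in FGSM-style attacks); the linear case just makes the gradient computable by hand.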
“…In addition, by leveraging a man-in-the-middle (MitM) attack on the open communication channel of a BIoT-based model, an attacker can sniff and tamper with sensor measurements. Newaz et al. demonstrated that an adversarial machine learning-based attack generation technique can be adopted to compromise sensor measurements without alarming the system [19]. As the sensor measurements of the BIoT system can be altered by adversarial intent or sensor faults, the controller undertakes several security measures.…”
Section: E. BIoT HVAC Control Attack Model
Confidence: 99%
“…However, it is highly susceptible to data poisoning attacks [115], [116], which in some settings achieve a high degree of stealth, making them hard to identify [117], [118]. In a highly sensitive field of study, an experiment on around 17,000 records of healthy and unhealthy (disease-infected) people showed that a poisoning attack on the training data dropped the classifier's accuracy by about 28% of its original value after poisoning 30% of the data [104]. This could have severe consequences, for example, on dosage or treatment management.…”
Section: Food Safety
Confidence: 99%
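The mechanics of a label-flipping poisoning attack like the one quoted above can be sketched minimally: corrupt the labels of 30% of a training set and watch a simple classifier's accuracy fall. The 1-nearest-neighbour model, the synthetic healthy/unhealthy readings, and the exact numbers below are invented for illustration; only the 30% poisoning rate loosely mirrors the experiment described in [104].

```python
# Minimal sketch of a label-flipping poisoning attack on a 1-D
# 1-nearest-neighbour classifier. All data are synthetic.

def nn_predict(train_set, x):
    """1-nearest-neighbour prediction: label of the closest training point."""
    return min(train_set, key=lambda p: abs(p[0] - x))[1]

def accuracy(train_set, test_set):
    hits = sum(nn_predict(train_set, x) == y for x, y in test_set)
    return hits / len(test_set)

# Synthetic "healthy" (0) vs "unhealthy" (1) readings, well separated.
clean = [(i * 0.1, 0) for i in range(100)] + \
        [(10 + i * 0.1, 1) for i in range(100)]

# Poisoning: flip the labels of 30% of the training set.
poisoned = [(x, 1 - y) if i % 10 < 3 else (x, y)
            for i, (x, y) in enumerate(clean)]

print(accuracy(clean, clean))     # 1.0
print(accuracy(poisoned, clean))  # 0.7 -- drop equals the poisoning rate here
```

With 1-NN the accuracy drop on the original points equals the flip rate exactly, which makes the cause-and-effect of poisoning easy to see; real classifiers, as in the quoted experiment, degrade less predictably.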