Electrocardiograms (ECGs) aid the diagnosis of cardiovascular diseases (CVDs) such as arrhythmia, and have progressively improved the prospects for automated diagnosis systems in modern healthcare. Recent years have seen promising applications of deep neural networks (DNNs) to ECG analysis, even outperforming cardiology experts at identifying certain rhythm irregularities. However, DNNs have been shown to be susceptible to adversarial attacks, which intentionally compromise a model by adding perturbations to its inputs. DNN-based ECG classifiers are no exception, and prior work generates these adversarial attacks in a white-box setting, where the model's details are exposed to the attacker. The black-box condition, in which the classification model's architecture and parameters are unknown to the attacker, remains largely unexplored. We therefore aim to fool ECG classifiers in the black-box, hard-label setting, where, given an input, only the final predicted category is visible to the attacker. Our attack on the DNN classification model for the PhysioNet Computing in Cardiology Challenge 2017 [12] database produced ECG data mostly indistinguishable from that of a white-box adversarial attack on the same database. Our results demonstrate that adversarial ECG inputs can be generated effectively in this black-box setting, raising significant concerns about deploying DNN-based ECG classifiers in security-critical systems.
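To make the hard-label threat model concrete, the following is a minimal sketch of a decision-based attack loop, not the paper's actual method. It assumes a toy hard-label classifier (`predict`, a hidden threshold on mean amplitude, standing in for an ECG model) and uses the standard boundary-search idea: find any input with a flipped label, then binary-search along the line back toward the original signal to shrink the perturbation, querying only the predicted label at each step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hard-label "ECG classifier" (hypothetical): returns only a class index
# (0 = normal, 1 = arrhythmia), decided by a hidden threshold on mean
# amplitude. The attacker observes nothing but this label.
def predict(x):
    return int(np.mean(x) > 0.5)

def hard_label_attack(x_orig, predict, steps=50):
    """Decision-based attack sketch: flip the predicted label while
    keeping the perturbed signal close to the original."""
    y_orig = predict(x_orig)

    # 1) Find any starting point with a different label, escalating the
    #    noise scale until the label flips (random search).
    scale = 0.5
    x_adv = x_orig + rng.normal(scale=scale, size=x_orig.shape)
    while predict(x_adv) == y_orig:
        scale *= 2.0
        x_adv = x_orig + rng.normal(scale=scale, size=x_orig.shape)

    # 2) Binary-search along the segment from x_orig to x_adv to shrink
    #    the perturbation while keeping the flipped label.
    lo, hi = 0.0, 1.0  # fraction of the way toward x_adv
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        x_mid = x_orig + mid * (x_adv - x_orig)
        if predict(x_mid) == y_orig:
            lo = mid  # not adversarial yet; move outward
        else:
            hi = mid  # still adversarial; move inward
    return x_orig + hi * (x_adv - x_orig)

x = rng.normal(loc=0.2, scale=0.1, size=256)  # stand-in for an ECG segment
x_adv = hard_label_attack(x, predict)
print(predict(x), predict(x_adv), np.linalg.norm(x_adv - x))
```

Full decision-based attacks (e.g., Boundary Attack or HopSkipJump) iterate this boundary search with random walks along the decision surface; the sketch above shows only the core label-query loop.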