2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8851936

Adversarial Attacks on Deep Neural Networks for Time Series Classification

Abstract: Time Series Classification (TSC) problems are encountered in many real life data mining tasks ranging from medicine and security to human activity recognition and food safety. With the recent success of deep neural networks in various domains such as computer vision and natural language processing, researchers started adopting these techniques for solving time series data mining problems. However, to the best of our knowledge, no previous work has considered the vulnerability of deep learning models to adversa…
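The attacks the paper studies are gradient-based. As an illustration of the core idea, here is a minimal NumPy sketch of the Fast Gradient Sign Method (FGSM) applied to a toy logistic classifier over a univariate time series; the model, weights, and eps value are illustrative choices, not taken from the paper:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """Fast Gradient Sign Method: shift each time step by eps in the
    direction of the sign of the loss gradient."""
    return x + eps * np.sign(grad)

# Toy differentiable classifier: logistic regression over a length-50 series.
rng = np.random.default_rng(0)
w = rng.normal(size=50)              # one weight per time step
x = rng.normal(size=50)              # a univariate time series
y = 1.0                              # true label

# Gradient of the cross-entropy loss w.r.t. the input x.
p = 1.0 / (1.0 + np.exp(-w @ x))     # predicted P(class 1)
grad_x = (p - y) * w                 # dL/dx for the logistic loss

x_adv = fgsm_perturb(x, grad_x, eps=0.1)
p_adv = 1.0 / (1.0 + np.exp(-w @ x_adv))

# The perturbation is bounded by eps at every time step, and for this
# linear model it provably lowers the true-class probability.
assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12
assert p_adv < p
```

The single-step sign update is what keeps the perturbation imperceptibly small per time step while still increasing the loss, which is why FGSM-style attacks transfer so readily to time series inputs.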


Cited by 73 publications (38 citation statements). References 29 publications.
“…Hence, observing the gaps in the works already done gives a fair idea as to what improvements can be done. [13] III. OBJECTIVES AND PROPOSED METHODOLOGY Adversarial machine learning can be used to implement techniques that are cyber attacks by nature, which can be categorised based on the resources available to the attacker: 1.…”
Section: Literature Survey (mentioning)
confidence: 99%
“…This highlights that there are only a few works that evaluate the robustness of models by providing a reproducible model of their (natural) perturbations or an (artificially) perturbed dataset that is publicly available. Only 3 studies (i.e., [29], [32], [40]) out of 14 (artificially) generated perturbations to modify the datasets and then performed experiments on the models. The other listed studies do not provide a reproducible fault model, or they use datasets known to contain some perturbations (noise or missing data) without identifying the characteristics of these perturbations (making them difficult to reproduce on other datasets for comparison purposes).…”
Section: Background and Related Work (mentioning)
confidence: 99%
“…In addition, sequential data mining tasks such as natural language processing and speech recognition are being tackled with deep convolutional, recurrent and generative adversarial neural networks [31], [32]. Inspired by this recent success of deep learning models, researchers started adopting these complex machine learning techniques to solve the underlying task of Time Series Classification [19], [33]. Specifically, Wang et al. [18] showed very promising results, where a Fully Convolutional Network (FCN) and a Residual Network (ResNet) were designed to reach COTE's performance when evaluated on 44 datasets from the UCR/UEA archive [7], [20].…”
Section: A Neural Network for Time Series Classification (mentioning)
confidence: 99%
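The FCN cited above stacks convolutional blocks and replaces final dense layers with global average pooling. A minimal NumPy forward-pass sketch of that structure follows; note that batch normalization is omitted, the filter counts are scaled down from Wang et al.'s 128/256/128 (the kernel sizes 8/5/3 are kept), and all weights here are random placeholders rather than trained parameters:

```python
import numpy as np

def conv1d(x, kernels):
    """'Same'-padded 1-D convolution. x is (channels, length);
    kernels is (out_ch, in_ch, k). Returns (out_ch, length)."""
    c_in, n = x.shape
    out_ch, _, k = kernels.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, k - 1 - pad)))
    out = np.zeros((out_ch, n))
    for o in range(out_ch):
        for i in range(c_in):
            for t in range(n):
                out[o, t] += xp[i, t:t + k] @ kernels[o, i]
    return out

def fcn_forward(x, params):
    """FCN forward pass: three conv+ReLU blocks, then global
    average pooling and a linear softmax classifier."""
    h = x
    for kern in params["convs"]:
        h = np.maximum(conv1d(h, kern), 0.0)   # conv block + ReLU
    gap = h.mean(axis=1)                        # global average pooling
    logits = params["w"] @ gap + params["b"]
    e = np.exp(logits - logits.max())
    return e / e.sum()                          # softmax probabilities

rng = np.random.default_rng(0)
params = {
    # Kernel sizes 8/5/3 follow Wang et al.; filter counts are reduced
    # from 128/256/128 to 8/16/8 to keep this toy example fast.
    "convs": [rng.normal(size=(8, 1, 8)) * 0.1,
              rng.normal(size=(16, 8, 5)) * 0.1,
              rng.normal(size=(8, 16, 3)) * 0.1],
    "w": rng.normal(size=(3, 8)) * 0.1,   # 3 output classes
    "b": np.zeros(3),
}
x = rng.normal(size=(1, 64))              # univariate series, length 64
probs = fcn_forward(x, params)
assert probs.shape == (3,) and abs(probs.sum() - 1.0) < 1e-9
```

Because global average pooling collapses the time axis, the same parameters accept series of any length, which is one reason this architecture became a common baseline on the UCR/UEA archive.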