2017 IEEE Signal Processing in Medicine and Biology Symposium (SPMB)
DOI: 10.1109/spmb.2017.8257015

Deep learning with convolutional neural networks for decoding and visualization of EEG pathology

Abstract: We apply convolutional neural networks (ConvNets) to the task of distinguishing pathological from normal EEG recordings in the Temple University Hospital EEG Abnormal Corpus. We use two basic, shallow and deep ConvNet architectures recently shown to decode task-related information from EEG at least as well as established algorithms designed for this purpose. In decoding EEG pathology, both ConvNets reached substantially better accuracies (about 6% better, ≈85% vs. ≈79%) than the only published result …
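For a concrete picture of the approach, below is a minimal PyTorch sketch of a shallow ConvNet EEG classifier in the spirit of the architectures the abstract describes (a temporal convolution, a spatial convolution across channels, and squaring/log-pooling non-linearities). All layer sizes, channel counts, and window lengths here are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of a shallow ConvNet for pathological-vs-normal EEG
# classification; hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class ShallowEEGNet(nn.Module):
    def __init__(self, n_channels=21, n_classes=2):
        super().__init__()
        # Temporal convolution per channel, then a spatial convolution
        # that mixes all channels.
        self.temporal = nn.Conv2d(1, 40, kernel_size=(1, 25))
        self.spatial = nn.Conv2d(40, 40, kernel_size=(n_channels, 1), bias=False)
        self.bn = nn.BatchNorm2d(40)
        self.pool = nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15))
        self.classify = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_classes))

    def forward(self, x):            # x: (batch, 1, n_channels, n_times)
        x = self.spatial(self.temporal(x))
        x = self.bn(x)
        x = torch.square(x)          # squaring non-linearity
        x = torch.log(self.pool(x).clamp(min=1e-6))  # log-variance-like feature
        return self.classify(x)

model = ShallowEEGNet()
logits = model(torch.randn(8, 1, 21, 600))  # 8 trials, 21 channels, 600 samples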

Cited by 145 publications (78 citation statements)
References 31 publications
“…The overall performance level is remarkable when considering the simplicity of the model. Our results demonstrate that a Riemannian model can actually be used to perform end-to-end learning (Schirrmeister et al, 2017) involving nothing but signal filtering and covariance estimation and, importantly, without deep-learning (Roy et al, 2019). When using SSS, performance improves beyond the current benchmark set by the MNE model but probably not because of denoising but rather due to the addition of gradiometer information.…”
Section: Results (mentioning)
confidence: 87%
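The pipeline this quote describes, signal filtering, covariance estimation, and a linear readout with no deep learning, can be sketched with the pyriemann and scikit-learn APIs roughly as follows; the covariance estimator, data dimensions, and classifier settings are illustrative assumptions, not the cited authors' configuration.

# Minimal Riemannian pipeline sketch: per-trial covariances, tangent-space
# projection, and a linear classifier. Assumes pre-filtered EEG/MEG trials.
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X = np.random.randn(100, 21, 600)   # (trials, channels, samples), pre-filtered
y = np.random.randint(0, 2, 100)    # normal vs. pathological labels

clf = make_pipeline(
    Covariances(estimator="oas"),    # shrinkage covariance per trial
    TangentSpace(metric="riemann"),  # project SPD matrices to a vector space
    LogisticRegression(max_iter=1000),
)
clf.fit(X, y)
print(clf.predict(X[:5]))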
“…Results reported to date on this dataset are included for comparison. In [16], the author explored various machine- and deep-learning algorithms and observed that best performance is obtained when frequency features extracted from the input time-series signal are fed into a convolutional neural network. [Table 2 caption: Performance comparison of the four deep recurrent neural networks described in Section 3 and results reported in [16] (see CNN-MLP) and [19] (see DeepCNN).]…”
Section: Results (mentioning)
confidence: 99%
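A hedged sketch of the frequency-feature approach this statement attributes to [16], spectral features computed from each EEG window and fed to a small convolutional network, might look as follows; the sampling rate, window length, and layer sizes are assumptions for illustration, not the configuration used in [16].

# Sketch: log power spectrum per channel, then a small 1D CNN classifier.
import numpy as np
import torch
import torch.nn as nn

def log_power_spectrum(x):
    """x: (channels, samples) -> (channels, freq_bins) log power."""
    spec = np.abs(np.fft.rfft(x, axis=-1)) ** 2
    return np.log(spec + 1e-10)

cnn = nn.Sequential(                  # small CNN over (channels, freq_bins)
    nn.Conv1d(21, 32, kernel_size=5), nn.ReLU(),
    nn.AdaptiveAvgPool1d(16), nn.Flatten(),
    nn.Linear(32 * 16, 2),            # normal vs. pathological
)

window = np.random.randn(21, 600)     # one EEG window, 21 channels
feats = torch.tensor(log_power_spectrum(window), dtype=torch.float32)
logits = cnn(feats.unsqueeze(0))      # add batch dimension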
“…Inspired by successes in time-domain signal classification, we explore recurrent neural network (RNN) architectures using the raw EEG time-series signal as input. This sets us apart from previous publications [15,16,19], in which the authors used both traditional machine learning algorithms such as k-nearest neighbour, random forests, and hidden Markov models and modern deep learning techniques such as convolutional neural networks (CNNs), but did not use RNNs for this task.…”
Section: Introduction (mentioning)
confidence: 99%
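A minimal sketch of the RNN approach described above, an LSTM reading the raw multi-channel EEG time series directly and classifying the recording, could look like this; the hidden size and single-layer design are illustrative assumptions, not the cited architecture.

# Sketch: LSTM over raw EEG; the final hidden state feeds a linear classifier.
import torch
import torch.nn as nn

class EEGRecurrentNet(nn.Module):
    def __init__(self, n_channels=21, hidden=64, n_classes=2):
        super().__init__()
        self.rnn = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, channels)
        _, (h_n, _) = self.rnn(x)    # final hidden state summarizes the signal
        return self.head(h_n[-1])

model = EEGRecurrentNet()
logits = model(torch.randn(8, 600, 21))  # 8 recordings, 600 samples, 21 channels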
“…[Figure residue; recoverable axis labels: “Number of examples”, “Ratio (examples/min)”] …datasets with a much greater number of subjects: [132,160,188,149] all used datasets with at least 250 subjects, while [22] and [49] used datasets with 10,000 and 16,000 subjects, respectively. As explained in Section 3.7.4, the untapped potential of DL-EEG might reside in combining data coming from many different subjects and/or datasets to train a model that captures common underlying features and generalizes better.…”
Section: Subjects (mentioning)
confidence: 99%
“…Occlusion sensitivity techniques [92,26,175] use a similar idea, by which the decisions of the network when different parts of the input are occluded are analyzed. [Flattened table of inspection techniques and references: unlabeled row [135,211,86,34,87,200,182,122,170,228,164,109,204,85,25]; Analysis of activations [212,194,87,83,208,167,154,109]; Input-perturbation network-prediction correlation maps [149,191,67,16,150]; Generating input to maximize activation [188,144,160,15]; Occlusion of input [92,26,175].] Several studies used backpropagation-based techniques to generate input maps that maximize activations of specific units [188,144,160,15]. These maps can then be used to infer the role of specific neurons, or the kind of input they are sensitive to.…”
Section: Inspection of Trained Models (mentioning)
confidence: 99%
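The occlusion-sensitivity idea in this statement, sliding a zeroed-out window over the input and recording how the model's confidence in the predicted class drops, can be sketched generically as below; the window length, stride, and zero-fill occlusion value are illustrative assumptions, and occlusion_sensitivity is a hypothetical helper, not an API from the cited works.

# Sketch: occlusion sensitivity over the time axis of an EEG input.
import torch

def occlusion_sensitivity(model, x, target, win=50, stride=25):
    """x: (1, 1, channels, time); returns (start, confidence drop) pairs."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(x), dim=1)[0, target].item()
        drops = []
        for start in range(0, x.shape[-1] - win + 1, stride):
            occluded = x.clone()
            occluded[..., start:start + win] = 0.0  # occlude one time window
            p = torch.softmax(model(occluded), dim=1)[0, target].item()
            drops.append((start, base - p))  # large drop -> important region
    return drops

A large confidence drop for a given window flags the time region the model relies on most; the same loop works with any of the classifier sketches above.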