2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
DOI: 10.1109/embc.2018.8512480
Automatic Sleep Stage Classification Using Single-Channel EEG: Learning Sequential Features with Attention-Based Recurrent Neural Networks

Abstract: The version in the Kent Academic Repository may differ from the final published version. Users are advised to check http://kar.kent.ac.uk for the status of the paper. Users should always cite the published version of record.

Cited by 112 publications (96 citation statements)
References 13 publications
“…Here, we employ a bidirectional RNN coupled with the attention mechanism [48], [49] to learn sequential features for epoch representation. Due to the RNN's sequential modelling capability, it is expected to capture temporal dynamics of input signals to produce good features [24]. For convenience, we interpret the image X after the filterbank layers as a sequence of T feature vectors X ≡ (…”
Section: Short-term Sequential Modelling
confidence: 99%
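The quoted passage describes attention-weighted pooling of bidirectional RNN hidden states into a single epoch-level feature vector. A minimal NumPy sketch of that pooling step is below; the shapes, parameter names, and random stand-in for the RNN outputs are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d = 29, 64                          # T feature vectors per epoch, hidden size d per direction
H = rng.standard_normal((T, 2 * d))    # stand-in for bidirectional RNN outputs (forward + backward)

# Trainable attention parameters (random here, only to show the shapes)
W = rng.standard_normal((2 * d, 2 * d)) * 0.01
b = np.zeros(2 * d)
u = rng.standard_normal(2 * d) * 0.01

def attention_pool(H, W, b, u):
    """Score each of the T hidden states, softmax the scores,
    and return the weighted sum as one epoch representation."""
    e = np.tanh(H @ W + b) @ u         # (T,) unnormalised attention scores
    a = np.exp(e - e.max())            # subtract max for numerical stability
    a /= a.sum()                       # softmax attention weights, sum to 1
    return a @ H, a                    # (2d,) epoch feature, (T,) weights

z, a = attention_pool(H, W, b, u)
```

The attention weights let the model emphasise the sub-epoch segments most indicative of the sleep stage, rather than averaging all T vectors uniformly.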
“…To this end, prior works can be grouped into one-to-one, many-to-one, and one-to-many schemes, as illustrated in Figure 1 (a)-(c), respectively. Following the one-to-one scheme, a classification model receives a single PSG epoch as input at a time and produces a single corresponding output label [14], [15], [24], [26]. Although straightforward, this classification scheme cannot take into account the existing dependency between PSG epochs [4], [8]. (Figure 1: Illustration of the classification schemes used for automatic sleep staging.)…”
Section: Introduction
confidence: 99%
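The many-to-one scheme contrasted above feeds a window of consecutive PSG epochs to the model and predicts the label of one of them. A small NumPy sketch of building such overlapping epoch sequences follows; the window length `L` and the toy feature array are assumptions for illustration only.

```python
import numpy as np

def make_sequences(epochs, L):
    """Many-to-one framing: pair each target epoch with its L-1 predecessors.

    epochs: (N, F) array of per-epoch feature vectors.
    Returns an (N - L + 1, L, F) array of overlapping sequences; the label
    of the last epoch in each sequence would serve as the training target.
    """
    N, F = epochs.shape
    # Index matrix: row i selects epochs i .. i+L-1
    idx = np.arange(L)[None, :] + np.arange(N - L + 1)[:, None]
    return epochs[idx]

epochs = np.arange(10 * 3).reshape(10, 3).astype(float)  # 10 toy epochs, 3 features each
seqs = make_sequences(epochs, L=4)
```

This windowing is what allows a sequence model to exploit the inter-epoch dependencies that one-to-one classifiers ignore.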
“…finetuning entire network).
[12]: No, 81.9, 0.740, 73.8, 73.9, 95.0
1-max CNN [17]: No, 79.8, 0.720, 72.0, −, −
Attentional RNN [23]: No, 79.1, 0.700, 69.8, −, −
Deep auto-encoder [14]: No, 78.9, −, 73.3, −, −
Deep CNN [13]: No, (remaining values truncated)
The finetuning results also unveil that finetuning the softmax layer alone is not sufficient to overcome the channel-mismatch obstacle. Instead, it is important to additionally finetune the feature-learning layers: either the ARNN subnetwork for epoch-level feature learning, the SeqRNN for sequence-level feature learning, or both collectively.…”
Section: Network Parameters
confidence: 99%
“…We also compared DeepSleep with recent state-of-the-art methods in sleep stage scoring. These methods extracted features from 30-second epochs through short-time Fourier transform (STFT) 27,28 or Thomson’s multitaper 25,29 . They were originally designed for automatic sleep staging and we applied them to the task of detecting sleep arousals on the same 2018 PhysioNet data.…”
Section: Results
confidence: 99%
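The cited methods extract time-frequency features from 30-second epochs via the short-time Fourier transform. A minimal SciPy sketch of that step is given below; the sampling rate, window length, and overlap are common illustrative choices, not necessarily the exact settings of the cited papers.

```python
import numpy as np
from scipy.signal import stft

fs = 100                                                   # sampling rate in Hz (datasets vary)
epoch = np.random.default_rng(0).standard_normal(30 * fs)  # one 30-s single-channel EEG epoch

# 2-s Hamming windows with 50% overlap produce a time-frequency image per epoch
f, t, Z = stft(epoch, fs=fs, window='hamming', nperseg=2 * fs, noverlap=fs)

# Log-magnitude spectrogram, a typical input representation for sleep staging models
log_spec = np.log(np.abs(Z) + 1e-8)
```

Each epoch thus becomes an image of frequency bins by time frames, which downstream models (filterbank layers, RNNs, CNNs) consume instead of the raw waveform.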