2020 IEEE International Symposium on Medical Measurements and Applications (MeMeA)
DOI: 10.1109/memea49120.2020.9137134

An Accurate EEGNet-based Motor-Imagery Brain–Computer Interface for Low-Power Edge Computing

Abstract: This paper presents an accurate and robust embedded motor-imagery brain-computer interface (MI-BCI). The proposed novel model, based on EEGNet [1], matches the requirements of memory footprint and computational resources of low-power microcontroller units (MCUs), such as the ARM Cortex-M family. Furthermore, the paper presents a set of methods, including temporal downsampling, channel selection, and narrowing of the classification window, to further scale down the model to relax memory requirements with neglig…
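A minimal sketch of the three scaling-down steps named in the abstract (temporal downsampling, channel selection, and narrowing of the classification window), assuming a generic (channels × samples) EEG trial. The sampling rate, downsampling factor, channel subset, and window length below are illustrative placeholders, not the values used in the paper.

```python
import numpy as np
from scipy.signal import decimate

def scale_down_trial(eeg, fs=160, ds_factor=2, keep_channels=None, window_s=2.0):
    """Reduce an EEG trial of shape (channels, samples) before classification.

    fs, ds_factor, keep_channels, and window_s are illustrative defaults,
    not the settings reported in the paper.
    """
    if keep_channels is not None:
        eeg = eeg[list(keep_channels), :]      # channel selection
    eeg = decimate(eeg, ds_factor, axis=1)     # temporal downsampling
    fs_ds = fs // ds_factor
    n_samples = int(window_s * fs_ds)
    return eeg[:, :n_samples]                  # narrowed classification window

# Example: a 64-channel, 3 s trial at 160 Hz shrinks to 8 channels, 2 s at 80 Hz.
trial = np.random.randn(64, 3 * 160)
small = scale_down_trial(trial, keep_channels=range(8))
print(small.shape)  # (8, 160)
```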

Cited by 80 publications (68 citation statements)
References 24 publications

Citation statements (ordered by relevance):
“…Interestingly, variable EEG-TCNET requires, in general, a smaller number of temporal filters F1 and filter size KE than EEGNet. The temporal filters pose the most restrictive limitations in terms of computational complexity and memory footprint since the temporal convolution requires the vast majority of MACs and memory to store the resulting feature maps [19]. Therefore, variable EEG-TCNET has more potential to be embedded on a resource-limited device.…”
Section: B. BCI Competition IV-2a (mentioning)
confidence: 99%
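The argument in this statement is that the temporal convolution dominates both MACs and activation memory. A back-of-the-envelope estimate along those lines, with illustrative EEGNet-style hyperparameters (F1 temporal filters of length KE applied over C channels and T samples) rather than the exact values of any cited model:

```python
def temporal_conv_cost(C=22, T=1125, F1=8, K_E=64, bytes_per_value=1):
    """Rough MAC count and feature-map size of EEGNet's first (temporal) convolution.

    C: EEG channels, T: time samples (e.g. a 4.5 s BCI Competition IV-2a window
    at 250 Hz); F1, K_E: illustrative EEGNet-style defaults, not values from the paper.
    """
    macs = F1 * K_E * C * T        # one MAC per kernel tap per output sample ('same' padding)
    feature_map_bytes = F1 * C * T * bytes_per_value
    return macs, feature_map_bytes

macs, fmap = temporal_conv_cost()
print(f"temporal conv: {macs / 1e6:.1f} MMACs, {fmap / 1024:.0f} kB feature map")
# -> about 12.7 MMACs and ~193 kB of int8 activations, typically far more than
#    the later depthwise/separable layers contribute.
```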
“…TPCT's overall memory footprint is 8.304 MB, and thus far beyond the on-chip memory capacity available in an ARM M7 processor. Its compute effort of 1.73 GMACs would take approximately 50 s/inference (17× below real-time when requiring a new classification at least every 3 s), where we refer to the throughput of 34.45 MMAC/s of an ARM M7 processor [19]. In comparison, the proposed EEG-TCNet has a memory footprint of 400 kB, and its compute effort of 6.8 MMACs would take approximately 197 ms. EEG-TCNET and variable EEG-TCNET are the best candidates for an embedded implementation; both parameter count and inference cost are kept reasonable while still achieving very high accuracy scores.…”
Section: B. BCI Competition IV-2a (mentioning)
confidence: 99%
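The timing figures in this statement follow from dividing each model's compute effort by the assumed ARM Cortex-M7 throughput of 34.45 MMAC/s; a quick arithmetic check:

```python
THROUGHPUT_MAC_PER_S = 34.45e6  # ARM Cortex-M7 throughput quoted from [19]

def inference_time_s(macs):
    """Inference latency estimate: total MACs divided by sustained MAC throughput."""
    return macs / THROUGHPUT_MAC_PER_S

print(f"TPCT      (1.73 GMACs): {inference_time_s(1.73e9):.1f} s")        # ~50.2 s
print(f"EEG-TCNet (6.8 MMACs):  {inference_time_s(6.8e6) * 1e3:.0f} ms")  # ~197 ms
```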
“…In [14], EEGNET was applied to the Physionet Motor Movement/Imagery Dataset, achieving SoA accuracy. The model was quantized and ported to an ARM Cortex-M7 using CUBE.AI, i.e., the X-CUBE-AI expansion package of STM32CubeMX.…”
Section: Related Work (mentioning)
confidence: 99%
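For illustration only: a generic post-training int8 quantization sketch with TensorFlow Lite, whose output X-CUBE-AI can import into an STM32 project. This is not the authors' exact CUBE.AI flow; the model handle and calibration data below are placeholders.

```python
import numpy as np
import tensorflow as tf

def quantize_for_mcu(keras_model, calib_data):
    """Post-training int8 quantization of a Keras model (e.g. an EEGNet variant)."""
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    def representative_dataset():
        # A few calibration trials to estimate activation ranges.
        for sample in calib_data[:100]:
            yield [sample[np.newaxis].astype(np.float32)]

    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()

# Hypothetical usage; 'eegnet_model' and 'X_calib' are placeholders:
# open("eegnet_int8.tflite", "wb").write(quantize_for_mcu(eegnet_model, X_calib))
# The resulting .tflite file can then be imported into STM32CubeMX via X-CUBE-AI.
```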
“…Nevertheless, both Cortex-M and RISC-V based MCU platforms are tightly constrained both in memory and compute resources, which forced other embedded solutions to tailor and scale down EEGNET for the target system resulting in lower classification accuracy [14]. To address this challenge, we present Q-EEGNET, an adapted and quantized EEGNET [7] with algorithmic and implementation optimizations to execute BMI inference on resource-limited edge devices.…”
Section: Introduction (mentioning)
confidence: 99%