2023
DOI: 10.1088/1741-2552/acbfdf
Multimodal motor imagery decoding method based on temporal spatial feature alignment and fusion

Abstract: Objective: A motor imagery-based brain-computer interface (MI-BCI) translates spontaneous movement intention from the brain to external devices. MI-BCI systems based on a single modality have been widely researched in recent decades. Recently, with the development of neuroimaging methods, multimodal MI-BCI studies that use multiple neural signals have been proposed, and these are promising for enhancing the decoding accuracy of MI-BCI. Multimodal MI data contain rich common and complementary information. Effect…

Cited by 3 publications (1 citation statement)
References: 64 publications
“…Roy et al [17] proposed an efficient multiscale CNN (MS-CNN) with a classification accuracy of 93.74% on the BCI Competition IV-2a dataset. Zhang et al [18] designed a five-class motor imagery task involving imagining actions for the left hand, right hand, both hands, both feet, and rest. They concurrently collected EEG and fNIRS data.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%