2021
DOI: 10.48550/arxiv.2101.12037
Preprint

BENDR: using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data

Abstract: Deep neural networks (DNNs) used for brain-computer-interface (BCI) classification are commonly expected to learn general features when trained across a variety of contexts, such that these features could be fine-tuned to specific contexts. While some success is found in such an approach, we suggest that this interpretation is limited and an alternative would better leverage the newly (publicly) available massive EEG datasets. We consider how to adapt techniques and architectures used for language modelling (L…

Cited by 3 publications (8 citation statements)
References 36 publications
“…Semi-supervised learning methods use both labeled and unlabeled data, usually to learn the structure of the training data to become able to generate more (artificial) training points, which are used for conventional supervised learning in a second learning phase. Self-supervised learning (Jing and Tian 2019) is a similar approach that is used to learn the relevant structure in EEG data by first learning an unsupervised pretext task, after which the model is further trained on the target task with labeled data (Banville et al 2020, Kostas et al 2021). The remainder of this review will focus on supervised learning methods.…”
Section: Machine Learning
confidence: 99%
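The pretrain-then-fine-tune recipe described in that excerpt can be sketched in a few lines. The sketch below is illustrative only: it assumes a hypothetical 1D-convolutional EEGEncoder over raw EEG windows and a masked-patch reconstruction pretext task, written in PyTorch, and is not the specific method of any of the cited papers.

import torch.nn as nn

class EEGEncoder(nn.Module):
    """Maps a raw EEG window (channels x samples) to a single feature vector."""
    def __init__(self, n_channels=20, n_features=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, stride=3), nn.GELU(),
            nn.Conv1d(64, n_features, kernel_size=5, stride=2), nn.GELU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):                  # x: (batch, channels, samples)
        return self.conv(x).squeeze(-1)    # -> (batch, n_features)

encoder = EEGEncoder()

# Phase 1: unsupervised pretext task on unlabeled EEG, here (as one possible
# example) predicting a masked 16-sample patch from the encoder features.
pretext_head = nn.Linear(128, 20 * 16)
# ... train encoder + pretext_head on large amounts of unlabeled windows ...

# Phase 2: discard the pretext head and fine-tune on the labeled target task.
classifier = nn.Sequential(encoder, nn.Linear(128, 2))   # e.g. a 2-class BCI task
# ... train classifier on the (much smaller) labeled dataset ...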
“…A DL method that is useful in biosignal decoding is transfer learning (Fahimi et al 2019, Kostas et al 2021). With transfer learning, a model is first trained on general data, from which it aims to learn the structure of the data.…”
Section: Deep Learning
confidence: 99%
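A minimal sketch of that transfer-learning step, reusing the hypothetical EEGEncoder from the sketch above; the checkpoint filename and class count are illustrative assumptions, not values from the cited work.

import torch
import torch.nn as nn

# Load encoder weights learned during the general (pretraining) phase.
encoder = EEGEncoder()
encoder.load_state_dict(torch.load("pretrained_eeg_encoder.pt"))

# Freeze the general-purpose features and train only a new task-specific head.
for p in encoder.parameters():
    p.requires_grad = False

model = nn.Sequential(encoder, nn.Linear(128, 4))           # e.g. 4-class motor imagery
optimizer = torch.optim.Adam(model[1].parameters(), lr=1e-3)
# ... standard supervised training loop on the target-task labels ...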
“…Kiyasseh et al (2020) created CLOCS, a family of contrastive learning methods on unlabeled cardiac physiological data for downstream tasks like better quantifying patient similarity for disease detection. Kostas et al (2021) created BENDR, which leverages transformers and contrastive self-supervised learning to better learn representations of electroencephalogram data. Moving to the realm of EHR data, Li et al (2019) built a framework that enhanced predictive performance for common diseases across multiple sites without the need to share data by leveraging Distributed Noise Contrastive Estimation.…”
Section: Related Work
confidence: 99%
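The contrastive objective behind methods like CLOCS and BENDR can be illustrated with a generic InfoNCE-style loss; the sketch below is one common formulation and not the exact loss used in either cited paper.

import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.1):
    # anchors, positives: (batch, dim) embeddings of two views of the same
    # window; every other item in the batch acts as a negative example.
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    logits = a @ p.t() / temperature                 # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)          # match each anchor to its own positive

# Hypothetical usage: z1, z2 = encoder(view_1), encoder(view_2); loss = info_nce(z1, z2)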