2022
DOI: 10.48550/arxiv.2208.00877
Preprint

Self-supervised Group Meiosis Contrastive Learning for EEG-Based Emotion Recognition

Abstract: The progress of EEG-based emotion recognition has received widespread attention from the fields of human-machine interaction and cognitive science in recent years. However, how to recognize emotions with limited labels has become a new research and application bottleneck. To address this issue, this paper proposes a Self-supervised Group Meiosis Contrastive learning framework (SGMC) based on the stimuli-consistent EEG signals in human beings. In the SGMC, a novel genetics-inspired data augmentation method, named…

Cited by 4 publications (6 citation statements) | References: 29 publications
“…SimCLR, BYOL, CPC) already proposed for general-purpose tasks and introduce some domain-knowledge modifications to better suit the investigated medical task. Some works proposed more biologically inspired data augmentation techniques for the generation of the positive/negative pairs [113]. Others focused on the way similarity between pairs is evaluated, for example by modifying the objective learning function.…”
Section: Discussion and Open Challenges (mentioning; confidence: 99%)
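The statement above refers to biologically or domain-inspired augmentations used to generate positive pairs for SimCLR-style pre-training on EEG. As a rough, hypothetical illustration only (not the procedure of [113] or of the SGMC paper; the function names, parameter values, and the (channels, time) segment shape are assumptions), such augmentations might look like this:

```python
# Hypothetical sketch: domain-inspired EEG augmentations for building
# two stochastic "views" (a positive pair) of the same raw segment.
import numpy as np

rng = np.random.default_rng(0)

def add_sensor_noise(x: np.ndarray, sigma: float = 0.05) -> np.ndarray:
    """Additive Gaussian noise, mimicking measurement noise."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def drop_channels(x: np.ndarray, p: float = 0.1) -> np.ndarray:
    """Zero out random electrodes, mimicking bad or disconnected channels."""
    keep = rng.random(x.shape[0]) >= p          # one keep/drop flag per channel
    return x * keep[:, None]

def mask_time(x: np.ndarray, max_len: int = 64) -> np.ndarray:
    """Zero out a random contiguous time window."""
    x = x.copy()
    start = int(rng.integers(0, max(1, x.shape[1] - max_len)))
    x[:, start:start + max_len] = 0.0
    return x

def positive_pair(x: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Apply two randomly chosen augmentations to produce each view."""
    augs = [add_sensor_noise, drop_channels, mask_time]
    def view(s):
        first = augs[int(rng.integers(len(augs)))]
        second = augs[int(rng.integers(len(augs)))]
        return second(first(s))
    return view(x), view(x)

segment = rng.standard_normal((32, 512))        # 32 channels, 512 time samples
v1, v2 = positive_pair(segment)                 # one positive pair
```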
“…Zhang et al. [117] proposed GANSER, a generative self-supervised framework which combines an adversarial augmentation network (AAN) and a multi-factor training network (MTN). Finally, Shen et al. [115] and Kan et al. [113] proposed two novel contrastive learning approaches. The first, Contrastive Learning for Inter-Subject Alignment (CLISA), tries to maximize the similarity in EEG signal representations across subjects who received the same (emotional) stimuli.…”
Section: B. Self-Supervised Learning on EEG (mentioning; confidence: 99%)
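The inter-subject alignment idea mentioned above can be read as a contrastive objective in which embeddings of EEG recorded from different subjects under the same stimulus are pulled together. A minimal sketch of that general idea follows (an InfoNCE-style loss; this is an illustrative assumption, not the authors' exact formulation or code):

```python
# Hypothetical sketch: InfoNCE-style loss where z_a[i] and z_b[i] encode the
# same stimulus clip seen by two different subjects (a positive pair); every
# other row in the batch acts as a negative.
import torch
import torch.nn.functional as F

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.T / tau                  # pairwise cosine similarities
    targets = torch.arange(z_a.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: embeddings of 8 stimuli from two subjects, 128-dimensional each.
loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```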
“…[143] and Kan et al. [141] proposed two novel contrastive learning approaches. The first, Contrastive Learning for Inter-Subject Alignment (CLISA), tries to maximize the similarity in EEG signal representations across subjects who received the same emotional stimuli, hence without resorting to standard data augmentation procedures.…”
Section: Emotion Recognition (mentioning; confidence: 99%)
“…To achieve the main goal of learning universal feature representations that can be reused across multiple tasks in the fine-tuning stage, multiple forms of pretext tasks and corresponding objective functions need to be used. Compared with supervised learning, which trains on labeled data, self-supervised learning can extract the internal features of the data, making them suitable for downstream tasks and thereby improving downstream performance [6]. Through self-supervised learning, large-scale unlabeled data can be utilized more effectively to improve model performance, which cannot be achieved by supervised learning.…”
Section: Self-Supervised Learning (mentioning; confidence: 99%)
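In practice, the pre-train/fine-tune split described in this statement is a two-stage pipeline: a label-free objective shapes the encoder first, and a small labeled set then trains a task head. The following is a schematic sketch only (the encoder architecture, dimensions, and data are placeholders, not taken from the cited work):

```python
# Schematic: self-supervised pre-training followed by supervised fine-tuning
# of an emotion classifier on limited labeled EEG.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 512, 128), nn.ReLU())

# 1) Pre-training: optimize the encoder with a label-free (e.g. contrastive)
#    objective on unlabeled EEG segments.
pretrain_opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
# ... loop over unlabeled batches, compute the pretext loss, step optimizer ...

# 2) Fine-tuning: freeze the encoder and train a small classifier head on the
#    limited labeled data.
for p in encoder.parameters():
    p.requires_grad = False
classifier = nn.Linear(128, 3)                  # e.g. 3 emotion classes
finetune_opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

x = torch.randn(4, 32, 512)                     # a labeled mini-batch
y = torch.tensor([0, 2, 1, 0])
loss = F.cross_entropy(classifier(encoder(x)), y)
loss.backward()
finetune_opt.step()
```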