2023
DOI: 10.3389/fncom.2023.1120566

An N400 identification method based on the combination of Soft-DTW and transformer

Abstract: As a time-domain EEG feature reflecting semantic processing in the human brain, the N400 event-related potential still lacks a mature classification and recognition scheme. To address the low signal-to-noise ratio of N400 data and the difficulty of extracting its features, we propose a Soft-DTW-based single-subject short-distance event-related potential averaging method that exploits the differentiable and efficient Soft-DTW loss function, and perform partial Soft-DTW averaging based on DTW distance…
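The abstract describes the averaging step only at a high level. As a rough, speculative sketch (not the authors' code), partial Soft-DTW averaging over DTW-nearest trials could look like the following, using tslearn's `cdist_dtw` and `softdtw_barycenter`; the function name, neighbor count, and gamma value are illustrative assumptions, not the published settings:

```python
import numpy as np
from tslearn.metrics import cdist_dtw
from tslearn.barycenters import softdtw_barycenter

def partial_softdtw_average(trials, n_neighbors=8, gamma=1.0):
    """Average each single-channel ERP trial with its DTW-nearest
    neighbors via a Soft-DTW barycenter. A sketch of the paper's idea,
    not the authors' implementation; n_neighbors and gamma are
    illustrative, not the published settings."""
    trials = np.asarray(trials)              # shape (n_trials, n_samples)
    dist = cdist_dtw(trials)                 # pairwise DTW distance matrix
    averaged = []
    for i in range(len(trials)):
        # k nearest trials by DTW distance (the trial itself included)
        nearest = np.argsort(dist[i])[:n_neighbors]
        group = trials[nearest][:, :, None]  # (k, n_samples, 1) as tslearn expects
        bary = softdtw_barycenter(group, gamma=gamma)
        averaged.append(bary[:, 0])
    return np.array(averaged)                # denoised trials, input shape preserved
```

Averaging over DTW-nearest neighbors rather than all trials limits the temporal jitter that plain ensemble averaging would smear out, which is presumably why the method restricts itself to "short-distance" groups.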

Cited by 5 publications (3 citation statements) · References 27 publications
“…On the other hand, WDTW (c) and softDTW (e) do not take shape features into account. Moreover, the window constraints of WDTW [41] and the softmin [42] operation in softDTW cause information loss, resulting in slightly lower classification performance compared to the other methods.…”
Section: Methods
Mentioning confidence: 99%
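For context, soft-DTW (Cuturi and Blondel, 2017) replaces the hard minimum in the DTW recursion with a smoothed soft-minimum, which is the source of the blending the statement above describes as information loss:

$$\operatorname{softmin}_{\gamma}(a_1,\dots,a_n) = -\gamma \log \sum_{i=1}^{n} e^{-a_i/\gamma}, \qquad r_{i,j} = d(x_i, y_j) + \operatorname{softmin}_{\gamma}\!\left(r_{i-1,j-1},\, r_{i-1,j},\, r_{i,j-1}\right)$$

As $\gamma \to 0$ the soft-minimum converges to the hard minimum and classic DTW is recovered; for $\gamma > 0$ every alignment path contributes to the cost, so the single best-path information is smoothed over.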
“…The Transformer is a deep learning architecture for sequence processing such as natural language processing, with a multi-head self-attention module that captures long-range dependencies within sequences (Vaswani et al., 2017). The Transformer is used not only in language models but also in computer vision, audio processing, and time series forecasting (Lim et al., 2021; Wen et al., 2022; Ma et al., 2023). Recently, the Transformer architecture has also been applied to modeling temporal point processes.…”
Section: Transformer-based Hawkes Process
Mentioning confidence: 99%
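To make the mechanism referred to above concrete, here is a minimal single-head scaled dot-product self-attention in NumPy, a generic sketch following Vaswani et al. (2017) rather than code from either paper; all names and sizes are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.
    X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq_len, seq_len) pairwise scores
    weights = softmax(scores, axis=-1)       # each position attends over all others
    return weights @ V                       # (seq_len, d_k) context vectors

# toy usage: 5 time steps, model width 8, head width 4
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 4)
```

A multi-head variant runs several such projections in parallel and concatenates the outputs, which is what lets the model attend to multiple kinds of long-range dependency at once.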