2018
DOI: 10.1109/mci.2018.2807039
Unsupervised Learning for Brain-Computer Interfaces Based on Event-Related Potentials: Review and Online Comparison [Research Frontier]

Cited by 21 publications (9 citation statements)
References 61 publications
“…Summary of the RPA method: Algorithm 1 recapitulates the steps of a classification task using RPA for matching the statistical distributions of the source and target datasets. (2) Re-center the matrices in S and T using (11) and (12), and form new datasets S^(rct) and T^(rct) = T_ℓ^(rct) ∪ T_u^(rct). (3) Calculate the ratio of dispersions in S^(rct) and T^(rct) as in (16) and use it to form the new dataset T^(str) = T_ℓ^(str) ∪ T_u^(str), with matrices as described in (15). (4) Estimate matrices M_k and M̃_k for k ∈ {1, …”
Section: E. Classification on the Transformed Datasets
Citation type: mentioning, confidence: 99%
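The quoted steps describe the re-centering and dispersion-stretching stages of the RPA (Riemannian Procrustes Analysis) transfer-learning method on sets of SPD covariance matrices. Below is a minimal numerical sketch of those two stages, not the cited implementation: it assumes trials are already summarised as covariance matrices, uses a log-Euclidean mean as a simplified stand-in for the Riemannian barycentre referenced in the algorithm, and all function names and the random data are illustrative.

```python
import numpy as np

def _powm(c, p):
    """Matrix power of an SPD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(c)
    return vecs @ np.diag(vals ** p) @ vecs.T

def _logm(c):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(c)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def log_euclidean_mean(covs):
    """Barycentre under the log-Euclidean metric (simplified stand-in for the Riemannian mean)."""
    mean_log = np.mean([_logm(c) for c in covs], axis=0)
    vals, vecs = np.linalg.eigh(mean_log)
    return vecs @ np.diag(np.exp(vals)) @ vecs.T

def recenter(covs, ref):
    """Whiten each matrix with the reference mean so the set is re-centred at the identity."""
    inv_sqrt = _powm(ref, -0.5)
    return [inv_sqrt @ c @ inv_sqrt for c in covs]

def dispersion(covs):
    """Mean squared affine-invariant distance to the identity: mean of ||log C||_F^2."""
    return float(np.mean([np.linalg.norm(_logm(c), "fro") ** 2 for c in covs]))

def stretch(covs, target_disp):
    """Raise every matrix to a common power so the set's dispersion matches target_disp."""
    p = np.sqrt(target_disp / dispersion(covs))
    return [_powm(c, p) for c in covs]

# Usage sketch on random SPD matrices standing in for trial covariances.
rng = np.random.default_rng(0)
def random_spd(n=4):
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

source = [random_spd() for _ in range(20)]
target = [random_spd() for _ in range(20)]

source_rct = recenter(source, log_euclidean_mean(source))  # re-centering of the source set
target_rct = recenter(target, log_euclidean_mean(target))  # re-centering of the target set
target_str = stretch(target_rct, dispersion(source_rct))   # stretch target dispersion to match source
```

The stretching exponent exploits the fact that, for matrices re-centred at the identity, the distance to the identity scales linearly with the matrix power, so raising every matrix to a common power rescales the set's dispersion by the square of that power.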
“…There has also been work using Bayesian models to describe the variability of the statistics of the source datasets and to gather information from multiple datasets [14]. Recent approaches that build upon such Bayesian methods are the works in [15] and [16], which use a special form of the P300 experimental paradigm to perform classification with no calibration. There are two main differences between those approaches and our proposal in this work.…”
Section: Introduction
Citation type: mentioning, confidence: 99%
“…BCI speller), it becomes insufficient or even inappropriate when labels are lacking and elicitations come from complex and ambiguous inputs. Classes may become unbalanced, and labels may be absent from the design of the task [47,18]. There should, however, be a minimal number of discovered classes involved in a particular sensory discrimination under a logical framework, once a standardized presentation of these stimuli is provided [8,7].…”
Section: Hypotheses
Citation type: mentioning, confidence: 99%
“…In adaptive classifiers, parameters are incrementally re-estimated and updated over time as new EEG data become available. Unsupervised adaptation methods [Vidaurre et al., 2010, Hübner et al., 2018] are especially useful, as they learn from unlabeled data gathered during actual BCI use. Transfer learning techniques [Jayaram et al., 2016, Wu, 2018, Kindermans et al., 2014] use data from the same or similar tasks, recorded from the current or from different subjects, to improve performance.…”
Section: Introduction
Citation type: mentioning, confidence: 99%
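As one concrete instance of the unsupervised adaptation idea in the quote above, the sketch below keeps a fixed linear discriminant direction but re-estimates its bias from unlabeled feature vectors as they arrive, by tracking the pooled feature mean with an exponential moving average, in the spirit of the pooled-mean adaptation described by Vidaurre et al. (2010). The class name, adaptation rate, and simulated drifting data are hypothetical, not taken from the cited works.

```python
import numpy as np

class AdaptiveLinearClassifier:
    """Fixed linear discriminant whose bias tracks the pooled feature mean of unlabeled data."""

    def __init__(self, weights, pooled_mean, rate=0.05):
        self.w = np.asarray(weights, dtype=float)       # discriminant direction (kept fixed)
        self.mu = np.asarray(pooled_mean, dtype=float)  # running estimate of the pooled mean
        self.rate = rate                                # adaptation rate of the moving average

    def update(self, x):
        """Fold one unlabeled feature vector into the pooled-mean estimate."""
        self.mu = (1.0 - self.rate) * self.mu + self.rate * np.asarray(x, dtype=float)

    def decision(self, x):
        """Signed score w . (x - mu); the adapted bias is -w . mu."""
        return float(self.w @ (np.asarray(x, dtype=float) - self.mu))

# Usage sketch: stream simulated two-class features with a slow drift and adapt online.
rng = np.random.default_rng(1)
clf = AdaptiveLinearClassifier(weights=[1.0, -1.0], pooled_mean=[0.0, 0.0])
for t in range(200):
    drift = np.array([0.005 * t, 0.005 * t])            # simulated non-stationarity
    center = np.array([1.0, -1.0]) if rng.integers(2) else np.array([-1.0, 1.0])
    x = center + drift + rng.normal(scale=0.5, size=2)
    score = clf.decision(x)   # classify with the current, adapted bias
    clf.update(x)             # then adapt on the unlabeled trial
```

Because only the bias is adapted, the scheme needs no labels: the pooled mean of both classes is enough to keep the decision boundary centred as the feature distribution drifts.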