Linear and nonlinear methods for brain-computer interfaces
2003 · DOI: 10.1109/tnsre.2003.814484

Cited by 331 publications (170 citation statements) · References 22 publications

“…One explanation for this decrease is that bad model selection strategies have resulted in overly complex classification models that overfit the EEG data [23]. The current work has clearly shown that an alternative reason for failure should also be considered: non-stationarities in the EEG statistics.…”
Section: Results · Citation type: mentioning · Confidence: 90%

“…In this setup, we use linear discriminant analysis (LDA) to separate data points with high accuracy into classes in the low-dimensional feature space. Note that more elaborate paradigms or other feature extraction techniques may require the use of non-linear classifiers (cf. [21, 23, 25, 31]). Previous work [3,5,8] has reported on the efficacy of our classification scheme.…”
Section: The Berlin BCI · Citation type: mentioning · Confidence: 99%
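
The excerpt above describes the classification step of the Berlin BCI: linear discriminant analysis applied to low-dimensional feature vectors. A minimal sketch of that setup follows, using scikit-learn and synthetic band-power-like features in place of real EEG trials (the feature dimension, class structure, and data values are all assumptions for illustration, not the Berlin BCI data):

```python
# Minimal sketch: LDA on low-dimensional EEG-like features.
# Synthetic stand-in data; shapes and class labels are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Pretend feature extraction already reduced each trial to 4 features
# (e.g. band-power values); two classes of imagined movement.
n_trials = 200
X_class0 = rng.normal(loc=0.0, scale=1.0, size=(n_trials, 4))
X_class1 = rng.normal(loc=1.0, scale=1.0, size=(n_trials, 4))
X = np.vstack([X_class0, X_class1])
y = np.array([0] * n_trials + [1] * n_trials)

lda = LinearDiscriminantAnalysis()
lda.fit(X[::2], y[::2])                     # train on even-indexed trials
print("held-out accuracy:", lda.score(X[1::2], y[1::2]))
```

With well-separated low-dimensional features like these, the linear boundary that LDA fits is typically sufficient, which is the point the citing authors make.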
“…That is, we select the trial that has all the remaining trials forming a matrix with the largest 1-norm of its singular values, similar to the strategy used in compressive sensing (Candès & Wakin, 2008; Lotte, Congedo, Lécuyer, Lamarche, & Arnaldi, 2007; Müller, Anderson, & Birch, 2003; Croux, Filzmoser, & Joossens, 2008). However, these techniques are black box models which are difficult for a typical clinician to understand and analyze.…”
Section: Singular Value Decomposition (SVD) Methods · Citation type: mentioning · Confidence: 99%
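
The selection rule quoted above is concrete enough to sketch: drop each candidate trial in turn, stack the remaining trials into a matrix, and score that matrix by the 1-norm of its singular values (the nuclear norm); keep the trial whose removal gives the largest score. The sketch below assumes a one-row-per-trial layout and synthetic data; the cited paper's exact matrix construction may differ:

```python
# Leave-one-out trial selection by nuclear norm of the remaining trials.
# One-row-per-trial layout and the data are assumptions for illustration.
import numpy as np

def select_trial(trials: np.ndarray) -> int:
    """trials: (n_trials, n_features). Return the index of the trial
    whose removal leaves the remaining matrix with the largest
    nuclear norm (sum of singular values)."""
    scores = []
    for i in range(trials.shape[0]):
        rest = np.delete(trials, i, axis=0)
        s = np.linalg.svd(rest, compute_uv=False)
        scores.append(s.sum())   # nuclear norm = 1-norm of singular values
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
trials = rng.normal(size=(20, 64))          # 20 trials, 64 features each
print("selected trial index:", select_trial(trials))
```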
“…The second advantage is that one may use non-linear similarity measures to construct K, which is equivalent to performing linear classification on data points that have been mapped into a higher-dimensional feature space and which can consequently yield a more powerful classifier, without the requirement that the feature-space mapping be known explicitly (the so-called kernel trick). However, it has generally been observed in BCI classification applications (for example, see Müller et al., 2003) that, given a well-chosen sequence of preprocessing steps (an explicit feature mapping), a further implicit mapping is usually unnecessary: thus a linear classifier, in which K_ij is equal to the dot product between the feature representations of data points i and j, performs about as well as any non-linear classifier one might attempt. This is often the case in situations in which the number of data points is low, and indeed we find it to be the case in the current application.…”
Section: Support Vector Machines · Citation type: mentioning · Confidence: 99%
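
The passage above contrasts an implicit non-linear feature mapping (via a kernel) with an explicit mapping done during preprocessing, and reports that a linear kernel, K_ij = x_i · x_j, then performs about as well as a non-linear one. A small cross-validation comparison along those lines can be sketched with scikit-learn (synthetic features stand in for preprocessed EEG; the data and any resulting accuracies are illustrative assumptions, not the cited result):

```python
# Compare an SVM with a linear kernel (K_ij = x_i . x_j) against an RBF
# kernel on the same features. Synthetic data is an assumption.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 6)),
               rng.normal(0.8, 1.0, (100, 6))])
y = np.array([0] * 100 + [1] * 100)

for kernel in ("linear", "rbf"):
    acc = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(f"{kernel:6s} kernel: mean CV accuracy = {acc:.3f}")
```

When features are already well chosen and the sample count is small, as in the quoted setting, the two scores tend to be close, which is the observation the citing authors attribute to Müller et al. (2003).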