2018
DOI: 10.1109/tbme.2017.2742541

Transfer Learning: A Riemannian Geometry Framework With Applications to Brain–Computer Interfaces

Abstract: Hence, through the proposed affine transformation, we make data from different sessions and subjects comparable, providing a significant improvement in the BCI transfer learning problem.
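
The affine transformation mentioned in the abstract acts on the spatial covariance matrices of EEG trials. A minimal sketch of this kind of re-centering, assuming each session's covariance matrices are mapped by congruence with the inverse square root of a session-specific reference covariance (the exact reference used in the paper, e.g. a resting-state covariance, is not reproduced here; the function and the illustrative reference choice below are assumptions):

```python
import numpy as np
from scipy.linalg import inv, sqrtm

def recenter_covariances(covs, reference):
    """Affinely re-center SPD covariance matrices around the identity:
    C_i -> R^{-1/2} C_i R^{-1/2}, with R a session-specific reference.

    covs      : (n_trials, n_channels, n_channels) trial covariances of one session
    reference : (n_channels, n_channels) reference covariance for that session
                (illustrative choice below: the session mean covariance)
    """
    ref_inv_sqrt = inv(sqrtm(reference)).real   # R^{-1/2}
    return np.stack([ref_inv_sqrt @ c @ ref_inv_sqrt for c in covs])

# Usage sketch: re-center two sessions independently so both sit around identity,
# making their covariance matrices comparable before classification.
rng = np.random.default_rng(0)

def random_spd(n):
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

session_a = np.stack([random_spd(8) for _ in range(20)])
session_b = np.stack([random_spd(8) for _ in range(20)])
aligned_a = recenter_covariances(session_a, session_a.mean(axis=0))
aligned_b = recenter_covariances(session_b, session_b.mean(axis=0))
```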

Cited by 301 publications (252 citation statements: 3 supporting, 249 mentioning, 0 contrasting)

References 32 publications
“…The reader should note that the above estimation approach is applied for each target-conditional distribution individually; hence, if the number of targets is 8 as in the eye movement decoding problem, we need to estimate 8 corresponding linear transformation matrices H, one for each target-conditional distribution. In light of this, from (10) and (11) we observe that the computation of both θ and H involves inverse of the target-conditional covariance matrices. In a limited data scenario, we need to make sure that the covariance matrices are well-conditioned such that they can be adequately inverted.…”
Section: Transfer Functions (mentioning)
confidence: 99%
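
The conditioning issue raised in the quoted passage is commonly addressed by shrinkage estimation of the class-conditional covariances before they are inverted. A minimal sketch, assuming Ledoit–Wolf shrinkage as the regularizer (the quoted work does not state which regularization, if any, it applies; names and dimensions below are illustrative):

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def conditioned_covariance(samples):
    """Estimate a well-conditioned covariance matrix from limited data.

    samples : (n_samples, n_features) observations drawn from one
              target-conditional distribution.
    Returns the shrunk covariance and its inverse (precision) matrix, which
    remains numerically stable even when n_samples is small.
    """
    lw = LedoitWolf().fit(samples)
    return lw.covariance_, lw.precision_   # precision_ is the inverse of the shrunk covariance

# Usage sketch: one covariance/precision pair per target-conditional distribution,
# e.g. 8 targets as in the eye movement decoding problem mentioned above.
rng = np.random.default_rng(0)
per_target_samples = [rng.standard_normal((30, 16)) for _ in range(8)]
per_target_precisions = [conditioned_covariance(x)[1] for x in per_target_samples]
```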
“…See the SM for the complete Table 1(b). We consider only 5 subjects out of the available 9 as in Zanini et al [2018] and Yair et al [2019]. This is because the single-session single-subject classification results on data from each of the remaining 4 subjects were poor, see Ang et al [2012], Barachant et al [2012], Zanini et al [2018].…”
Section: Motor Imagery Task (mentioning)
confidence: 99%
“…Table 2 contains the classification precision obtained by Algorithm 1 and the summary of the mean performance of the algorithms. The N/A results were not reported by Zanini et al [2018], since these subjects were classified as "bad" subjects. We observe that Algorithm 1 provides the best results overall.…”
Section: Event Related Potential P300 Task (mentioning)
confidence: 99%
“…We can also understand EA as a correction of data shift. If we view each EEG covariance matrix as a point on a Riemannian manifold, then individual differences cause shifts of these points, although they may entail more than just a simple displacement [26]. In order to correct this shift, EA moves the covariance matrices of each subject to center them at the identity matrix.…”
Section: Euclidean Alignment (EA) (mentioning)
confidence: 99%
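
A minimal sketch of the re-centering described in this quote, assuming EA's usual formulation: each subject's trials are whitened by the inverse square root of that subject's arithmetic mean covariance, so that the mean covariance of the aligned trials equals the identity matrix (variable names and data shapes below are illustrative):

```python
import numpy as np
from scipy.linalg import inv, sqrtm

def euclidean_alignment(trials):
    """Euclidean Alignment of one subject's EEG trials.

    trials : (n_trials, n_channels, n_samples) raw EEG epochs.
    Each trial X_i is replaced by R^{-1/2} X_i, where R is the arithmetic mean
    of the per-trial covariances X_i X_i^T / n_samples, so the aligned trials
    have a mean covariance equal to the identity matrix.
    """
    n_samples = trials.shape[-1]
    covs = np.einsum('tcs,tds->tcd', trials, trials) / n_samples  # per-trial covariances
    ref = covs.mean(axis=0)                  # subject-specific mean covariance R
    ref_inv_sqrt = inv(sqrtm(ref)).real      # R^{-1/2}
    return np.einsum('cd,tds->tcs', ref_inv_sqrt, trials)

# Usage sketch: align each subject independently before pooling data across subjects.
rng = np.random.default_rng(0)
subject_trials = rng.standard_normal((40, 8, 256))   # 40 trials, 8 channels, 256 samples
aligned = euclidean_alignment(subject_trials)
```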