2019
DOI: 10.1109/tsp.2019.2894801
Parallel Transport on the Cone Manifold of SPD Matrices for Domain Adaptation

Abstract: The problem of domain adaptation has become central in many applications from a broad range of fields. Recently, it was proposed to use Optimal Transport (OT) to solve it. In this paper, we model the difference between the two domains by a diffeomorphism and use the polar factorization theorem to claim that OT is indeed optimal for domain adaptation in a well-defined sense, up to a volume preserving map. We then focus on the manifold of Symmetric and Positive-Definite (SPD) matrices, whose structure provided a…
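The paper's core operation, parallel transport on the SPD cone under the affine-invariant metric, has a known closed form: a matrix C is carried along the geodesic from A to B via C -> E C E^T with E = (B A^{-1})^{1/2}. Below is a minimal NumPy sketch of that map, assuming the source and target geometric means A and B are already given (the paper estimates them from data); the function names are illustrative, not the authors' code.

```python
import numpy as np

def spd_power(M, p):
    """Matrix power of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.power(w, p)) @ V.T

def parallel_transport(C, A, B):
    """Transport C along the geodesic from A to B on the SPD cone
    (affine-invariant metric): C -> E C E^T with E = (B A^{-1})^{1/2}.
    E is computed in the equivalent, numerically friendlier form
    E = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{-1/2}."""
    A_h, A_ih = spd_power(A, 0.5), spd_power(A, -0.5)
    E = A_h @ spd_power(A_ih @ B @ A_ih, 0.5) @ A_ih
    return E @ C @ E.T
```

For domain adaptation in this setting, one would transport every source-domain SPD matrix from the source geometric mean to the target geometric mean, after which a classifier trained on the source data can be applied to the target data.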

Cited by 96 publications (90 citation statements)
References 29 publications

“…Another geometrical approach [9] matches the statistical distributions of the source and target sessions by re-centering their data points to the origin of the SPD space, the identity matrix. During the review process of this paper, we came across the work in [10], which also proposes a Transfer Learning procedure for SPD matrices in the spirit of [9], with the main difference that the data points are re-centered to the midpoint between the geometric means of the source and target datasets. Besides distribution matching based on geometrical transformations, there have mainly been two other kinds of proposals for Transfer Learning in the BCI literature [2]. One is based on the concept of ensemble classifiers [3], [11]-[13], where the information from multiple source datasets is combined into a "global" classifier, which is then used to label the trials from any other target dataset.…”
Section: Introduction
confidence: 99%
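To make the re-centering in [9] concrete: each SPD point is congruence-transformed by the inverse square root of the dataset's geometric (Karcher) mean, so that the mean moves to the identity. A minimal sketch follows, assuming the data are covariance matrices stacked in an array; the fixed-point iteration and function names are illustrative (a library such as pyriemann provides tested equivalents).

```python
import numpy as np

def spd_power(M, p):
    w, V = np.linalg.eigh(M)
    return (V * np.power(w, p)) @ V.T

def spd_log(M):
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def spd_exp(S):
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def karcher_mean(covs, n_iter=50, tol=1e-9):
    """Geometric mean under the affine-invariant metric, by fixed-point iteration."""
    M = np.mean(covs, axis=0)              # arithmetic mean as initialization
    for _ in range(n_iter):
        M_h, M_ih = spd_power(M, 0.5), spd_power(M, -0.5)
        # average of the dataset mapped to the tangent space at the current M
        S = np.mean([spd_log(M_ih @ C @ M_ih) for C in covs], axis=0)
        M = M_h @ spd_exp(S) @ M_h
        if np.linalg.norm(S) < tol:        # gradient norm ~ 0 => converged
            break
    return M

def recenter_to_identity(covs):
    """Re-center a dataset of SPD matrices so its geometric mean becomes the
    identity: C_i -> M^{-1/2} C_i M^{-1/2}."""
    W = spd_power(karcher_mean(covs), -0.5)
    return np.array([W @ C @ W.T for C in covs])
```

The midpoint variant attributed to [10] instead re-centers both datasets to the geodesic midpoint of the two geometric means; with the helpers above, that midpoint is M_src^{1/2} (M_src^{-1/2} M_tgt M_src^{-1/2})^{1/2} M_src^{1/2}.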
“…RPA can be seen as an evolution of the aforementioned procedures [9] and [10], with the re-centering step corresponding to the first of a series of geometrical transformations. Furthermore, the procedures in [9] and [10] are completely unsupervised, since they do not use any information from the labels of the data points, whereas RPA benefits from the labels in the source session (which are all known in advance) as well as from (at least part of) the labels that become sequentially available in the target session, trial after trial.…”
Section: Introduction
confidence: 99%
“…Our approach can then be useful in a domain-adaptation context, by constructing meaningful realizations of the input and output ("source" and "target") domains. It would be interesting to compare the Mahalanobis-like, metric-driven observation fusion with other registration approaches developed in the domain-adaptation literature [42,43]. There is a conceptual similarity to the Dynamic Laplacian [44], in that, to gather covariance information, we have to start at several nearby initial trial points and then perform the trial associated with each one of them.…”
Section: Discussion
confidence: 99%
“…Recently, many Transfer Learning approaches have been introduced into BCIs to reduce cross-subject variability (Zanini et al., 2018; Rodrigues et al., 2019; Yair et al., 2019). An approach named Riemannian Procrustes Analysis (RPA) was proposed by Rodrigues et al. (2019).…”
Section: Online Pre-alignment Strategy
confidence: 99%