2023
DOI: 10.1109/tnsre.2022.3233656
Domain Adaptation via Low Rank and Class Discriminative Representation for Autism Spectrum Disorder Identification: A Multi-Site fMRI Study

Abstract: To construct a more effective model with good generalization performance for inter-site autism spectrum disorder (ASD) diagnosis, domain adaptation based ASD diagnostic models have been proposed to alleviate inter-site heterogeneity. However, most existing methods only reduce the marginal distribution difference without considering class discriminative information, and thus struggle to achieve satisfactory results. In this paper, we propose a low rank and class discriminative representation (LRCDR) based multi-s…

Cited by 11 publications (4 citation statements)
References 50 publications
“…The discriminant function for multi-site source data can be formulated as follows: $$\Psi(P)=\Psi_{same}(P)-\tau\,\Psi_{diff}(P)=\operatorname{tr}\left(P^{T}X_{s}\left(D_{same}-\tau D_{diff}\right)X_{s}^{T}P\right)$$ where $\Psi_{same}(P)$ and $\Psi_{diff}(P)$ are the intra-class and inter-class distance loss terms, and $\tau$ is used to balance these two loss terms. $D_{same}\in\mathbb{R}^{n_{s}\times n_{s}}$ and $D_{diff}\in\mathbb{R}^{n_{s}\times n_{s}}$ are the matrices used to compute intra-class and inter-class distances for the projected multi-site source data [26]…”
Section: Methods
confidence: 99%
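
To make the quoted objective concrete, here is a minimal NumPy sketch that evaluates $\Psi(P)$ in its trace form. The names ($X_s$, $P$, $D_{same}$, $D_{diff}$, $\tau$) mirror the citation's notation; the dimensions, random data, and placeholder distance matrices are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: evaluate Psi(P) = tr(P^T X_s (D_same - tau*D_diff) X_s^T P).
# The random data and dimensions below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n_s, k = 20, 30, 5                    # feature dim, #source samples, subspace dim
X_s = rng.standard_normal((d, n_s))      # multi-site source data (columns = samples)
P = rng.standard_normal((d, k))          # projection matrix being learned
tau = 0.1                                # balances intra- vs. inter-class terms

# Placeholder symmetric distance matrices; the paper builds them from class
# labels (one label-based construction is sketched in a later code block).
A = rng.standard_normal((n_s, n_s)); D_same = (A + A.T) / 2
B = rng.standard_normal((n_s, n_s)); D_diff = (B + B.T) / 2

psi = np.trace(P.T @ X_s @ (D_same - tau * D_diff) @ X_s.T @ P)
print(f"Psi(P) = {psi:.4f}")
```

Minimizing $\Psi(P)$ shrinks intra-class distances while the $\tau$-weighted term keeps inter-class distances large, which is what makes the learned projection class-discriminative.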
“…$D_{same}\in\mathbb{R}^{n_{s}\times n_{s}}$ and $D_{diff}\in\mathbb{R}^{n_{s}\times n_{s}}$ are the matrices used to compute intra-class and inter-class distances for projected multi-site source data [26]. Supposing $x_{si}$ and $x_{sj}$ are any two samples from multiple source domains with labels $y_{si}$ and $y_{sj}$, $\Psi_{same}(P)$ and $\Psi_{diff}(P)$ are expressed as follows:…”
Section: Class-Discriminative Learning
confidence: 99%
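
The quote truncates before the pairwise definitions, but a standard graph-Laplacian construction connects them to the trace form above. The sketch below is a plausible reconstruction under that assumption (the exact scaling in reference [26] may differ): it builds $D_{same}$ and $D_{diff}$ from label agreement and numerically checks the identity $\sum_{ij} W_{ij}\lVert P^{T}x_{si}-P^{T}x_{sj}\rVert^{2}=2\,\operatorname{tr}(P^{T}X_{s}LX_{s}^{T}P)$ with $L=\operatorname{diag}(W\mathbf{1})-W$.

```python
# Hedged sketch: one standard way D_same / D_diff can be derived from labels
# so that pairwise distance sums reduce to the quoted trace form. The
# Laplacian-style construction is a common convention, assumed here.
import numpy as np

rng = np.random.default_rng(1)
d, n_s, k = 10, 12, 3
X_s = rng.standard_normal((d, n_s))
P = rng.standard_normal((d, k))
y = rng.integers(0, 2, size=n_s)              # binary labels (e.g., ASD vs. control)

W_same = (y[:, None] == y[None, :]).astype(float)
W_diff = 1.0 - W_same                         # 1 where labels differ
np.fill_diagonal(W_same, 0.0)                 # exclude self-pairs

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

D_same, D_diff = laplacian(W_same), laplacian(W_diff)

Z = P.T @ X_s                                 # projected samples (columns)
def pair_sum(W):
    return sum(W[i, j] * np.sum((Z[:, i] - Z[:, j]) ** 2)
               for i in range(n_s) for j in range(n_s))

# Identity: sum_ij W_ij ||z_i - z_j||^2 = 2 * tr(P^T X_s L X_s^T P)
assert np.isclose(pair_sum(W_same), 2 * np.trace(P.T @ X_s @ D_same @ X_s.T @ P))
assert np.isclose(pair_sum(W_diff), 2 * np.trace(P.T @ X_s @ D_diff @ X_s.T @ P))
print("pairwise sums match the trace form")
```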
“…To verify the effectiveness of the proposed HMSDA method, we compared it with alternative DA methods, including: (1) LRR [32], which aims to find a low-dimensional representation for multiple domains; (2) the multi-site adaptation framework via low-rank representation decomposition (maLRR) [7], which learns the common component of the specific projection matrices of multiple source domains as the projection matrix of the target domain; (3) transfer subspace learning via low-rank and sparse representation (TSL_LRSR) [33], which projects the data of the two domains into a common subspace to learn a domain-invariant feature representation; (4) the geodesic flow kernel (GFK) [34], which leverages low-dimensional data structures to address the dissimilarity in data distributions between the source and target domains; (5) low rank and class discriminative representation (LRCDR) [35],…”
Section: Comparison With 6 DA Methods
confidence: 99%
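
Several of the listed baselines (LRR, maLRR, TSL_LRSR) share one numerical building block: the singular value thresholding (SVT) proximal step for the nuclear norm, which is what enforces low rank. The following is a generic, self-contained sketch of that step only; it is not the full solver of any cited method.

```python
# Generic SVT step used by many low-rank representation methods:
# prox_{lam*||.||_*}(M) = U * shrink(S, lam) * V^T (soft-threshold the
# singular values). A building-block sketch, not the cited algorithms.
import numpy as np

def svt(M, lam):
    """Proximal operator of the nuclear norm: soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(2)
M = rng.standard_normal((50, 40))
Z = svt(M, lam=2.0)
# Z typically has lower rank than M after shrinkage.
print(np.linalg.matrix_rank(M), np.linalg.matrix_rank(Z))
```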
“…The regularization parameters for LRR, maLRR, TSL_LRSR, LRCDR and MSLRDA were selected from the set $\{10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}\}$, and the parameter of GFK (i.e., the dimensionality of the subspace) was chosen from [5, 10, …, 50] via 5-fold cross-validation. It is worth noting that for maLRR, TSL_LRSR, GFK and LRCDR, we followed the same experimental setup as Liu et al. [35] to perform dimensionality reduction for obtaining the optimal results. We conducted experiments separately on the FBIRN and ABIDE I datasets; each domain was selected as the target domain in turn, and the remaining domains were treated as the source domains.…”
Section: Comparison With 6 DA Methods
confidence: 99%
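
For reference, the quoted evaluation protocol (each site as target in turn, with the regularization strength grid-searched by 5-fold cross-validation on the pooled sources) might look roughly like the sketch below. The site names, data shapes, and the logistic-regression stand-in classifier are all hypothetical; only the parameter grid and the CV scheme come from the quote.

```python
# Hedged sketch of a leave-one-site-out protocol with 5-fold CV grid search.
# Sites, shapes, and the classifier are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(3)
domains = {name: (rng.standard_normal((40, 116)), rng.integers(0, 2, 40))
           for name in ["SITE_A", "SITE_B", "SITE_C"]}   # hypothetical sites
param_grid = {"C": [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]}       # grid from the quote

for target in domains:
    sources = [d for d in domains if d != target]
    X_src = np.vstack([domains[d][0] for d in sources])
    y_src = np.concatenate([domains[d][1] for d in sources])
    X_tgt, y_tgt = domains[target]

    search = GridSearchCV(LogisticRegression(max_iter=1000), param_grid, cv=5)
    search.fit(X_src, y_src)            # 5-fold CV on pooled source data
    acc = search.score(X_tgt, y_tgt)    # evaluate on the held-out target site
    print(f"target={target}: best C={search.best_params_['C']}, acc={acc:.2f}")
```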