2021
DOI: 10.1109/tcyb.2019.2940526
Fused Sparse Network Learning for Longitudinal Analysis of Mild Cognitive Impairment

Abstract: Mild cognitive impairment (MCI) is often at high risk of progression to Alzheimer's disease (AD). Existing works to identify progressive MCI (pMCI) typically require MCI subtype labels, pMCI vs. stable MCI (sMCI), determined by whether or not an MCI patient progresses to AD after a long follow-up. However, prospectively acquiring MCI subtype data is time-consuming and resource-intensive; the resultant small datasets could lead to severe overfitting and difficulty in extracting discriminative information.…

Cited by 60 publications (25 citation statements)
References: 108 publications
“…where and indicate Lagrangian multipliers. According to the KKT condition, when equals to , (29) has the minimum value equal to zero. Thus, we can get:…”
Section: E. Convergence Analysis of Algorithm
confidence: 99%
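The inline symbols in this excerpt were lost during extraction, so the paper's exact equation (29) cannot be reproduced here. As a hedged illustration of the general pattern the excerpt invokes (not the paper's formulation), the KKT conditions for a constrained minimization with multipliers $\lambda$ and $\mu$ (both assumed names) read:

```latex
\begin{aligned}
&\min_{x} \; f(x) \quad \text{s.t.} \quad g(x) \le 0, \; h(x) = 0, \\
&\text{KKT:}\quad
\nabla f(x^{*}) + \lambda \nabla g(x^{*}) + \mu \nabla h(x^{*}) = 0,
\qquad \lambda \ge 0, \qquad \lambda\, g(x^{*}) = 0 .
\end{aligned}
```

When the primal variable attains the value satisfying these conditions, the corresponding Lagrangian residual reaches its minimum of zero, which is the structure of the convergence argument the excerpt sketches.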
“…In addition, in most existing studies, classification and regression are performed only on the baseline data [27,28], while the longitudinal data (i.e., multi-time-point data) are ignored. Owing to the persistent exacerbation of the disease, it is imperative to learn reliable classification and prediction models that accommodate multiple time points [29]. We highlight our contributions: 1) We propose a novel unsupervised learning method from longitudinal multimodal data for feature selection.…”
Section: Introduction
confidence: 99%
“…Our goal is to simulate the information-processing pattern of the human brain to extract structural information from the BFN, which further improves the detection performance of the aMCI system. Consequently, mathematical modelling of the BFN is implemented, which captures the interactions between brain regions [33][34][35]. We perform the rough feature-extraction module to extract the structural features; that is, the information of brain regions with higher correlation is maintained after dimensionality reduction.…”
Section: Concatenate
confidence: 99%
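The excerpt describes building a brain functional network (BFN) from inter-regional correlations and keeping only the most strongly correlated region pairs. A minimal sketch of that idea, using synthetic ROI time series and a simple top-k edge selection (the array sizes and the selection rule are illustrative assumptions, not the cited paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_rois, n_timepoints = 10, 120
signals = rng.standard_normal((n_rois, n_timepoints))  # synthetic ROI time series

# BFN: pairwise Pearson correlation between ROI time courses.
bfn = np.corrcoef(signals)

# Rough feature extraction: keep only the k strongest correlations,
# a crude proxy for dimensionality reduction that retains the
# highly correlated region pairs the excerpt mentions.
iu = np.triu_indices(n_rois, k=1)   # unique ROI pairs (upper triangle)
edges = bfn[iu]
k = 10
top_k = np.sort(np.abs(edges))[::-1][:k]  # k strongest edge weights
```

Real pipelines would additionally parcellate fMRI volumes into ROIs and often sparsify the network with a statistical threshold rather than a fixed k.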
“…Most applications are in the regime of supervised learning. Typically, a neural network takes fMRI-based input data and is trained to generate an output that optimally matches the ground truth for a task, such as individual identification (Chen and Hu, 2018; Wang et al., 2019), prediction of gender, age, or intelligence (Fan et al., 2020; Gadgil et al., 2020; Plis et al., 2014), or disease classification (Seo et al., 2019; Suk et al., 2016; Wang et al., 2020; Yang et al., 2019; Zou et al., 2017). The labels required for supervised learning are often orders of magnitude smaller in size than the fMRI data itself, which has a high dimension in both space and time.…”
Section: Introduction
confidence: 99%
“…The labels required for supervised learning are often orders of magnitude smaller in size than the fMRI data itself, which has a high dimension in both space and time. As a result, the prior studies often limit the model capacity by using a shallow network and/or limit the input data to activity at the region of interest (ROI) level (Chen and Hu, 2018; Dvornek et al., 2018; Koppe et al., 2019; Matsubara et al., 2019; Suk et al., 2016; Wang et al., 2019; Wang et al., 2020) or reduce it to functional connectivity (D'Souza et al., 2019; Fan et al., 2020; Kawahara et al., 2017; Kim and Lee, 2016; Riaz et al., 2020; Seo et al., 2019; Venkatesh et al., 2019; Yang et al., 2019; Zhao et al., 2018). It is also uncertain to what extent representations learned for a specific task would be generalizable to other tasks.…”
Section: Introduction
confidence: 99%
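The reduction to functional connectivity mentioned in this excerpt is commonly implemented by correlating ROI time courses and flattening the unique entries into a feature vector for a shallow model. A minimal sketch under assumed dimensions (90 ROIs, 200 time points — illustrative, not from any cited study):

```python
import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_rois = 200, 90
roi_ts = rng.standard_normal((n_timepoints, n_rois))  # ROI-level fMRI time series

# Functional connectivity: correlation matrix across ROI time courses.
fc = np.corrcoef(roi_ts.T)  # shape (n_rois, n_rois), symmetric

# Vectorize the upper triangle (excluding the self-correlation diagonal)
# to obtain the fixed-length feature vector typically fed to a classifier.
iu = np.triu_indices(n_rois, k=1)
features = fc[iu]  # 90 * 89 / 2 = 4005 connectivity features
```

This flattening discards the temporal dimension entirely, which is precisely the information loss the surrounding discussion contrasts with models that ingest the full spatiotemporal signal.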