2011
DOI: 10.1016/j.neuroimage.2010.09.074

Semi-supervised cluster analysis of imaging data

Abstract: In this paper, we present a semi-supervised clustering-based framework for discovering coherent subpopulations in heterogeneous image sets. Our approach involves limited supervision in the form of labeled instances from two distributions that reflect a rough guess about subspace of features that are relevant for cluster analysis. By assuming that images are defined in a common space via registration to a common template, we propose a segmentation-based method for detecting locations that signify local regional…
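The abstract describes semi-supervised clustering guided by a small set of labeled instances. A minimal, generic sketch of that idea (seeded K-means initialization from labeled examples) is shown below; it is not the paper's actual method, and all names and parameters are illustrative.

```python
# Minimal sketch of semi-supervised ("seeded") clustering, assuming the generic
# setup the abstract alludes to: a handful of labeled instances anchor the
# clusters and the remaining unlabeled images are grouped around them.
# NOT the paper's algorithm; seeded_kmeans and all parameters are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def seeded_kmeans(X, seed_indices, seed_labels, k, random_state=0):
    """Initialize centroids from labeled seed instances, then run standard K-means."""
    init = np.array([X[seed_indices[seed_labels == c]].mean(axis=0) for c in range(k)])
    km = KMeans(n_clusters=k, init=init, n_init=1, random_state=random_state).fit(X)
    return km.labels_

# Toy usage: two labeled examples per putative subgroup, the rest unlabeled.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(3, 1, (50, 5))])
labels = seeded_kmeans(X, seed_indices=np.array([0, 1, 50, 51]),
                       seed_labels=np.array([0, 0, 1, 1]), k=2)
```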

Cited by 48 publications (45 citation statements)
References 23 publications
“…Duchesne et al. used the morphological factor method based on MRI data and presented an accuracy of 72.3% on a dataset of 20 MCI-C and 29 MCI-NC subjects (Duchesne and Mouiha 2011). Unlike approaches that reduce feature dimensionality to address the small-sample-size problem, several groups have applied semi-supervised learning (SSL) methods that increase the number of training samples with unlabeled samples, which are often much easier to obtain (Cheng et al. 2013a; Filipovych et al. 2011a, b; Zhang and Shen 2011). …”
Section: Introduction (mentioning)
confidence: 99%
“…Semi-supervised clustering, which learns from a combination of side information and unlabeled data, is one such appealing class of clustering approaches. Specifically, semi-supervised clustering, the use of class labels, pairwise constraints, or prior membership degrees on some instances to aid unsupervised clustering, has attracted great interest both in theory and in practice because it requires less human effort and gives higher accuracy [1][2][3][4][5][6][7][8][9][10][11][12][19][20][21].…”
Section: Introduction (mentioning)
confidence: 99%
“…There has been considerable research on SHCM over the past few years. This family of algorithms includes constrained K-means [1], PCK-means [2], MPCK-means [3,4], enhancing semi-supervised clustering [5], constraint-based clustering [6], kernel semi-supervised clustering [7,8], metric-based clustering [9], and other extensions [10][11][12]. Constrained K-means adjusts the cluster memberships to be consistent with pairwise constraints [1].…”
Section: Introduction (mentioning)
confidence: 99%
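The constrained K-means idea summarized in this excerpt can be illustrated with a short sketch. This is a generic COP-KMeans-style implementation, not code from any of the cited papers; cop_kmeans, must_link, and cannot_link are illustrative names, and each point is assigned to the nearest centroid whose assignment does not violate a pairwise constraint.

```python
# Minimal COP-KMeans-style sketch of constrained K-means: points are assigned to
# the nearest centroid whose assignment does not violate must-link / cannot-link
# pairwise constraints. Generic illustration only; not taken from the cited work.
import numpy as np

def violates(i, cluster, labels, must_link, cannot_link):
    """Would assigning point i to `cluster` break any pairwise constraint?"""
    for a, b in must_link:
        j = b if a == i else a if b == i else None
        if j is not None and labels[j] != -1 and labels[j] != cluster:
            return True
    for a, b in cannot_link:
        j = b if a == i else a if b == i else None
        if j is not None and labels[j] == cluster:
            return True
    return False

def cop_kmeans(X, k, must_link=(), cannot_link=(), n_iter=100, random_state=0):
    rng = np.random.default_rng(random_state)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.full(len(X), -1)
    for _ in range(n_iter):
        labels[:] = -1
        for i in rng.permutation(len(X)):  # assign points in random order
            order = np.argsort(np.linalg.norm(centroids - X[i], axis=1))
            # fall back to the closest centroid if every choice violates a constraint
            labels[i] = next((c for c in order
                              if not violates(i, c, labels, must_link, cannot_link)),
                             order[0])
        new_centroids = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                                  else centroids[c] for c in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels

# Toy usage: two blobs, one must-link pair and one cannot-link pair.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
labels = cop_kmeans(X, k=2, must_link=[(0, 1)], cannot_link=[(0, 39)])
```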
“…The same problem also exists in the classification between AD and NC, where the number of AD and NC subjects is also limited. Recently, to enhance the classification between AD and NC, several studies have used semi-supervised learning (SSL) methods [8] in AD diagnosis, where MCI subjects are treated as unlabeled data to aid AD classification [9][10][11]. Semi-supervised learning methods can efficiently utilize unlabeled samples to improve classification and regression performance, but they require that the unlabeled and labeled samples come from the same data distribution [8], which is usually not satisfied in practice.…”
Section: Introduction (mentioning)
confidence: 99%
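As a rough illustration of the SSL setup described in this excerpt, the sketch below uses scikit-learn's LabelPropagation on synthetic data: a small labeled AD-vs-NC set is augmented with unlabeled samples standing in for MCI subjects. The data, feature dimensions, and hyperparameters are invented for illustration and do not reproduce the cited studies' methods.

```python
# Sketch of the SSL idea above: a labeled AD-vs-NC training set is augmented with
# unlabeled samples (standing in for MCI subjects, marked with label -1).
# Uses scikit-learn's LabelPropagation purely for illustration on synthetic data;
# the cited studies use their own SSL formulations and real imaging features.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

rng = np.random.default_rng(0)
X_labeled = np.vstack([rng.normal(0.0, 1.0, (30, 10)),   # NC-like features (label 0)
                       rng.normal(1.5, 1.0, (30, 10))])  # AD-like features (label 1)
y_labeled = np.array([0] * 30 + [1] * 30)
X_unlabeled = rng.normal(0.75, 1.0, (100, 10))           # "MCI" samples, no labels

X = np.vstack([X_labeled, X_unlabeled])
y = np.concatenate([y_labeled, np.full(100, -1)])        # -1 marks unlabeled samples

model = LabelPropagation(kernel="rbf", gamma=0.5).fit(X, y)
print(model.predict(X_unlabeled[:5]))                    # inferred labels for unlabeled data
```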