2013
DOI: 10.3923/jse.2014.14.22

Adaptive Semi-Supervised Clustering Algorithm with Label Propagation

Cited by 7 publications (13 citation statements)
References 5 publications
“…The selection of the optimal clustering algorithm was motivated by the highest ratio of between-cluster to total variance and the best stability measured by mean classification error in 20-fold cross-validation (CV) (Figure 6 - figure supplement 1AB, Figure 7 - figure supplement 1AB) (37). The optimal cluster number was determined by the bend of the within-cluster sum-of-squares curve (function fviz_nbclust(), package factoextra) and by the stability in 20-fold CV (Figure 6 - figure supplement 1CD, Figure 7 - figure supplement 1DF) (37,38) … (10,39). Cluster assignment visualization in a 4-dimensional principal component analysis score plot was done with the PCAproj() tool (package pcaPP) (40).…”
Section: Cluster Analysis (mentioning)
confidence: 99%
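
A minimal R sketch of the cluster-number and visualization steps described in this excerpt. The data frame clust_data, the choice of k-means, and k = 3 are illustrative assumptions, not details taken from the cited study:

# Sketch only: cluster-number choice by the within-cluster sum-of-squares bend
# and a PCA score plot of the resulting assignment.
library(factoextra)   # fviz_nbclust()
library(pcaPP)        # PCAproj()

set.seed(1234)

# elbow ("wss") curve over candidate cluster numbers
fviz_nbclust(clust_data, FUNcluster = kmeans, method = "wss", k.max = 10)

# fit the chosen solution (k = 3 assumed here for illustration)
km_fit <- kmeans(clust_data, centers = 3, nstart = 50)

# robust projection-pursuit PCA; plot the first two score dimensions
# coloured by cluster assignment
pca_fit <- PCAproj(as.matrix(clust_data), k = 4, scale = sd)
plot(pca_fit$scores[, 1:2], col = km_fit$cluster,
     xlab = "PC1", ylab = "PC2",
     main = "Cluster assignment in the PCA score space")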
“…Participant clustering and machine learning classifiers trained in the CovILD cohort were implemented in an open-source online pulmonary assessment R shiny app (https://im2ibk.shinyapps.io/CovILD/, code: https://github.com/PiotrTymoszuk/COVILD-recovery-assessmentapp). Prediction of the cluster assignment based on the user-provided patient data is done by the kNN label propagation algorithm (10,39).…”
Section: Pulmonary Recovery Assessment App (mentioning)
confidence: 99%
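
The app's prediction step can be illustrated with a plain k-nearest-neighbour vote that propagates training-cohort cluster labels to a new observation. This is a hedged sketch of one possible implementation, not the authors' code; train_data, train_clusters, new_patient and k = 5 are assumed placeholders:

library(class)   # knn()

# Assign a new observation to an existing cluster by a k-nearest-neighbour
# vote in the scaled feature space of the training cohort.
predict_cluster <- function(new_obs, train_data, train_clusters, k = 5) {
  scaled_train <- scale(train_data)
  scaled_new <- scale(new_obs,
                      center = attr(scaled_train, "scaled:center"),
                      scale  = attr(scaled_train, "scaled:scale"))
  knn(train = scaled_train, test = scaled_new,
      cl = factor(train_clusters), k = k)
}

# usage (placeholders): predict_cluster(new_patient, train_data, train_clusters)

A single kNN vote is the simplest form of label propagation; the algorithm cited in the excerpt may iterate or weight neighbours differently.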
“…Clusters of the training AT cohort individuals were defined with the self-organizing map procedure (SOM, 13 × 13 unit hexagonal grid, Manhattan distance, package Kohonen) and subsequent hierarchical clustering (Ward D2 algorithm, Manhattan distance) (20, 21, 31). Assignment of the test IT cohort participants to the clusters was done with the k-nearest neighbor label propagation algorithm (12, 32). Details of statistical analysis are provided in Supplementary Methods.…”
Section: Methods (mentioning)
confidence: 99%
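
A sketch of the two-step workflow described above, using the stated settings (13 × 13 hexagonal grid, Manhattan distance, Ward D2 linkage). The scaled numeric matrix train_mtx and the cut into 3 clusters are illustrative assumptions:

library(kohonen)   # som(), somgrid(), getCodes()

set.seed(1234)

# 13 x 13 hexagonal SOM trained with the Manhattan distance
som_fit <- som(train_mtx,
               grid = somgrid(xdim = 13, ydim = 13, topo = "hexagonal"),
               dist.fcts = "manhattan")

# hierarchical clustering of the SOM codebook vectors (Ward D2, Manhattan)
unit_dist <- dist(getCodes(som_fit), method = "manhattan")
unit_clusters <- cutree(hclust(unit_dist, method = "ward.D2"), k = 3)

# each participant inherits the cluster of its winning SOM unit
train_clusters <- unit_clusters[som_fit$unit.classif]

Test-cohort participants could then be assigned to these clusters with a kNN vote along the lines of the earlier sketch.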