2022 · DOI: 10.1007/s12021-022-09593-4
Auto-encoded Latent Representations of White Matter Streamlines for Quantitative Distance Analysis

Abstract: Parcellation of whole-brain tractograms is a critical step in studying brain white matter structures and connectivity patterns. Existing methods based on supervised classification of streamlines into predefined bundle types are not designed to explore sub-bundle structures, and methods with manually designed features make streamline-wise similarities expensive to compute. To resolve these issues, we propose a novel atlas-free method that learns a latent space using a deep recurrent auto-encoder trained …
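The paper's actual architecture is not reproduced on this page, but the abstract's core idea can be sketched. Below is a minimal, hypothetical illustration of a recurrent streamline auto-encoder, assuming streamlines are resampled to a fixed number of 3-D points; the class name `StreamlineAutoencoder` and all layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a recurrent auto-encoder
# that maps each streamline (a sequence of 3-D points) to a fixed-size
# latent code. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class StreamlineAutoencoder(nn.Module):
    def __init__(self, latent_dim=32, hidden_dim=128):
        super().__init__()
        # Encoder: a GRU reads the point sequence; its final hidden
        # state is projected to the latent code.
        self.encoder = nn.GRU(input_size=3, hidden_size=hidden_dim, batch_first=True)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)
        # Decoder: the latent code initializes a GRU that reconstructs
        # the point sequence.
        self.from_latent = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.GRU(input_size=3, hidden_size=hidden_dim, batch_first=True)
        self.to_points = nn.Linear(hidden_dim, 3)

    def encode(self, x):                       # x: (batch, n_points, 3)
        _, h = self.encoder(x)                 # h: (1, batch, hidden_dim)
        return self.to_latent(h.squeeze(0))    # (batch, latent_dim)

    def decode(self, z, n_points):
        h0 = self.from_latent(z).unsqueeze(0)  # (1, batch, hidden_dim)
        # Zero inputs: the hidden state alone carries the latent code.
        inp = torch.zeros(z.size(0), n_points, 3, device=z.device)
        out, _ = self.decoder(inp, h0.contiguous())
        return self.to_points(out)             # (batch, n_points, 3)

    def forward(self, x):
        z = self.encode(x)
        return self.decode(z, x.size(1)), z

# Training minimizes point-wise reconstruction error.
model = StreamlineAutoencoder()
x = torch.randn(8, 100, 3)                     # 8 streamlines, 100 points each
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)
```

Once trained, distances between latent codes can stand in for streamline-wise similarity, which is the "quantitative distance analysis" the title refers to.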

Cited by 4 publications (1 citation statement) · References 52 publications (56 reference statements)
“…[9] Zhong et al., for example, encoded streamlines with a recurrent autoencoder and used the embeddings for bundle parcellation.[10] A similar approach, using a convolutional autoencoder, was developed for tractogram filtering.[4] While these studies show that the learned embeddings can retain bundle information such as shape and position, the latent space of standard autoencoders is not continuous and the model is often prone to overfitting,[11] making it difficult to evaluate embeddings on unseen data and to use them for population analyses involving large amounts of data.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
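To make the distance-analysis step concrete, here is a hedged sketch of how such embeddings could be compared and clustered. It reuses the illustrative `StreamlineAutoencoder` defined above; `torch.cdist` and scikit-learn's `KMeans` are generic choices for illustration, not methods attributed to the cited works.

```python
# Sketch: pairwise distances and clustering in the learned latent space.
import torch
from sklearn.cluster import KMeans

model = StreamlineAutoencoder()        # illustrative model from the sketch above
x = torch.randn(200, 100, 3)           # 200 dummy streamlines, 100 points each

with torch.no_grad():
    z = model.encode(x)                # (200, latent_dim) streamline embeddings

# Euclidean distances between latent codes replace expensive
# point-wise streamline similarity measures.
dists = torch.cdist(z, z)              # (200, 200) distance matrix

# Clustering the codes yields candidate (sub-)bundle parcels.
labels = KMeans(n_clusters=4, n_init=10).fit_predict(z.numpy())
```

The overfitting and latent-space continuity concerns quoted above are exactly what make such distances hard to trust on unseen data with a standard (non-variational) autoencoder.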