2022 IEEE 32nd International Workshop on Machine Learning for Signal Processing (MLSP)
DOI: 10.1109/mlsp55214.2022.9943383
Learning Shared Neural Manifolds from Multi-Subject FMRI Data

Cited by 7 publications (11 citation statements). References 22 publications.
“…This allows PHATE manifolds to reflect local and global manifold structure and perform effective denoising and makes PHATE well suited for the high dimensionality and intrinsic noise of fMRI activity. Despite the low reliability 36 and aforementioned challenges of developmental task-based neuroimaging data, we suspected that individual differences in brain function would be highlighted in low-dimensional PHATE manifold and would, in turn, better relate to cognition and mental health relative to voxel data 22,24 .…”
Section: Methods (mentioning)
confidence: 99%
“…This resulted in a participants-by-voxels matrix for each region and contrast, which was then embedded using PHATE into 20 dimensions to get a participants-by-20 PHATE dimensions matrix. Dimensionality was selected based on prior literature 21,23 and was kept consistent across regions, contrasts, and embedding methods.…”
Section: Methods (mentioning)
confidence: 99%
See 3 more Smart Citations