2012
DOI: 10.1109/mpul.2011.2181023

Magnetic Resonance Connectome Automated Pipeline: An Overview

Abstract: This article presents a novel, tightly integrated pipeline for estimating a connectome. The pipeline utilizes magnetic resonance (MR) imaging (MRI) data to produce a high-level estimate of the structural connectivity in the human brain. The MR connectome automated pipeline (MRCAP) is efficient, and its modular construction allows researchers to modify algorithms to meet their specific requirements. The pipeline has been validated, and more than 200 connectomes have been processed and analyzed to date.

Cited by 32 publications (26 citation statements)
References 26 publications
“…The current iteration of our software in the LONI Pipeline results in significant improvements to both scalability and processing time relative to the MRCAP baseline [5], which produces a small graph in approximately 10 hours on our small cluster (248 concurrent nodes, 1 TB total RAM). On average, the MIGRAINE baseline takes approximately 3 hours/subject to compute small graphs (i.e., the output from MRCAP), an additional 5 hours/subject to produce big graphs, and 3.5 hours/subject for graph invariants, for a total of 11.5 hours/subject.…”
Section: A. Scalability (mentioning)
confidence: 98%
“…Graphs are an increasingly popular data modality in scientific research and statistical inference, with diverse applications in connectomics [6], social network analysis [7], and pattern recognition [21], to name a few. Many joint graph inference methodologies (see, for example, [45,18,6,37]), joint graph embedding algorithms (see, for example, [19,34,41,39]) and graph-valued time-series methodologies (see, for example, [23,33,46,50]) operate under the implicit assumption that an explicit vertex correspondence is a priori known across the vertex sets of the graphs. While this assumption is natural in a host of real data settings, in many applications these correspondences may be unobserved and/or errorfully observed [48].…”
Section: Introduction (mentioning)
confidence: 99%
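The assumption discussed in the quote above, that an explicit vertex correspondence is known across the graphs, can be made concrete with a toy sketch. The 4-vertex graph and the permutation below are invented for illustration and do not come from the cited work; they show only that the same connectivity, observed under an unknown vertex relabeling, no longer matches entrywise until the correspondence is recovered:

```python
import numpy as np

# Illustrative 4-region adjacency matrix (undirected, no self-loops).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

perm = np.array([2, 0, 3, 1])       # a relabeling unknown to the analyst
P = np.eye(4, dtype=int)[perm]      # corresponding permutation matrix
B = P @ A @ P.T                     # same graph, shuffled vertex labels

# Without the correspondence, entrywise comparison fails...
print(np.array_equal(A, B))             # False

# ...but once P is known, the two observations agree exactly.
print(np.array_equal(A, P.T @ B @ P))   # True
```

This is the gap that graph-matching methods address: when `P` is unobserved (or errorfully observed), it must be estimated before joint inference across the graphs can proceed.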
“…Connectomics offers a striking example of this continuum. Indeed, while for some simple organisms (e.g., the C. elegans roundworm [52]) explicit neuron labels are known across specimen, and in human DTMRI connectomes, the vertices are often regions of the brain registered to a common template (see [18]), explicit cross-subject neuron labels are often unknown for more complex organisms.…”
Section: Introduction (mentioning)
confidence: 99%
“…In connectomics, brain imaging data for each subject can be processed to output a graph, where each vertex represents a well-defined anatomical region present in each subject. For structural brain imaging, such as diffusion tensor MRI, an edge may represent the presence of anatomical connections between the two regions as estimated using tractography algorithms [14]. For functional brain imaging, such as fMRI, an edge between two regions may represent the presence of correlated brain activity between the two regions.…”
Section: Statistical Connectome Models (mentioning)
confidence: 99%
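The functional-connectome construction described in the quote above (vertices are anatomical regions, an edge marks correlated activity between two regions) can be sketched in a few lines. This is a minimal illustration with simulated signals; the region count, correlation threshold, and the shared-driver setup are assumptions for the example, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 5, 200

# Simulated per-region time series; regions 0 and 1 share a common
# driver, so their activity is strongly correlated by construction.
ts = rng.standard_normal((n_regions, n_timepoints))
driver = rng.standard_normal(n_timepoints)
ts[0] += 2.0 * driver
ts[1] += 2.0 * driver

corr = np.corrcoef(ts)                   # region-by-region correlation matrix
adj = (np.abs(corr) > 0.5).astype(int)   # edge = |correlation| above threshold
np.fill_diagonal(adj, 0)                 # no self-loops

print(adj[0, 1])   # 1: the correlated regions are connected
```

In practice the threshold (here 0.5) is a study-specific choice, and many analyses retain the weighted correlation matrix itself rather than binarizing it.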