2012
DOI: 10.1080/17445760.2012.662681

Hierarchical-based parallel technique for HMM 3D MRI brain segmentation algorithm

Abstract: This paper proposes a hidden Markov model (HMM) algorithm for 3D MRI brain segmentation using a hierarchical/multi-level parallel implementation. The new technique is implemented using the standard message passing interface (MPI). Two platforms are used to test the proposed technique, namely a PC-cluster system and an IBM Blue Gene (BG)/L system. On the PC-cluster system, the hierarchical-based parallel HMM algorithm achieves a twofold speedup on a three-node cluster and a threefold speedup on a six-node cluster. Communicatio…
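
The abstract describes the approach only at a high level, so the following is a minimal MPI sketch (in C) of what a two-level, hierarchical distribution of the HMM training work could look like. The slice-wise partitioning of the 3D volume, the constants NUM_SLICES and GROUP_SIZE, and the helper hmm_train_slices() are illustrative assumptions, not details taken from the paper.

/* Minimal sketch of a two-level (hierarchical) MPI work distribution,
 * loosely modelled on the abstract's description: the volume's slices
 * are split across node groups, then across the ranks inside each group.
 * NUM_SLICES, GROUP_SIZE and hmm_train_slices() are placeholders. */
#include <mpi.h>
#include <stdio.h>

#define NUM_SLICES 120   /* hypothetical slice count of the 3D MRI volume */
#define GROUP_SIZE 3     /* hypothetical number of ranks per node group */

/* Placeholder for the per-rank HMM training work on a block of slices. */
static void hmm_train_slices(int first, int last) {
    printf("training HMM parameters on slices [%d, %d)\n", first, last);
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Level 1: split the world communicator into node groups. */
    int group_id = world_rank / GROUP_SIZE;
    MPI_Comm group_comm;
    MPI_Comm_split(MPI_COMM_WORLD, group_id, world_rank, &group_comm);

    int group_rank, group_size;
    MPI_Comm_rank(group_comm, &group_rank);
    MPI_Comm_size(group_comm, &group_size);
    int num_groups = (world_size + GROUP_SIZE - 1) / GROUP_SIZE;

    /* Level 2: each group owns a contiguous slice block; ranks inside the
     * group split that block evenly (remainders ignored for brevity). */
    int slices_per_group = NUM_SLICES / num_groups;
    int slices_per_rank  = slices_per_group / group_size;
    int first = group_id * slices_per_group + group_rank * slices_per_rank;
    int last  = first + slices_per_rank;

    hmm_train_slices(first, last);

    /* Hierarchical reduction of partial statistics: first inside each
     * group, then across the group roots (here just a slice count). */
    int local = last - first, group_total = 0, grand_total = 0;
    MPI_Reduce(&local, &group_total, 1, MPI_INT, MPI_SUM, 0, group_comm);

    MPI_Comm roots_comm;
    MPI_Comm_split(MPI_COMM_WORLD, group_rank == 0 ? 0 : MPI_UNDEFINED,
                   world_rank, &roots_comm);
    if (group_rank == 0) {
        MPI_Reduce(&group_total, &grand_total, 1, MPI_INT, MPI_SUM, 0, roots_comm);
        if (world_rank == 0)
            printf("slices processed in this iteration: %d\n", grand_total);
        MPI_Comm_free(&roots_comm);
    }

    MPI_Comm_free(&group_comm);
    MPI_Finalize();
    return 0;
}

Run with, for example, mpirun -np 6 after compiling with mpicc: the six ranks form two groups of three, loosely mirroring the three-node and six-node cluster configurations mentioned in the abstract.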

Cited by 1 publication (2 citation statements) · References 20 publications

“…This approach targets high-performance processing of the HMM application developed in [12] per iteration in a shorter wall clock period. The approach presented in this subsection is adopted from the technique presented in [15] for a PC cluster for the same HMM application. Our primary target is to analyze the HMM training algorithm in terms of data dependency, modular components, and dynamic call structure among the components involved in the computations of the HMM training phase to select the best work distribution technique for the application.…”
Section: High-performance Hierarchical-based Parallelization Approach (mentioning)
confidence: 99%
“…According to the HMM training algorithm and performance analysis in [15], we realized that hierarchical and hybrid approaches using both single program multiple data (SPMD) and function decomposition to parallelize the training task are the best to fit the application to be parallelized. We propose a multilevel hierarchical approach to achieve a load-balanced task distribution.…”
Section: High-performance Hierarchical-based Parallelization Approach (mentioning)
confidence: 99%
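
The statement above combines function decomposition with SPMD execution inside a multilevel hierarchy. A minimal MPI sketch of that hybrid idea follows, assuming (purely for illustration) that ranks are split between the forward and backward passes of HMM training and that each functional group then divides the observation sequences SPMD-style; the task assignment, sequence count, and helper names are assumptions rather than details from the cited work.

/* Sketch of a hybrid SPMD + function-decomposition layout: ranks are
 * first split by function (forward vs. backward pass, an illustrative
 * choice), then each functional group runs the same code SPMD-style on
 * its own block of observation sequences. */
#include <mpi.h>
#include <stdio.h>

enum { TASK_FORWARD = 0, TASK_BACKWARD = 1 };

static void run_forward_pass(int first_seq, int last_seq) {
    printf("forward pass on sequences [%d, %d)\n", first_seq, last_seq);
}
static void run_backward_pass(int first_seq, int last_seq) {
    printf("backward pass on sequences [%d, %d)\n", first_seq, last_seq);
}

int main(int argc, char **argv) {
    const int num_sequences = 64;   /* hypothetical number of observation sequences */

    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Function decomposition: even ranks compute forward variables,
     * odd ranks compute backward variables. */
    int task = (rank % 2 == 0) ? TASK_FORWARD : TASK_BACKWARD;
    MPI_Comm task_comm;
    MPI_Comm_split(MPI_COMM_WORLD, task, rank, &task_comm);

    int task_rank, task_size;
    MPI_Comm_rank(task_comm, &task_rank);
    MPI_Comm_size(task_comm, &task_size);

    /* SPMD inside each functional group: every rank executes the same
     * code on its own contiguous block of sequences. */
    int per_rank = num_sequences / task_size;
    int first = task_rank * per_rank;
    int last  = (task_rank == task_size - 1) ? num_sequences : first + per_rank;

    if (task == TASK_FORWARD)
        run_forward_pass(first, last);
    else
        run_backward_pass(first, last);

    /* In a full implementation, forward and backward statistics would be
     * combined here (e.g., via a reduction over MPI_COMM_WORLD) before
     * the parameter re-estimation step of each training iteration. */
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Comm_free(&task_comm);
    MPI_Finalize();
    return 0;
}

The MPI_Comm_split call is what makes the layout hierarchical: the same pattern can be nested again inside each task_comm to add further levels of the hierarchy.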