2009 International Conference on Artificial Intelligence and Computational Intelligence 2009
DOI: 10.1109/aici.2009.152
Iterative Quadtree Decomposition Segmentation of Liver MR Image

Cited by 4 publications (3 citation statements)
References 5 publications
“…Therefore, liver segmentation studies are mostly in CT images in the present literature [3][4][5][6][7][8]. There exist only a few studies in the literature for the liver image segmentation from MR data sets, which have been proposed by using snakes [9], fast marching method [10], feed forward neural network [11], fuzzy c-means based segmentation [12,13], graph-cut approach [14], synchronized oscillator network [15], active shape model [16,17], watershed [18], iterative quadtree decomposition method [19], Gaussian model and markov random field [20], modified region growing [21], and free form registration on manually segmented CT images [22]. Some of these methods are time consuming and have complex calculations such as active contour based approach [9,10,16,17,22] or the used MR image modality characteristics are not clearly identified [11,15,18].…”
Section: Introduction
confidence: 99%
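The iterative quadtree decomposition method cited above ([19]) belongs to a family of split-based segmentation techniques: an image block is recursively split into four quadrants until each block satisfies a homogeneity criterion. The sketch below illustrates the general technique only, not the exact algorithm of the cited paper; the variance threshold, minimum block size, and the square power-of-two image assumption are illustrative choices.

```python
# Minimal quadtree decomposition sketch (generic technique, not the
# specific method of [19]). A block is split into four quadrants when
# its intensity variance exceeds a threshold; homogeneous blocks are
# returned as leaves of the quadtree.
import numpy as np

def quadtree_decompose(img, x=0, y=0, size=None, var_thresh=100.0, min_size=2):
    """Return homogeneous blocks as (x, y, size) tuples.

    Assumes a square image whose side is a power of two.
    """
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    # Stop splitting when the block is small or already homogeneous.
    if size <= min_size or block.var() <= var_thresh:
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_decompose(img, x + dx, y + dy, half,
                                         var_thresh, min_size)
    return leaves

# Example: an 8x8 image with a dark left half and a bright right half.
img = np.zeros((8, 8))
img[:, 4:] = 200.0
blocks = quadtree_decompose(img)
```

On this toy image the root block is inhomogeneous and splits once, after which each 4×4 quadrant is uniform, so the decomposition yields four leaf blocks. In a segmentation pipeline, such leaves are typically merged afterwards by region similarity (split-and-merge).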
“…Consequently, in comparison with CT-based liver segmentation approaches [7, …], there are fewer studies for liver image segmentation from MR datasets. The present approaches for MR-based image segmentation in the literature can be listed as fuzzy c-means classification [32,33], graph-cut [34], snakes [35], the level set method [36,37], the synchronized oscillator network [38], the active shape model [39,40], watershed [41], iterative quadtree decomposition [42], the Gaussian model and Markov random field [43], modified region growing [44], and the application of free-form registration on manually segmented CT images [45]. At present, it is clear that there is no method capable of simultaneously solving all of the problems of different modality characteristics, atypical liver shapes, and similar gray values with adjacent tissues.…”
Section: Introduction
confidence: 99%
“…Although these hybrid methods were shown to be effective in segmenting large objects, because of imaging modality differences and the notion of individual organ segmentation in our case using a specific MR sequence for the liver, a full evaluation and comparison to the methods in this work is outside the scope of their paper. In the present literature, most methods developed for automatic liver segmentation from MR images have either over-or undersegmentation or leakage problems [42][43][44], are tested with only a few datasets [34,44], or have complex calculations such as active contour-based approaches [45]. In [41], the watershed transformation and neural networks are used for liver detection without identifying the modality characteristics of the MR images used.…”
Section: Introduction
confidence: 99%