2019
DOI: 10.1016/j.array.2019.100004
A review: Deep learning for medical image segmentation using multi-modality fusion

Cited by 434 publications (221 citation statements)
References 29 publications
“…However, it should be noted that these techniques have been used for single-modal images (MRI or PET) in the majority of the work. Unfortunately, few works have exploited multimodality for the diagnosis of AD, such as [10,17,46,154,184,186,187,206,207], although diagnosis seems to be better achieved by combining the advantages of several types of images, each measuring a different type of structural or functional characteristic. In reality, the various imaging environments and modalities offer complementary information that is useful when used in conjunction.…”
Section: Critical Discussion About the Multimodal Diagnosis of AD
mentioning
confidence: 99%
“…Artificial neural networks: ANNs have been used in several works related to neuroimaging [38–46]. • Threshold-based techniques: the simplest approach converts a grayscale image to a binary image using a threshold value [68]. Pixels lighter than the threshold become white pixels in the resulting image, and darker pixels become black pixels.…”
mentioning
confidence: 99%
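The threshold-based technique described in the quote above can be sketched in a few lines. This is a minimal illustration, not code from the reviewed works; the function name and the toy image are assumptions for demonstration.

```python
import numpy as np

def binarize(image: np.ndarray, threshold: int) -> np.ndarray:
    """Threshold-based segmentation: pixels lighter than the
    threshold become white (255); darker pixels become black (0)."""
    return np.where(image > threshold, 255, 0).astype(np.uint8)

# Toy 2x2 grayscale "image" with values on both sides of the threshold
img = np.array([[10, 200],
                [128, 50]], dtype=np.uint8)
print(binarize(img, 127))  # [[0 255] [255 0]]
```

Real pipelines usually pick the threshold automatically (e.g. Otsu's method) rather than hard-coding it, but the mapping to a binary mask is the same.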
“…to realize multi-organ segmentation [17] and lesion segmentation [18,19] is widely adopted due to the distinct responses of different tissues across modality datasets. According to the review [20] of deep learning for medical image segmentation using multi-modality fusion, multi-modal segmentation network architectures can be categorized into input-level fusion networks, layer-level fusion networks, and decision fusion networks. The input-level fusion network [21,22] stacks multi-modality images channel-wise and feeds them directly into the neural network to make final decisions.…”
Section: Multi-modal Fusion
mentioning
confidence: 99%
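The input-level fusion described in the quote above, stacking co-registered modality volumes channel-wise before they enter the network, can be sketched as follows. The modality names and shapes are illustrative assumptions, not drawn from the cited works.

```python
import numpy as np

def input_level_fusion(*modalities: np.ndarray) -> np.ndarray:
    """Stack co-registered modality volumes channel-wise so the
    network receives them as a single multi-channel input.

    Each modality has shape (D, H, W); the result is (C, D, H, W),
    the usual channels-first layout for a 3D segmentation network.
    """
    return np.stack(modalities, axis=0)

# Hypothetical co-registered MRI and PET volumes of identical shape
mri = np.zeros((8, 64, 64), dtype=np.float32)
pet = np.ones((8, 64, 64), dtype=np.float32)
fused = input_level_fusion(mri, pet)
print(fused.shape)  # (2, 8, 64, 64)
```

Layer-level and decision fusion instead keep modality-specific branches and merge features or predictions later; input-level fusion is the simplest of the three because the rest of the network is unchanged.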
“…To overcome this problem, we adopted a novel 3D patch-based approach for training with weighted sampling. Zhou et al (2019) reviewed 2D and 3D patch extraction methods along with several types of loss.…”
[Figure 2 | Patch center localization by randomly selecting x, y, z coordinates in brain volume.]
Section: Patch Extraction
mentioning
confidence: 99%
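Random patch-center localization as described in the figure caption above can be sketched as follows: pick a random (x, y, z) center inside the volume, keeping it far enough from the border that a full cubic patch fits. This is an assumed minimal version, not the cited authors' implementation (which additionally uses weighted sampling).

```python
import numpy as np

def sample_patch(volume: np.ndarray, patch_size: int,
                 rng: np.random.Generator) -> np.ndarray:
    """Crop a random cubic patch from a 3D volume.

    The center coordinate along each axis is drawn uniformly from
    [half, dim - half) so the patch never crosses the volume border.
    Assumes an even patch_size for simplicity.
    """
    half = patch_size // 2
    center = [int(rng.integers(half, dim - half)) for dim in volume.shape]
    slices = tuple(slice(c - half, c + half) for c in center)
    return volume[slices]

vol = np.random.rand(32, 32, 32).astype(np.float32)
patch = sample_patch(vol, 16, np.random.default_rng(0))
print(patch.shape)  # (16, 16, 16)
```

Weighted sampling would replace the uniform draw with one biased toward, e.g., lesion voxels, but the cropping step is identical.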