2022
DOI: 10.3389/fonc.2022.950706
MM-UNet: A multimodality brain tumor segmentation network in MRI images

Abstract: The global annual incidence of brain tumors is approximately seven per 100,000 people, accounting for 2% of all tumors. The mortality rate ranks first among children under 12 years of age and tenth among adults. Therefore, the localization and segmentation of brain tumors in medical images constitute an active field of research. Traditional manual segmentation is time-consuming, laborious, and subjective. In addition, the information provided by a single image modality is often limited and cannot meet the needs of clinic…

Cited by 9 publications (1 citation statement) · References 50 publications
“…First, individual differences between different subjects are not fully considered in our scheme, which is an important issue for FBN analysis (Folville et al., 2020; Schabdach et al., 2022) and an important direction for our future improvements. Besides, since multi-modal data is taking an increasingly important place in brain analysis (Jia and Lao, 2022; Zhao et al., 2022), the performance of employing one-modal data is limited. Compared with the widely used multi-modal data model (Yu et al., 2021), how to adapt the scheme to the multi-modal data type is a key point in our future work.…”
Section: Discussion
Confidence: 99%