2023
DOI: 10.1002/mp.16695
Quantifying U‐Net uncertainty in multi‐parametric MRI‐based glioma segmentation by spherical image projection

Zhenyu Yang,
Kyle Lafata,
Eugene Vaios
et al.

Abstract: Background: Uncertainty quantification in deep learning is an important research topic. For medical image segmentation, uncertainty measurements are usually reported as the likelihood that each pixel belongs to the predicted segmentation region. In potential clinical applications, the uncertainty result reflects the algorithm's robustness and supports confidence and trust in the segmentation result when the ground truth is absent. For commonly studied deep learning models, novel methods for quanti…
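The abstract above frames uncertainty as a per-pixel likelihood attached to the predicted segmentation. As a minimal sketch of that generic idea only (not the paper's spherical-image-projection method), the following Python/PyTorch snippet estimates per-pixel uncertainty with Monte Carlo dropout, a standard baseline; the toy network, layer sizes, and sample count are illustrative assumptions, not from the paper.

```python
# Minimal sketch: per-pixel segmentation uncertainty via Monte Carlo dropout.
# Illustrates the generic "likelihood per pixel" framing from the abstract;
# it is NOT the paper's spherical-projection approach. The toy network below
# is a hypothetical stand-in for a trained U-Net.
import torch
import torch.nn as nn

class ToySegNet(nn.Module):
    """Stand-in for a trained segmentation net; dropout makes it stochastic."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1),  # 4 channels ~ multi-parametric MRI
            nn.ReLU(),
            nn.Dropout2d(p=0.5),             # kept active at inference for MC sampling
            nn.Conv2d(16, 1, 3, padding=1),  # 1-channel foreground probability map
        )

    def forward(self, x):
        return torch.sigmoid(self.body(x))

def mc_dropout_uncertainty(model, image, n_samples=20):
    """Mean foreground probability and per-pixel std over stochastic passes."""
    model.train()  # keeps dropout active; batch-norm layers would need extra care
    with torch.no_grad():
        samples = torch.stack([model(image) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

if __name__ == "__main__":
    net = ToySegNet()
    mri = torch.randn(1, 4, 64, 64)  # hypothetical 4-sequence MRI patch
    prob_map, uncertainty_map = mc_dropout_uncertainty(net, mri)
    print(prob_map.shape, float(uncertainty_map.max()))
```

The per-pixel standard deviation serves as the uncertainty map: pixels where repeated stochastic passes disagree are flagged as low-confidence, which is the role the abstract ascribes to uncertainty when ground truth is absent.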

Cited by 4 publications (1 citation statement) · References 36 publications
"…One major issue of currently available deep learning models is the lack of model explainability, that is, the extent to which the internal mechanics of a deep neural network can be explained in human terms from a clinical perspective. Explainability ensures that the networks are driven by (1) deep features that are appropriate for clinical practice and (2) decisions that are clinically defensible [39–43]. Without such model explainability, deep learning algorithms remain a "black box" in implementation.…"
Section: Introduction (mentioning)
Confidence: 99%