2018
DOI: 10.3389/fninf.2018.00082
Deep Synthesis of Realistic Medical Images: A Novel Tool in Clinical Research and Training

Abstract: Making clinical decisions based on medical images is fundamentally an exercise in statistical decision-making: the decision-maker must distinguish image features that are clinically diagnostic (i.e., signal) from a large amount of non-diagnostic features (i.e., noise). To perform this task, the decision-maker must first have learned the underlying statistical distributions of the signal and the noise. The same is true for machine learning algorithms that perform a giv…
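The abstract frames image-based diagnosis as statistical decision-making: discriminating diagnostic signal from non-diagnostic noise, given learned distributions of each. As a hedged illustration only (toy one-dimensional Gaussian classes and parameter values of our own choosing, not anything from the paper), an ideal observer can be sketched as a likelihood-ratio test:

```python
import math

# Toy illustration (not from the paper): an ideal observer deciding
# "signal present" vs "noise only" from a single scalar image feature x,
# assuming both classes are Gaussian with known, equal-variance parameters.

def gaussian_pdf(x: float, mu: float, sigma: float) -> float:
    """Density of a Gaussian with mean mu and standard deviation sigma at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def decide_signal(x: float, mu_signal: float = 1.0, mu_noise: float = 0.0,
                  sigma: float = 1.0, prior_signal: float = 0.5) -> bool:
    """Likelihood-ratio test: report 'signal' if the posterior favors it."""
    likelihood_ratio = (gaussian_pdf(x, mu_signal, sigma)
                        / gaussian_pdf(x, mu_noise, sigma))
    threshold = (1.0 - prior_signal) / prior_signal
    return likelihood_ratio > threshold
```

With equal priors and unit variance, this reduces to thresholding x at the midpoint of the two means; real medical images of course involve far higher-dimensional feature distributions than this scalar sketch.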

Cited by 4 publications (6 citation statements) · References 48 publications
“…We therefore used PVMs rather than whole breast mammograms in this experiment. 51 (c) The dissimilarity rating paradigm used in exp. 4.…”
Section: Results
confidence: 99%
“…The stimulus set consisted of a total of 32 partial view mammograms (PVMs) generated by digitally clipping the mammogram so as to fully encompass the radiologically vetted ROI, as we have described in detail before in Refs. 51 and 52 [also see Figs. 8(a) and 8(b) ].…”
Section: Methods
confidence: 99%
“…For one thing, it is self-evidently true that the complexity of medical images is much higher compared to the complexity of our images. For another, medical images have complex global information (or, in machine learning terms, ‘image grammar’ [ 45 , 46 ]) that our camouflage images lack.…”
Section: Discussion
confidence: 99%
“…Three independent radiation oncologists who had not taken part in initial contour creation were asked to rate every lymph node level segmentation in the test set on a continuous scale from 0 to 100, guided by four categories (0 – 25: complete recontouring of segmentation necessary, 26 – 50: major manual editing necessary, 51 – 75: minor manual editing necessary, > 75: segmentation clinically usable). The use of a continuous 100-point scale was based on our previous experience that a 4-point Likert scale was insufficient to assess subtle differences in expert judgement of segmentation quality and has similarly been employed by other researchers evaluating expert rating of deep learning predictions ( 35 , 36 ). The raters were instructed that they would be presented with 60 planning CT datasets with H&N lymph node level segmentations which had been created by human experts or by a deep learning autosegmentation model.…”
Section: Methods
confidence: 99%
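The quoted Methods passage defines a 100-point continuous rating scale anchored by four editing categories. A minimal sketch of that rubric as a binning function (the category label strings are our own paraphrases of the quoted descriptions, not terms from the cited study):

```python
# Hypothetical sketch of the quoted 0-100 segmentation-rating rubric.
# Bin edges follow the quoted categories (0-25, 26-50, 51-75, >75);
# the label strings are illustrative paraphrases.

def rating_category(score: float) -> str:
    """Map a 0-100 segmentation rating to its editing category."""
    if not 0.0 <= score <= 100.0:
        raise ValueError("score must be in [0, 100]")
    if score <= 25:
        return "complete recontouring necessary"
    if score <= 50:
        return "major manual editing necessary"
    if score <= 75:
        return "minor manual editing necessary"
    return "clinically usable"
```

A continuous scale binned this way preserves the coarse Likert-style categories while still allowing raters to express subtle within-category differences, which is the rationale the quoted authors give for moving beyond a 4-point scale.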