2022
DOI: 10.48550/arxiv.2206.06127
Preprint
SyntheX: Scaling Up Learning-based X-ray Image Analysis Through In Silico Experiments

Abstract: Artificial intelligence (AI) now enables automated interpretation of medical images for clinical use. However, AI's potential use for interventional images (versus those involved in triage or diagnosis), such as for guidance during surgery, remains largely untapped. This is because surgical AI systems are currently trained using post hoc analysis of data collected during live surgeries, which has fundamental and practical limitations, including ethical considerations, expense, scalability, data integrity, and …

Cited by 2 publications (3 citation statements)
References 35 publications
“…Much of this review has focused on 2D imaging modalities for exactly this reason, since the generation of endoscopic, x-ray, or US images allows for hundreds or thousands of training samples to originate from a single patient model [44,97,161]. For example, digitally reconstructed radiographs (DRRs) vary widely in visual appearance based on the position and orientation of the virtual C-arm, and techniques such as domain randomization (DR) further increase the variance of training data to improve sim-to-real transfer [209,215,216]. However, existing techniques for generating these realistic-looking images rely on 3D patient models derived from patient data such as CT, MRI, or prior endoscopic reconstructions.…”
Section: Discussion
Confidence: 99%
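The view-dependence of DRRs described in the excerpt above can be illustrated with a minimal parallel-beam sketch. This is a toy stand-in, not the paper's pipeline: real systems such as DeepDRR model full cone-beam geometry and x-ray physics, and the function name `drr` and the synthetic "patient" volume are illustrative assumptions.

```python
import numpy as np

def drr(volume, axis):
    """Parallel-beam DRR: Beer-Lambert line integral of attenuation along
    one axis of a CT-like attenuation volume (a toy stand-in for one
    virtual C-arm viewing direction)."""
    line_integral = volume.sum(axis=axis)  # integrate attenuation along rays
    return np.exp(-line_integral)          # transmitted intensity in (0, 1]

# Toy "patient": low-attenuation background with a dense off-center sphere.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = np.full((64, 64, 64), 0.002)
volume[(z - 20) ** 2 + (y - 32) ** 2 + (x - 40) ** 2 < 10 ** 2] = 0.05

# Three orthogonal virtual C-arm poses give three visually distinct images,
# so a single patient model can yield many training samples.
views = [drr(volume, axis=a) for a in range(3)]
for v in views:
    print(v.shape, float(v.min()), float(v.max()))
```

Because the sphere is off-center, the three projections place it at different image positions with different apparent sizes, which is the source of appearance variability the review points to.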
“…The advantages of DR are clearly demonstrated in Gao et al [216], which shows that physics-based x-ray synthesis using DeepDRR, combined with strong DR, is comparable to GAN-based domain adaptation and outperforms GAN-based domain adaptation with conventional DRRs, although this work is not yet peer-reviewed. This is advantageous because the image transformations involved in 'strong DR,' such as image inversion, blurring, warping, and coarse dropout, among others, are computationally inexpensive, whereas GANs require additional training with sufficient real images as an unlabeled reference.…”
Section: DeepDRR
Confidence: 91%
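The "strong DR" transformations listed in the excerpt (inversion, blurring, coarse dropout) are indeed cheap pixel-level operations. A minimal NumPy sketch follows; the function names are illustrative assumptions, not DeepDRR's API, and the blur is a simple mean filter rather than the exact augmentations used in [216].

```python
import numpy as np

rng = np.random.default_rng(0)

def invert(img):
    """Image inversion: flip intensities within the image's own range."""
    return img.max() + img.min() - img

def box_blur(img, k=3):
    """Cheap blur: k x k mean filter built from shifted copies (edges wrap)."""
    acc = np.zeros_like(img, dtype=float)
    for dy in range(-(k // 2), k // 2 + 1):
        for dx in range(-(k // 2), k // 2 + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / k ** 2

def coarse_dropout(img, n_holes=4, size=8, fill=0.0):
    """Coarse dropout: overwrite a few random square patches with a constant."""
    out = img.copy()
    h, w = img.shape
    for _ in range(n_holes):
        y = int(rng.integers(0, h - size))
        x = int(rng.integers(0, w - size))
        out[y:y + size, x:x + size] = fill
    return out

img = rng.random((64, 64))
aug = coarse_dropout(box_blur(invert(img)))
print(aug.shape)
```

Each transform touches every pixel at most a handful of times, so augmenting a batch costs far less than a GAN forward pass and, unlike GAN-based adaptation, needs no real images at training time.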