2020
DOI: 10.1007/s11548-020-02159-2
pix2xray: converting RGB images into X-rays using generative adversarial networks

Cited by 8 publications (2 citation statements)
References 15 publications
“…In another study [36], to generate new modalities, Haiderbhai et al. introduced a novel architecture based on the pix2pix model. They proposed a method of synthetic X-ray generation using conditional generative adversarial networks and created triplets of X-ray, pose, and RGB images of natural hand poses sampled from the NYU hand pose dataset.…”
Section: Image-to-image Translation
Citation type: mentioning, confidence: 99%
“…Alternatively, fast analytical simulations can also be used to generate training data in a controlled manner. For example, Haiderbhai et al. [8] proposed a method based on a generative adversarial network (GAN), a machine learning approach that can be used to create synthetic images. Images simulated using gVirtualXRay, the same X-ray generator as in our registration framework, are used to create a large database to train the GAN.…”
Section: Context: Artefacts in CT
Citation type: mentioning, confidence: 99%
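This second snippet notes that the training database comes from fast analytical simulation (gVirtualXRay) rather than real acquisitions. The sketch below only shows how such pre-rendered simulated projections could be paired with their RGB counterparts for GAN training; the directory layout and file names are hypothetical, and the gVirtualXRay rendering step itself is assumed to have already written its projections to disk as images.

```python
# Hypothetical pairing of simulated X-ray projections with RGB renders.
# Directory names and the one-file-per-sample layout are assumptions.
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision.transforms.functional import to_tensor

class SimulatedXrayPairs(Dataset):
    # Pairs <name>.png under rgb_dir with the same <name>.png under xray_dir.
    def __init__(self, rgb_dir, xray_dir, size=128):
        self.rgb_paths = sorted(Path(rgb_dir).glob("*.png"))
        self.xray_dir = Path(xray_dir)
        self.size = size

    def _load(self, path, mode):
        img = Image.open(path).convert(mode).resize((self.size, self.size))
        return to_tensor(img) * 2.0 - 1.0  # scale to [-1, 1] to match a Tanh generator

    def __len__(self):
        return len(self.rgb_paths)

    def __getitem__(self, i):
        rgb_path = self.rgb_paths[i]
        rgb = self._load(rgb_path, "RGB")                      # conditioning image
        xray = self._load(self.xray_dir / rgb_path.name, "L")  # simulated target
        return rgb, xray

# Usage: mini-batches of (RGB condition, simulated X-ray target) for GAN training.
loader = DataLoader(SimulatedXrayPairs("renders/rgb", "renders/xray"),
                    batch_size=4, shuffle=True)
```

Each batch from the loader could feed a training step like the one sketched above.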