CyTran: Cycle-Consistent Transformers for Non-Contrast to Contrast CT Translation
Preprint, 2021. DOI: 10.48550/arxiv.2110.06400

Cited by 6 publications (10 citation statements). References 0 publications.
“…The adversarial training forces the generator model to predict more realistic segmentation outcomes. In addition to the vanilla GAN, we have also utilized the CycleGAN [17], [13] approach to investigate the effect of the cycle-consistency constraint on the segmentation task. Figure 1 demonstrates the general overview of the proposed method.…”
Section: Proposed Methods (mentioning)
confidence: 99%
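As a rough illustration of the cycle-consistency constraint mentioned in the statement above, here is a minimal sketch in PyTorch. This is not the citing authors' code; the generator names `G_AB`/`G_BA` and the weighting `lambda_cyc` are assumptions, following the common CycleGAN formulation.

```python
import torch.nn as nn

def cycle_consistency_loss(G_AB, G_BA, real_A, real_B, lambda_cyc=10.0):
    """L1 cycle-consistency term used alongside the adversarial losses.

    G_AB : generator mapping domain A -> B
    G_BA : generator mapping domain B -> A
    """
    l1 = nn.L1Loss()
    # Translate A -> B -> A and B -> A -> B; both round trips
    # should reconstruct the original inputs.
    rec_A = G_BA(G_AB(real_A))
    rec_B = G_AB(G_BA(real_B))
    return lambda_cyc * (l1(rec_A, real_A) + l1(rec_B, real_B))
```

In training, this term is added to the two adversarial losses, penalizing generators whose round-trip translations drift away from the input.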
“…To embed transformers within the CycleGAN, we utilized the encoder-decoder style convolutional transformer model [13]. The premise behind this idea was that the encoder module takes the input image and decreases the spatial dimensions while extracting features with convolution layers.…”
Section: Transformer-Based CycleGAN (mentioning)
confidence: 99%
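The encoder-decoder structure described in this statement can be sketched as follows. This is an illustrative approximation, not the CyTran architecture itself: the layer widths, depths, and the use of a standard `nn.TransformerEncoder` bottleneck are all assumptions.

```python
import torch
import torch.nn as nn

class ConvDownEncoder(nn.Module):
    """Convolutional encoder: halves the spatial dims twice while widening channels."""
    def __init__(self, in_ch=1, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, kernel_size=3, stride=2, padding=1),      # H/2 x W/2
            nn.ReLU(inplace=True),
            nn.Conv2d(width, 2 * width, kernel_size=3, stride=2, padding=1),  # H/4 x W/4
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

# Transformer bottleneck over the downsampled feature map:
# flatten spatial positions into a token sequence.
enc = ConvDownEncoder()
feats = enc(torch.randn(1, 1, 64, 64))     # (1, 128, 16, 16)
tokens = feats.flatten(2).transpose(1, 2)  # (1, 256, 128): 256 tokens of dim 128
bottleneck = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=128, nhead=8, batch_first=True),
    num_layers=2,
)
out = bottleneck(tokens)                   # same shape as tokens
```

Downsampling first keeps the token sequence short (256 tokens here instead of 4096), which is what makes attention over the feature map affordable.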
“…In our work, we aim to benefit from the modeling power of transformers while being able to process reasonably downsampled ocean SAR images; to this end, we adopt a generative convolutional transformer with a manageable number of parameters called CyTran [31]. We used it in an unsupervised set-up, showing that, using SD as a preprocessing stage, we improve the SAR image descriptors, leading to an important precision boost for image retrieval.…”
Section: Transformer Models (mentioning)
confidence: 99%
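The retrieval step this statement alludes to reduces to nearest-neighbour search over the learned descriptors. A minimal sketch, with the descriptor tensors standing in for whatever the cited pipeline actually extracts:

```python
import torch
import torch.nn.functional as F

def retrieve(query_desc, gallery_descs, k=5):
    """Rank gallery images by cosine similarity to the query descriptor."""
    q = F.normalize(query_desc, dim=-1)   # (d,)
    g = F.normalize(gallery_descs, dim=-1)  # (n, d)
    scores = g @ q                        # (n,) cosine similarities
    return scores.topk(k).indices         # indices of the k best matches

# Hypothetical usage with precomputed descriptors:
gallery = torch.randn(1000, 256)
query = torch.randn(256)
top5 = retrieve(query, gallery)
```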
“…The downsampling block is followed by the convolutional transformer block, which provides an output tensor of the same size as the input tensor. The convolutional transformer block is inspired by the block proposed in [31]. More precisely, the input tensor is interpreted as a set of overlapping visual tokens.…”
Section: Unsupervised Neural Network (mentioning)
confidence: 99%
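One common way to realize the "overlapping visual tokens" described above is to extract patches with a stride smaller than the patch size, so neighbouring tokens share pixels (unlike ViT's non-overlapping patches). A sketch under that assumption, not the exact block from [31]:

```python
import torch
import torch.nn as nn

# Feature map from a downsampling block (shapes are illustrative).
x = torch.randn(1, 128, 16, 16)

# Overlapping tokenization: 3x3 patches with stride 1 and padding 1
# give one token per spatial position, each overlapping its neighbours.
unfold = nn.Unfold(kernel_size=3, stride=1, padding=1)
patches = unfold(x)                  # (1, 128*9, 256)
tokens = patches.transpose(1, 2)     # (1, 256, 1152): one token per position
proj = nn.Linear(128 * 9, 128)       # project each patch to a token embedding
tokens = proj(tokens)                # (1, 256, 128), same size as the input grid
```

Because each token still covers a 3x3 neighbourhood, the attention layers that follow see local context baked into every token, while the output grid keeps the input's spatial resolution.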
“…Moreover, using transformers has been shown to be more promising in computer vision (Dosovitskiy et al., 2020) for utilizing long-range dependencies than other, traditional CNN-based methods. In parallel, transformers with powerful global relation modeling abilities have become the standard starting point for training on a wide range of downstream medical imaging analysis tasks, such as image segmentation (Cao et al., 2021; Wang et al., 2021b; Valanarasu et al., 2021; Xie et al., 2021b), image synthesis (Kong et al., 2021; Ristea et al., 2021; Dalmaz et al., 2021), and image enhancement (Korkmaz et al., 2021; Luthra et al., 2021; Wang et al., 2021a).…”
Section: Introduction (mentioning)
confidence: 99%