2020
DOI: 10.1609/aaai.v34i04.5750
Generative Adversarial Networks for Video-to-Video Domain Adaptation

Abstract: Endoscopic videos from multiple centres often have different imaging conditions, e.g., color and illumination, which cause models trained on one domain to generalize poorly to another. Domain adaptation is one potential solution to this problem. However, few existing works have focused on the translation of video-based data. In this work, we propose a novel generative adversarial network (GAN), namely VideoGAN, to transfer video-based data across different domains. As the frames of a…

Cited by 26 publications (16 citation statements)
References 23 publications
“…applications of the existing frameworks to the domain adaptation of urban scenes, which requires rigorous preservation of image-objects. The VideoGAN [5] can maintain the image-objects, but fails to translate the image-weather: clouds are clearly visible in its cloudy-to-sunny translated results.…”
Section: Visualization of Translation Results
Confidence: 99%
“…To reduce the workload of manual annotation, we try to address the problem of content distortion in an unsupervised manner. Chen et al [5] implemented a VideoGAN to maintain the style-consistency across a driving video during unsupervised video-to-video translation. Xie et al [28] proposed an unsupervised cross-weather adaptation approach, namely OP-GAN, by integrating a self-supervised module into the CycleGAN.…”
Section: Domain Adaptation of Urban Scene
Confidence: 99%
“…The main idea here is that we use a GAN (generative adversarial network)-based model to learn the distribution behind the data and then generate new data samples to balance the dataset. A GAN is a generative model based on a neural-network structure, which is widely used in various fields [8,55]. The GAN training strategy is a game between two competing networks: the generator and the discriminator.…”
Section: Data Augmentation for Label Imbalance
Confidence: 99%
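The generator-discriminator game described in this statement can be sketched with the standard binary cross-entropy GAN losses: the discriminator is trained to output 1 on real samples and 0 on generated ones, while the generator is trained to push the discriminator's output on its samples toward 1 (the common non-saturating variant). The function names and the scalar probabilities below are illustrative assumptions, not code from the cited works:

```python
import math

def bce(p, label):
    # Binary cross-entropy for a single discriminator probability p in (0, 1)
    # against a target label of 1.0 (real) or 0.0 (fake).
    eps = 1e-12  # guards against log(0)
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))

def gan_losses(d_real, d_fake):
    """Standard GAN losses for one real/fake pair (hypothetical sketch).

    d_real: discriminator output on a real sample,
    d_fake: discriminator output on a generated sample.
    """
    # Discriminator: score real samples as 1 and fakes as 0.
    d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
    # Generator (non-saturating): make the discriminator score fakes as 1.
    g_loss = bce(d_fake, 1.0)
    return d_loss, g_loss
```

When the discriminator is confident and correct (e.g. `d_real = 0.9`, `d_fake = 0.1`), its own loss is small while the generator's loss is large, which is the pressure that drives the generator to produce more realistic samples.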