2022
DOI: 10.1016/j.knosys.2022.109065

Unsupervised domain adaptation with Joint Adversarial Variational AutoEncoder

Cited by 11 publications (3 citation statements) · References 36 publications

“…The parameters in the objective are learned through RMSProp, an adaptive learning-rate method that divides the learning rate by an exponentially decaying average of squared gradients. We choose RMSProp as the optimizer because it is a popular and effective method for setting the learning rate adaptively and is widely used for training adversarial neural networks (Dou et al, 2019; Li et al, 2022; Zhou and Pan, 2022). Adam is another widely adopted optimizer that extends RMSProp with momentum terms; however, the momentum terms may make Adam unstable (Mao et al, 2017; Luo et al, 2018; Clavijo et al, 2021).…”
Section: FairDA: Fair Classification with Domain… (mentioning)
confidence: 99%
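
A minimal sketch of the update rule this excerpt describes; the function name, decay rate, and step size are illustrative assumptions, not values from the cited papers:

```python
import numpy as np

def rmsprop_step(param, grad, sq_avg, lr=1e-3, decay=0.9, eps=1e-8):
    """One RMSProp update. sq_avg is the exponentially decaying
    average of squared gradients, E[g^2]; the raw gradient step is
    divided by its square root."""
    sq_avg = decay * sq_avg + (1.0 - decay) * grad ** 2
    param = param - lr * grad / (np.sqrt(sq_avg) + eps)
    return param, sq_avg
```

Because the denominator adapts per parameter, the effective learning rate shrinks where gradients have recently been large, which is part of why it behaves well in the noisy loss landscapes of adversarial training.
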
“…In addition, there is also a domain adaptation approach [27] that uses generative models such as CycleGAN [28] to reduce the distance between source and target domain samples in the feature space. Unsupervised domain adaptation methods based on autoencoders [29, 30] also obtain good results. These deep learning methods use an autoencoder to learn different data features and thereby improve domain transfer.…”
Section: Related Work (mentioning)
confidence: 99%
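
The autoencoder-based adaptation idea in this excerpt can be sketched as follows. This is a generic illustration, not the specific architecture of [29, 30]; the class name and layer sizes are assumptions:

```python
import torch.nn as nn

class FeatureAutoencoder(nn.Module):
    """Shared encoder/decoder: reconstructing samples from both
    domains through one encoder encourages domain-shared features."""
    def __init__(self, in_dim=256, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        z = self.encoder(x)            # shared feature space
        return self.decoder(z), z

# Usage sketch: minimize reconstruction loss on both domains, e.g.
#   x_hat_s, _ = model(x_src); x_hat_t, _ = model(x_tgt)
#   loss = mse(x_hat_s, x_src) + mse(x_hat_t, x_tgt)
```
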
“…For example, in JMVAE [17], the authors project two different views to a shared latent space with a bidirectional double VAE. In JVA²E [18], by contrast, they combine adversarial learning with the generative ability of the VAE's latent features. Other works, such as MVAE [19], combine K input views by having an individual encoder per view while sharing parameters between them, and later create a common latent-space representation by a Gaussian PoE given the private representations.…”
Section: Introduction (mentioning)
confidence: 99%
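
The Gaussian PoE combination this excerpt mentions is a precision-weighted product of the per-view posteriors. A minimal sketch, assuming per-view means and log-variances as inputs (MVAE additionally includes a standard-normal prior expert, which would be passed as an extra row here):

```python
import numpy as np

def gaussian_poe(mus, logvars):
    """Product of K Gaussian experts N(mu_i, var_i): precisions add,
    and the joint mean is the precision-weighted average of means.
    mus, logvars: arrays of shape (K, latent_dim), one row per view."""
    precision = np.exp(-np.asarray(logvars))       # 1 / var_i
    var = 1.0 / precision.sum(axis=0)              # joint variance
    mu = var * (np.asarray(mus) * precision).sum(axis=0)
    return mu, np.log(var)
```

Because the product of Gaussians is again Gaussian in closed form, this fusion step stays differentiable and adds no extra parameters beyond the per-view encoders.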