2021
DOI: 10.1016/j.patrec.2021.10.016

On the benefits of defining vicinal distributions in latent space

Citations: Cited by 2 publications (1 citation statement)
References: 16 publications
“…In contrast to transformations made directly to the observed feature space, another approach applies noise to learned representations of the observations, either by using stochastic network architectures for the classifier f_θ (Huang et al., 2016; Srivastava et al., 2014) or by perturbing observations in a learned latent space before mapping them back to the observed feature space (Liu et al., 2018; Mangla et al., 2020; Yaguchi et al., 2019).…”
Section: Structural Regularization
Confidence: 99%
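The citing statement describes the idea behind latent-space vicinal augmentation: encode an observation, perturb its latent code, map it back to the observed feature space, and train the classifier f_θ on the result. The sketch below illustrates this idea only; the autoencoder architecture, the Gaussian noise scale sigma, and the helper latent_vicinal_sample are illustrative assumptions, not the exact method of the indexed paper or of the cited works.

```python
# Minimal sketch of latent-space vicinal augmentation (illustrative, not the
# authors' exact method): encode x, add Gaussian noise to the latent code,
# decode back to feature space, and train the classifier f_theta on the result.
import torch
import torch.nn as nn


class AutoEncoder(nn.Module):
    # Architecture and dimensions are assumptions made for the example.
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))


def latent_vicinal_sample(autoencoder, x, sigma=0.1):
    """Draw a neighbour of x by perturbing its latent code, then decode."""
    with torch.no_grad():
        z = autoencoder.encoder(x)
        z_tilde = z + sigma * torch.randn_like(z)  # vicinal draw around z
        return autoencoder.decoder(z_tilde)        # map back to feature space


if __name__ == "__main__":
    ae = AutoEncoder()
    f_theta = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
    x = torch.randn(16, 784)                  # stand-in batch of observations
    y = torch.randint(0, 10, (16,))
    x_tilde = latent_vicinal_sample(ae, x)    # latent-space neighbour of x
    loss = nn.functional.cross_entropy(f_theta(x_tilde), y)
    loss.backward()
```

In this sketch the autoencoder is assumed to be pretrained and is held fixed (no gradients flow through it); only the classifier is trained on the perturbed samples.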