2019
DOI: 10.1007/978-3-030-33904-3_12
A Binary Variational Autoencoder for Hashing

Cited by 7 publications (14 citation statements)
References 10 publications
“…As in related works [17], we pose hashing as an inference problem, where the objective is to learn a probability distribution q φ (b|x) of the code b ∈ {0, 1} B corresponding to an input pattern x. This framework is based on a generative process involving two steps: (i) choose an entry of the hash table according to some probability distribution p θ (b), and (ii) sample an observation x indexed by that address according to a conditional distribution p θ (x|b).…”
Section: A Generative Model
confidence: 90%
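The two-step generative process quoted above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the code length, data dimension, and the random linear "decoder" and "encoder" maps below are hypothetical stand-ins for the learned distributions p_θ(b), p_θ(x|b), and q_φ(b|x).

```python
import numpy as np

rng = np.random.default_rng(0)
B, D = 16, 8  # code length and data dimension (hypothetical sizes)

# Step (i): choose a hash-table entry b ~ p(b), here a uniform Bernoulli prior.
b = rng.integers(0, 2, size=B)

# Step (ii): sample an observation x ~ p(x|b); a random linear map plus
# Gaussian noise stands in for the learned decoder.
W = rng.normal(size=(B, D))
x = b @ W + 0.1 * rng.normal(size=D)

# Inference q_phi(b|x): an encoder outputs a Bernoulli probability per bit;
# a random linear map with a sigmoid stands in for the learned network.
V = rng.normal(size=(D, B))
q = 1.0 / (1.0 + np.exp(-(x @ V)))  # per-bit probabilities in (0, 1)
b_hat = (q > 0.5).astype(int)       # hash code assigned to x

print(b_hat.shape)  # (16,)
```

At retrieval time, `b_hat` plays the role of the hash-table address inferred for the query pattern `x`.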
“…[3] showed that this fundamental difference between classic and variational autoencoders is relevant for hashing and yields significantly better results. Later, [17] demonstrated that the use of Bernoulli instead of Gaussian latent variables helps to reduce the quantization loss arising from the use of continuous representations. This idea is also used in [4] and extended to incorporate supervision.…”
Section: Related Work
confidence: 99%
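The quantization loss mentioned in the statement above can be made concrete with a small sketch, under the assumption (not from the source) that the continuous code is binarized by thresholding. Thresholding a Gaussian-style continuous code discards its fractional part, whereas Bernoulli samples are already binary, so no post-hoc quantization step is needed.

```python
import numpy as np

rng = np.random.default_rng(1)
B = 32  # hypothetical code length

# Continuous (Gaussian-style) latent code: binarizing it for hashing discards
# the fractional part, which is the quantization loss referred to above.
z_cont = rng.normal(size=B)
b_from_cont = (z_cont > 0).astype(float)
quant_loss = np.mean((z_cont - (2.0 * b_from_cont - 1.0)) ** 2)

# Bernoulli latent code: samples already lie in {0, 1}, so the representation
# used during training matches the one used for hashing.
p = rng.uniform(size=B)
b_bern = (rng.uniform(size=B) < p).astype(float)
```

Here `quant_loss` is strictly positive for any continuous code, while the Bernoulli code incurs none by construction.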
“…This reparametrization trick is applied by [25] to learn discrete latent variables. Another approach is based on variational inference [15]. One significant disadvantage of the above-mentioned approaches is that the relational structure in the input space is only implicitly retained in latent space.…”
Section: Autoencoders
confidence: 99%
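A common way to apply a reparametrization trick to discrete (Bernoulli) latent variables is the Gumbel-softmax (concrete) relaxation; the sketch below illustrates that idea in NumPy under the assumption that this is the relaxation meant, since the statement does not spell out which variant [25] uses. The logits and temperature are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def relaxed_bernoulli(logits, tau=0.5):
    """Relaxed Bernoulli sample via the Gumbel-softmax (concrete) trick:
    add logistic noise to the logits, then apply a temperature-scaled
    sigmoid, so the sample remains differentiable in the logits."""
    u = rng.uniform(1e-9, 1.0 - 1e-9, size=np.shape(logits))
    logistic_noise = np.log(u) - np.log(1.0 - u)
    return 1.0 / (1.0 + np.exp(-(logits + logistic_noise) / tau))

logits = rng.normal(size=8)             # hypothetical encoder outputs
soft_bits = relaxed_bernoulli(logits)   # in (0, 1), usable with backprop
hard_bits = (soft_bits > 0.5).astype(float)  # discretized at test time
```

The noise is sampled independently of the logits, so gradients can flow through the sigmoid to the encoder parameters, which is what makes training with discrete codes tractable.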