2018
DOI: 10.48550/arxiv.1805.05361
Preprint

NASH: Toward End-to-End Neural Architecture for Generative Semantic Hashing

Cited by 5 publications (9 citation statements)
References 17 publications
“…But due to the continuous nature of Gaussian random variables, a separate binarization step is required to transform the continuous latent representations into binary codes. To overcome this separate-training issue, a Bernoulli prior and posterior were proposed in NASH (Shen et al., 2018). With recent advances in gradient estimators for discrete random variables, the model successfully circumvents the gradient-backpropagation issue for discrete variables and can be trained efficiently in an end-to-end manner.…”
Section: Preliminaries on Generative Hashing for Documents
Confidence: 99%
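To make the separate binarization step concrete, below is a minimal PyTorch sketch of the two-stage approach the statement describes, in which continuous posterior means from a trained Gaussian VAE are thresholded into binary codes after the fact. The function name and the median-thresholding rule are illustrative assumptions, not the exact procedure of the cited works.

```python
import torch

def binarize_gaussian_latents(mu: torch.Tensor) -> torch.Tensor:
    """Post-hoc binarization of continuous Gaussian-VAE latents.

    mu: [num_docs, code_dim] posterior means from a trained Gaussian VAE.
    Thresholding each dimension at its median is one common heuristic
    (an assumption here); the exact rule varies across papers.
    """
    thresholds = mu.median(dim=0, keepdim=True).values  # [1, code_dim]
    return (mu > thresholds).to(torch.uint8)            # hard 0/1 codes
```

Because this step sits outside the training objective, the binary codes are never optimized directly, which is the gap the Bernoulli formulation closes.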
“…However, the two-stage training procedure is prone to undermining performance. NASH (Shen et al., 2018) tackled this issue by replacing the Gaussian prior with a Bernoulli prior in the VAE and adopting the straight-through estimator to enable end-to-end training. Since then, many methods have emerged to improve performance.…”
Section: Related Work
Confidence: 99%
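The straight-through estimator mentioned above can be sketched in a few lines of PyTorch: sample hard binary codes from the Bernoulli posterior in the forward pass, but let gradients flow through the underlying probabilities in the backward pass. This is a minimal illustration under those assumptions, not NASH's actual implementation.

```python
import torch

def straight_through_bernoulli(logits: torch.Tensor) -> torch.Tensor:
    """Sample z ~ Bernoulli(sigmoid(logits)) with straight-through gradients.

    Forward pass: hard 0/1 samples (non-differentiable).
    Backward pass: the gradient of sigmoid(logits), because the
    detached term contributes no gradient.
    """
    probs = torch.sigmoid(logits)
    hard = torch.bernoulli(probs)  # non-differentiable binary sample
    # Equals `hard` in the forward pass, but behaves like `probs`
    # under autograd, so the encoder can be trained end to end.
    return (hard - probs).detach() + probs
```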
“…The Variational Autoencoder (VAE) [21] is a powerful framework for learning stochastic representations that account for model uncertainty. While its applications have been extensively studied in the context of computer vision and NLP [33,34,45,51], its use in complex network analysis is less widely explored. Existing solutions have focused on building VAEs for the generation of a graph, but not the associated contents [22,26,36].…”
Section: Related Work
Confidence: 99%