2018
DOI: 10.1016/j.neunet.2018.07.016

Graph structured autoencoder

Cited by 45 publications (9 citation statements)
References 29 publications
“…Recently, as in many other fields, there has been increased usage of deep learning methods for discovering subspace clusters. Examples of deep learning subspace clustering algorithms include StructAE, which uses deep neural networks [27], SSC, a graph structured autoencoder [28], and SDEC that performs semi-supervised deep embedded clustering [29]. Kelkar et al provide a more complete overview of the wider subspace clustering field in a recent survey [30].…”
Section: Related Work
confidence: 99%
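The excerpt above characterizes the cited paper as a graph structured autoencoder used for subspace clustering. As a rough illustration of the general idea — an autoencoder reconstruction loss augmented with a graph-Laplacian smoothness penalty on the latent codes — here is a minimal numpy sketch. The linear encoder/decoder, the toy chain graph, and the weight λ are all hypothetical choices for illustration, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 6 samples in 4 dimensions.
X = rng.normal(size=(6, 4))

# Adjacency of a simple sample graph (here: a chain over the 6 samples).
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A  # unnormalized graph Laplacian

# Linear encoder/decoder weights (untrained; illustration only).
W_enc = rng.normal(scale=0.1, size=(4, 2))
W_dec = rng.normal(scale=0.1, size=(2, 4))

H = X @ W_enc        # latent codes, one row per sample
X_hat = H @ W_dec    # reconstruction

recon = np.sum((X - X_hat) ** 2)
# Graph regularizer tr(H^T L H): penalizes latent codes that differ
# between samples connected in the graph.
smooth = np.trace(H.T @ L @ H)

lam = 0.1  # hypothetical trade-off weight
loss = recon + lam * smooth
print(float(loss))
```

Training would then minimize this combined loss over the encoder/decoder weights, so that connected samples receive similar latent codes.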
“…In many neural network architectures, a common approach to avoid overfitting in an MLP-SDAE model is to use dropout. In deep learning, dropout is applied to the hidden or visible layers to prevent overfitting and improve the overall performance of the model [30]. A simple formulation is that every unit is kept in the network with a retention probability ρ, independent of every other unit.…”
Section: The MLP-SDAE Model With Dropout
confidence: 99%
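The retention-probability scheme described in the excerpt can be sketched as "inverted" dropout: each unit survives with probability ρ, and survivors are rescaled by 1/ρ so the expected activation is unchanged. The function name and values below are illustrative, not from the cited work:

```python
import numpy as np

def dropout(x, rho, rng):
    """Inverted dropout: keep each unit independently with probability rho,
    scaling survivors by 1/rho so the expected activation is unchanged."""
    mask = rng.random(x.shape) < rho
    return x * mask / rho

rng = np.random.default_rng(42)
h = np.ones(1000)                 # a hidden-layer activation vector
out = dropout(h, rho=0.8, rng=rng)

# Roughly a fraction rho of the units survive; the mean stays near 1.
print(float(out.mean()))
```

At test time no units are dropped, and thanks to the 1/ρ rescaling during training no further correction is needed.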
“…Furthermore, researchers found that a denoising model without deconvolutional layers, which are the transpose of convolutional layers [44], implies that the input and the output of the denoising model may have different sizes. To keep the size of denoised CT images equal to that of the input, U-net architectures are used for denoising LDCT images [45][46][47][48][49][50][51]. Shan et al [8] proposed the conveying-path-based convolutional U-net denoising model, called CPCE.…”
Section: Related Work
confidence: 99%
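The size-matching point in the last excerpt follows from the standard output-size arithmetic: a strided convolution shrinks the spatial dimension, and a transposed (deconvolutional) layer with the same kernel, stride, and padding inverts that map, restoring the original size. A small sketch with hypothetical CT-slice dimensions:

```python
def conv_out(n, k, s, p):
    """Output width of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

def deconv_out(n, k, s, p):
    """Output width of a transposed convolution: (n - 1)s - 2p + k."""
    return (n - 1) * s - 2 * p + k

n = 512                           # hypothetical CT slice width
k, s, p = 4, 2, 1                 # kernel, stride, padding
down = conv_out(n, k, s, p)       # stride-2 conv halves the size
up = deconv_out(down, k, s, p)    # matching deconv restores it
print(down, up)                   # prints "256 512"
```

Without the deconvolutional (or upsampling) path, stacked strided convolutions alone would leave the output smaller than the input, which is why U-net-style encoder–decoder pairs are used for image denoising.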