2018 9th Conference on Artificial Intelligence and Robotics and 2nd Asia-Pacific International Symposium 2018
DOI: 10.1109/aiar.2018.8769811

RDCGAN: Unsupervised Representation Learning With Regularized Deep Convolutional Generative Adversarial Networks

Cited by 337 publications (449 citation statements)
References 6 publications
“…Because small (negative) values are induced in the down-sampling model, Leaky ReLU was used in the encoder network (down-sampling) to prevent the "dead ReLU" problem. For the decoder network (up-sampling), however, the ReLU activation function still works well and has the advantages of faster computation and more non-linearity [17], [26]. For the output layer, a tanh activation function was implemented to normalize the output to the range [-1, 1], since the brightness/value of each pixel of a real image should be within [-1, 1].…”
Section: B. Pix2pix GAN-Based FD-OCT
confidence: 99%
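To make the described activation choices concrete, here is a minimal PyTorch sketch of an encoder-decoder pair; the channel counts, depth, and 256x256 input size are illustrative assumptions, not taken from the cited paper:

```python
import torch
import torch.nn as nn

# Channel counts and depth are illustrative assumptions, not from the cited paper.
encoder = nn.Sequential(                        # down-sampling path
    nn.Conv2d(3, 64, 4, stride=2, padding=1),
    nn.LeakyReLU(0.2),                          # LeakyReLU avoids "dead ReLU" units
    nn.Conv2d(64, 128, 4, stride=2, padding=1),
    nn.BatchNorm2d(128),
    nn.LeakyReLU(0.2),
)
decoder = nn.Sequential(                        # up-sampling path
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(),                                  # plain ReLU works well when up-sampling
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
    nn.Tanh(),                                  # normalize output pixels to [-1, 1]
)

x = torch.randn(1, 3, 256, 256)  # real images are assumed pre-scaled to [-1, 1]
y = decoder(encoder(x))          # y.shape == (1, 3, 256, 256), values in [-1, 1]
```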
“…DCGAN, a Generative Adversarial Network built with deep convolutional neural networks [16], is an extension of the originally introduced GAN. DCGAN can maintain stability in the training process and create high-resolution images.…”
Section: DCGAN
confidence: 99%
“…During the training process, random noise is given as the input to the generator, and the generator, built from multiple deconvolutional layers, produces sample images that look like real images. The discriminator tries to differentiate the generated images from the training-set images [16]. DCGAN uses Batch Norm [17] to normalize the extracted features and Leaky ReLU [18] to prevent dead gradients.…”
Section: DCGAN
confidence: 99%
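To make the noise-to-image pipeline concrete, below is a compact PyTorch sketch of a DCGAN-style generator and discriminator; the layer sizes, 32x32 output resolution, and 100-dimensional noise vector are illustrative assumptions rather than the cited paper's exact architecture:

```python
import torch
import torch.nn as nn

z_dim = 100  # dimensionality of the random noise input (an illustrative choice)

# Generator: transposed convolutions upsample noise to a 32x32 image;
# BatchNorm stabilizes training, tanh maps pixels to [-1, 1].
generator = nn.Sequential(
    nn.ConvTranspose2d(z_dim, 256, 4, stride=1, padding=0),  # 1x1 -> 4x4
    nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),    # 4x4 -> 8x8
    nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),     # 8x8 -> 16x16
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),       # 16x16 -> 32x32
    nn.Tanh(),
)

# Discriminator: strided convolutions downsample; Leaky ReLU prevents dead gradients.
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1),                # 32x32 -> 16x16
    nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, stride=2, padding=1),              # 16x16 -> 8x8
    nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, 4, stride=2, padding=1),             # 8x8 -> 4x4
    nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 1, 4, stride=1, padding=0),               # 4x4 -> 1x1 score
    nn.Sigmoid(),
)

z = torch.randn(8, z_dim, 1, 1)  # a batch of random noise vectors
fake = generator(z)              # (8, 3, 32, 32) synthetic images
score = discriminator(fake)      # (8, 1, 1, 1) probability each image is real
```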
“…Compared with hand-crafted descriptors such as the histogram of oriented gradients (HOG) [4] and the scale-invariant feature transform (SIFT) [5], the convolutional neural network (CNN) has been more broadly studied. Many different CNN architectures and methods have been proposed, for example AlexNet, VGGNet, GAN, ResNet, and SENet [6]-[10].…”
Section: Introduction
confidence: 99%
“…The pooling layer can reduce the computation of CNN training and highlight important features in the image. Some scholars have also designed up-sampling methods based on the characteristics of the pooling layer, which have been used to explain CNNs [17], for example in the generative adversarial network (GAN) [18] and elsewhere [8], [19]. In general, the pooling operations are maximum pooling and average pooling [20].…”
Section: Introduction
confidence: 99%
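As a small worked example of the two operations, the following PyTorch snippet (the 4x4 input values are made up for illustration) contrasts maximum pooling, which keeps the strongest response in each window, with average pooling, which smooths each window to its mean:

```python
import torch
import torch.nn as nn

# A made-up 4x4 single-channel feature map, shape (N=1, C=1, H=4, W=4).
x = torch.tensor([[[[ 1.,  2.,  3.,  4.],
                    [ 5.,  6.,  7.,  8.],
                    [ 9., 10., 11., 12.],
                    [13., 14., 15., 16.]]]])

max_pool = nn.MaxPool2d(kernel_size=2)  # keeps the strongest response per 2x2 window
avg_pool = nn.AvgPool2d(kernel_size=2)  # replaces each 2x2 window with its mean

print(max_pool(x))  # [[ 6.,  8.], [14., 16.]] -- highlights dominant features
print(avg_pool(x))  # [[ 3.5,  5.5], [11.5, 13.5]] -- averages local activations
```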