2018 IEEE 13th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP)
DOI: 10.1109/ivmspw.2018.8448850

Attention-Aware Generative Adversarial Networks (ATA-GANs)

Abstract: In this work, we present a novel approach for training Generative Adversarial Networks (GANs). Using the attention maps produced by a Teacher-Network, we are able to improve the quality of the generated images as well as perform weak object localization on the generated images. To this end, we generate images of HEp-2 cells captured with Indirect Immunofluorescence (IIF) and study the ability of our network to perform weak localization of the cell. Firstly, we demonstrate that whilst GANs can learn the mapp…
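The abstract describes guiding a GAN discriminator with attention maps produced by a Teacher-Network. The snippet below is a minimal sketch of that general idea, assuming an activation-based attention map and an attention-matching penalty added to the discriminator loss; the names (`attention_map`, `lambda_att`) and the exact form of the penalty are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch (not the authors' exact objective): regularize a GAN
# discriminator so that its spatial attention resembles that of a
# pretrained teacher network.
import torch
import torch.nn.functional as F

def attention_map(features: torch.Tensor) -> torch.Tensor:
    """Activation-based spatial attention: mean of squared channel
    activations, flattened and L2-normalized per sample."""
    att = features.pow(2).mean(dim=1)   # (N, H, W)
    att = att.flatten(1)                # (N, H*W)
    return F.normalize(att, p=2, dim=1)

def attention_transfer_loss(teacher_feats: torch.Tensor,
                            disc_feats: torch.Tensor) -> torch.Tensor:
    """Match the discriminator's attention to the teacher's attention.
    Feature maps are resized to a common spatial size before comparison."""
    h, w = teacher_feats.shape[-2:]
    disc_feats = F.interpolate(disc_feats, size=(h, w),
                               mode="bilinear", align_corners=False)
    return (attention_map(teacher_feats)
            - attention_map(disc_feats)).pow(2).mean()

# Inside the discriminator update (sketch; lambda_att is a hypothetical weight):
#   d_loss = adversarial_loss + lambda_att * attention_transfer_loss(t_feats, d_feats)
```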

Cited by 37 publications (29 citation statements)
References: 11 publications
“…Later, the attention mechanism was introduced into GANs for image conversion. In [24], the authors used ResNet-18 as a teacher network to train the discriminator of the GAN, where the teacher taught the discriminator where to focus on the generated image. In [25], researchers proposed a model with an attention GAN for image-to-image translation.…”
Section: Image-to-Image Translation Using GANs
confidence: 99%
“…Figure 4 shows the structure of the discriminator, which is a patch-based discriminator introduced in [51] that we modified following [25]. In [24], the authors used ResNet-18 [52] as a teacher network to generate attention maps that teach the discriminators where to focus. Inspired by [24], we use ResNet-18 as a teacher network in our model to train the generators where to focus.…”
Section: Architecture of the Proposed Model
confidence: 99%
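The excerpts above note that a ResNet-18 serves as the teacher network producing the attention maps. Below is a hedged sketch of one way such maps could be extracted with a forward hook in PyTorch; the choice of `layer4`, the ImageNet weights, and the squared-activation pooling are assumptions for illustration, not the configuration used in [24].

```python
# Sketch: extract intermediate features from a pretrained ResNet-18 and
# pool them into a single-channel attention map per image.
import torch
import torchvision.models as models

teacher = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()

captured = {}
def _hook(module, inputs, output):
    captured["feats"] = output          # (N, 512, H/32, W/32) for layer4

handle = teacher.layer4.register_forward_hook(_hook)

with torch.no_grad():
    images = torch.randn(4, 3, 224, 224)  # placeholder batch
    teacher(images)

# Squared-activation pooling over channels yields a spatial attention map.
teacher_attention = captured["feats"].pow(2).mean(dim=1, keepdim=True)  # (N, 1, h, w)
handle.remove()
```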
“…They used a U-Net generator based on a modified framework that combines both pix2pix [40] and ACGAN [41] with a transfer-learning technique to boost segmentation performance across different cell modalities. In the field of HEp-2 cell image synthesis, Kastaniotis et al. [42] proposed using a Teacher-Network to guide the attention maps in the discriminator's hidden layers within the DCGAN [10] framework to improve the quality of the generated HEp-2 cell images. However, no evaluation measures were applied in their study.…”
Section: GANs for HEp-2 Image Classification
confidence: 99%
“…To attain this, they proposed a Teacher-Student training model as well as a unique kind of Soft Class Activation Maps. This scheme permits the discriminator to create weak annotations of the generated images, which can be used for their automatic annotation [7].…”
Section: Related Work
confidence: 99%
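The excerpt above refers to Soft Class Activation Maps used for weak annotation of the generated images. As a point of reference only, the sketch below computes a standard Class Activation Map (final conv features weighted by the classifier weights of a network ending in global average pooling, as in ResNet-18); the "soft" variant used in ATA-GANs differs, and `class_activation_map` and its arguments are illustrative.

```python
# Sketch of a standard Class Activation Map (CAM) for weak localization.
import torch
import torch.nn.functional as F

def class_activation_map(feats: torch.Tensor,
                         fc_weight: torch.Tensor,
                         class_idx: int,
                         out_size=(224, 224)) -> torch.Tensor:
    """feats: (N, C, h, w) final conv features; fc_weight: (num_classes, C)."""
    w = fc_weight[class_idx].view(1, -1, 1, 1)   # (1, C, 1, 1)
    cam = (feats * w).sum(dim=1, keepdim=True)   # (N, 1, h, w)
    cam = F.relu(cam)
    cam = F.interpolate(cam, size=out_size, mode="bilinear", align_corners=False)
    # Normalize each map to [0, 1] so it can be thresholded for weak annotation.
    cam_min = cam.amin(dim=(2, 3), keepdim=True)
    cam_max = cam.amax(dim=(2, 3), keepdim=True)
    return (cam - cam_min) / (cam_max - cam_min + 1e-8)
```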