2021
DOI: 10.1007/s00138-021-01246-x

FPANet: Feature-enhanced position attention network for semantic segmentation

Cited by 7 publications (2 citation statements)
References 32 publications
“…Zhang et al. 45 first proposed applying the self-attention module to the GAN and also discussed which layer of the network the attention mechanism should be placed in to obtain better results. FPANet 46 designed a feature integration module to form a feature-enhanced position attention module, which enhances the discriminability of features. Chen et al. 47 proposed a self-attention mechanism for single-image generative adversarial networks and discussed how the model changes when the self-attention mechanism is placed at different positions in the generator.…”
Section: Self-attention Mechanism
confidence: 99%
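Both citation statements describe the same family of designs: a self-attention block that weights every spatial position of a feature map by its affinity to every other position. As a point of reference, below is a minimal sketch of a SAGAN/DANet-style position-attention block of that kind; the class name, channel-reduction factor, and layer layout are illustrative assumptions, not the exact FPANet architecture.

```python
# Minimal sketch of a position-attention (spatial self-attention) block.
# NOTE: names and the reduction factor are illustrative assumptions,
# not the FPANet design itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionAttention(nn.Module):
    """Self-attention over the spatial positions of a feature map."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # 1x1 convolutions produce query/key/value projections.
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Learnable scale: the block starts as an identity mapping and
        # gradually blends in the attention output during training.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C/r)
        k = self.key(x).flatten(2)                     # (B, C/r, HW)
        v = self.value(x).flatten(2)                   # (B, C, HW)
        attn = F.softmax(q @ k, dim=-1)                # (B, HW, HW) position affinities
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual blend

# Shape-preserving: PositionAttention(64)(torch.randn(2, 64, 32, 32))
# returns a tensor of shape (2, 64, 32, 32).
```

Because the block preserves the input shape, it can be dropped after any convolutional layer, which is exactly the placement question the cited works discuss.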
“…45 first proposed applying the self-attention module to the GAN and also discussed which layer of the network the attention mechanism should be placed in to obtain better results. FPANet 46 designed a feature integration module to form a feature-enhanced position attention module, which enhances the discriminability of features. Chen et al.…”
Section: Related Work
confidence: 99%