2021
DOI: 10.1016/j.eswa.2021.115090

DSANet: Dilated spatial attention for real-time semantic segmentation in urban street scenes

Cited by 79 publications (18 citation statements)
References 15 publications

“…Various kinds of deep learning models [4,5,11,33,40] have been proposed to efficiently resolve visual applications with multi-task learning setups such as semantic segmentation and depth estimation. Khattar et al. [16] propose a multi-task learning framework in which domain-agnostic features are learned to improve model performance on both object detection and saliency prediction tasks with limited data.…”
Section: Motivation and Challenges
confidence: 99%
“…Feature Pyramid Network [18] aggregates the multi-scale feature maps in a top-down fashion with progressive upsampling. Other networks such as BiSeNet [19], ContextNet [20], GUN [21], and DSANet [22] utilize a detail branch to capture low-level details in shallow layers.…”
Section: B. Multi-scale and Context Aggregation
confidence: 99%
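
The top-down aggregation with progressive upsampling described in the quoted passage can be illustrated with a short PyTorch-style sketch. The channel widths, the 1x1 lateral projections, and the 3x3 smoothing layers below are illustrative assumptions, not the exact configuration of FPN [18] or of the detail branches in BiSeNet, ContextNet, GUN, or DSANet.

# Minimal sketch of FPN-style top-down aggregation with progressive upsampling.
# Channel counts and layer choices are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownAggregation(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=128):
        super().__init__()
        # 1x1 lateral convolutions project every backbone stage to a common width.
        self.laterals = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        # 3x3 smoothing convolutions applied after each merge step.
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
            for _ in in_channels[:-1]
        )

    def forward(self, feats):
        # feats: backbone feature maps ordered from high to low resolution.
        laterals = [lat(f) for lat, f in zip(self.laterals, feats)]
        out = laterals[-1]  # start from the coarsest, most semantic map
        outputs = [out]
        # Progressively upsample and fuse with the next finer lateral map.
        for i in range(len(laterals) - 2, -1, -1):
            out = F.interpolate(out, size=laterals[i].shape[-2:],
                                mode="bilinear", align_corners=False)
            out = self.smooth[i](out + laterals[i])
            outputs.insert(0, out)
        return outputs  # multi-scale maps, finest first

# Usage with dummy backbone features at strides 4, 8, 16, 32 of a 512x512 input.
if __name__ == "__main__":
    feats = [torch.randn(1, c, 512 // s, 512 // s)
             for c, s in zip((256, 512, 1024, 2048), (4, 8, 16, 32))]
    outs = TopDownAggregation()(feats)
    print([o.shape for o in outs])
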
“…Work [52] designs a lightweight dual attention structure that uses separable convolutions to simplify attention modeling across the spatial and channel dimensions. The attention module introduced by DSANet [53] is also based on a dual attention structure, with a dilated spatial attention module and a dilated channel attention module. In this paper, we propose a Swap Attention Module, a spatial attention module that effectively fuses the features of the spatial detail branch and the semantic branch.…”
Section: Related Work
confidence: 99%
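
The quoted passage does not spell out the internals of DSANet's dilated spatial attention, so the following is only a minimal sketch of a spatial attention block built from parallel dilated convolutions. The kernel sizes, the dilation rates (1, 2, 4), and the sigmoid gating are assumptions made for illustration and are not taken from the DSANet paper.

# Minimal sketch of a spatial attention block using dilated convolutions.
# Design details below (dilation rates, channel reduction, sigmoid gate)
# are assumptions, not DSANet's actual module.
import torch
import torch.nn as nn

class DilatedSpatialAttention(nn.Module):
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        # Parallel dilated 3x3 convolutions enlarge the receptive field cheaply.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels // 4, kernel_size=3,
                      padding=d, dilation=d) for d in dilations
        )
        # Fuse the branches into a single-channel spatial attention map.
        self.fuse = nn.Conv2d(len(dilations) * (channels // 4), 1, kernel_size=1)

    def forward(self, x):
        ctx = torch.cat([b(x) for b in self.branches], dim=1)
        attn = torch.sigmoid(self.fuse(ctx))  # (N, 1, H, W) attention weights
        return x * attn                       # reweight features spatially

# Usage on a dummy feature map.
if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 64)
    y = DilatedSpatialAttention(64)(x)
    print(y.shape)  # torch.Size([2, 64, 32, 64])
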