2021
DOI: 10.48550/arxiv.2110.14587
Preprint
Boundary Guided Context Aggregation for Semantic Segmentation

Abstract: Recent studies on semantic segmentation have begun to notice the significance of boundary information, where most approaches treat boundaries as a supplement to semantic details. However, simply combining boundaries with the mainstream features cannot ensure a holistic improvement of semantic modeling. In contrast to previous studies, we exploit boundaries as significant guidance for context aggregation to promote the overall semantic understanding of an image. To this end, we propose a Boundary gu…

Cited by 7 publications (10 citation statements)
References 30 publications
“…Through CAAM, the model not only identifies fine structures and patterns within the image more accurately but also maintains high performance in challenging scenarios [43]. The introduction of this approach highlights the paramount importance of intelligently aggregating contextual information in image processing.…”
Section: Context Aggregation Attention Mechanism (mentioning)
confidence: 99%
“…The self-attention mechanism works by calculating the relative importance of, and establishing an association between, one pixel and all the other pixels, rather than relying only on elements in adjacent positions, which aids in effectively capturing the long-range dependencies between pixels [50]. The multihead attention mechanism is developed on the basis of self-attention and enhances the expressiveness and generalization ability of the model [51]. The channel attention mechanism operates by assessing the importance of each channel, and it generates more representative features.…”
Section: Feature Fusion Module Based on Attention Mechanism (mentioning)
confidence: 99%
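
The excerpt above describes pixel-level self-attention and its multi-head extension. The following is a minimal PyTorch sketch of that idea (hypothetical module name and shapes; nn.MultiheadAttention stands in for the cited papers' exact implementations): every spatial position is treated as a token that attends to every other position, which is how the long-range dependencies mentioned in [50] are captured.

import torch
import torch.nn as nn

class PixelSelfAttention(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # channels must be divisible by num_heads; multi-head attention
        # as in [51] splits the channel dimension across heads.
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map; flatten space so each pixel is a token.
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)       # (B, H*W, C)
        out, _ = self.attn(tokens, tokens, tokens)  # all-pairs pixel attention
        return out.transpose(1, 2).reshape(b, c, h, w)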
“…Considering the advantages of the attention mechanism, and inspired by the multihead feed-forward transfer attention module [29] and the boundary-guided context aggregation module [51], which uses the multihead attention mechanism to fuse the feature maps of different convolution layers, we have designed the feature fusion module based on the attention mechanism (FFMAM) to promote the mutual guidance of the two branches, integrate the extracted features, and explore the relationships between channels. As the left side of Figure 5 shows, the feature tensor X ∈ ℝ^(C×H×W), derived from the convolution branch, is used to generate the key vector (Key) and the value vector (Value) through different reshape modes; meanwhile, the feature tensor Y ∈ ℝ^(C×H×W), acquired from the Swin transformer branch, is employed to generate the query vector (Query).…”
Section: Feature Fusion Module Based on Attention Mechanism (mentioning)
confidence: 99%
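
Below is a minimal sketch of the cross-branch attention this excerpt describes (hypothetical module name and shapes, not the authors' exact FFMAM): Key and Value are derived from the convolution-branch tensor X, while Query comes from the Swin-transformer-branch tensor Y, so each branch guides the other.

import torch
import torch.nn as nn

class CrossBranchFusion(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # x, y: (B, C, H, W) from the convolution and Swin branches respectively.
        b, c, h, w = x.shape
        kv = x.flatten(2).transpose(1, 2)  # Key/Value from the convolution branch
        q = y.flatten(2).transpose(1, 2)   # Query from the Swin transformer branch
        fused, _ = self.attn(q, kv, kv)    # (B, H*W, C) fused representation
        return fused.transpose(1, 2).reshape(b, c, h, w)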
“…Liang et al (2021) enabled the network to capture both local and global features of indoor building-structure point clouds through a newly designed multi-level hierarchical feature decoding method, enabling automatic and efficient semantic segmentation. Ma et al (2021) proposed a boundary-guided context aggregation network, designing a boundary extractor for accurate boundary detection. This context aggregation approach helps capture the long-range correlation between pixels in the boundary region and pixels inside the object, improving intra-class consistency.…”
Section: Related Work (mentioning)
confidence: 99%
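
As a rough illustration of the boundary-guided context aggregation this excerpt summarizes (a hypothetical sketch, not Ma et al.'s exact module), a predicted boundary map can reweight the Key/Value features so that every pixel aggregates context preferentially from boundary regions, linking object interiors to their boundaries and improving intra-class consistency.

import torch
import torch.nn as nn

class BoundaryGuidedAggregation(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, feat: torch.Tensor, boundary: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) semantic features; boundary: (B, 1, H, W) in [0, 1].
        b, c, h, w = feat.shape
        q = self.q(feat).flatten(2).transpose(1, 2)             # (B, HW, C)
        k = self.k(feat * boundary).flatten(2)                  # (B, C, HW)
        v = self.v(feat * boundary).flatten(2).transpose(1, 2)  # (B, HW, C)
        # Each pixel attends to boundary-weighted features everywhere else.
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)          # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return feat + out                                       # residual fusion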