2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
DOI: 10.1109/cvpr.2018.00747

Context Encoding for Semantic Segmentation

Abstract: Recent work has made significant progress in improving spatial resolution for pixelwise labeling with the Fully Convolutional Network (FCN) framework by employing Dilated/Atrous convolution, utilizing multi-scale features, and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding M…
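The module described above re-weights feature channels using a global context vector. Below is a minimal numpy sketch of that channel-gating idea only: the paper's learned encoding layer (a dictionary of codewords with soft-assignment) is replaced here by plain global average pooling for illustration, and the parameter names `w` and `b` are hypothetical, not from the paper.

```python
import numpy as np

def context_gated_features(features, w, b):
    """Sketch: scale each channel by a gate derived from global context.

    features: (C, H, W) feature maps from a backbone.
    w: (C, C) weight matrix, b: (C,) bias -- stand-ins for the module's
    learned fully-connected layer (illustrative names, not the paper's API).
    """
    # Crude global context: per-channel spatial mean (the real module
    # uses a learned encoding over codewords instead).
    context = features.mean(axis=(1, 2))            # (C,)
    # Sigmoid gate produces one scaling factor per channel.
    gamma = 1.0 / (1.0 + np.exp(-(w @ context + b)))  # (C,)
    # Selectively highlight class-dependent feature maps.
    return features * gamma[:, None, None]
```

With zero weights the gate is sigmoid(0) = 0.5 for every channel, so all feature maps are uniformly halved; training would instead learn to emphasize channels for classes likely present in the scene.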

Cited by 1,356 publications (955 citation statements)
References 62 publications (136 reference statements)
“…Following previous works [16,23], we choose the most frequent 59 classes plus one background class (i.e., 60 classes in total) in our experiments. As no test server is available, we follow previous works [16,34,3,21,37] and report our result on the val set.…”
Section: Methods (mentioning, confidence: 99%)
“…Finally, we compare our proposed decoder with the vanilla bilinear decoder on the Cityscapes val set.

Method                         mIoU (%)
PSPNet [36]                    85.4
DeepLabv3 [4]                  85.7
EncNet [34]                    85.9
DFN [32]                       86.2
IDW-CNN [27]                   86.3
CASIA IVA SDN [9]              86.6
DIS [22]                       86.8
DeepLabv3+ [5] (Xception-65)   87.8
Our proposed (Xception-65)     88.1

Following [5], Xception-71 is used as our backbone and the number of iterations is increased to 90k with an initial learning rate of 0.01. As shown in Table 4, under the same training and testing settings, our proposed decoder achieves performance comparable to the vanilla one while using much less computation.…”
Section: Comparison with the Vanilla Bilinear Decoder (mentioning, confidence: 99%)
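The "vanilla bilinear decoder" referenced in this snippet simply upsamples coarse score maps with bilinear interpolation. As a self-contained illustration (not the cited paper's implementation), here is a numpy sketch for a single-channel map using the half-pixel-center sampling convention; the function name and signature are assumptions for this example.

```python
import numpy as np

def bilinear_upsample(x, scale):
    """Upsample a (H, W) score map by an integer factor via bilinear
    interpolation, sampling at half-pixel centers with edge clamping."""
    H, W = x.shape
    out_h, out_w = H * scale, W * scale
    # Source coordinates of each output pixel center.
    ys = (np.arange(out_h) + 0.5) / scale - 0.5
    xs = (np.arange(out_w) + 0.5) / scale - 0.5
    # Neighboring source pixels, clamped to the image border.
    y0 = np.clip(np.floor(ys).astype(int), 0, H - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 1)
    y1 = np.clip(y0 + 1, 0, H - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    # Interpolation weights (clamped so border samples repeat edge values).
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    # Blend horizontally on the two source rows, then vertically.
    top = x[y0][:, x0] * (1 - wx) + x[y0][:, x1] * wx
    bot = x[y1][:, x0] * (1 - wx) + x[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

A decoder like this is parameter-free, which is why the snippet's comparison is about matching its accuracy at lower computational cost rather than adding capacity.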