PSANet: Point-wise Spatial Attention Network for Scene Parsing
2018 · DOI: 10.1007/978-3-030-01240-3_17

Cited by 1,005 publications (597 citation statements)
References 34 publications
“…3.2, APNB is much more efficient than a standard non-local block. We hereby give a quantitative efficiency comparison between our APNB and a generic non-local block in the following aspects: GFLOPs, GPU memory (MB), and GPU computation time (ms). Besides the comparison of single-block efficiency, we also provide a whole-network efficiency comparison with the two most advanced methods, PSANet [48] and DenseASPP [36], in terms of inference time (s), GPU occupation with batch size set to 1 (MB), and the number of parameters (million). According to Tab.…”
Section: Efficiency Comparison With Non-local Block
confidence: 99%
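
The statement above contrasts a standard non-local block, whose pixel-to-pixel affinity matrix is N × N for N = H × W, with APNB, which pools keys and values down to a small set of S anchor points so the matrix shrinks to N × S. Below is a minimal PyTorch sketch of the two designs for side-by-side profiling; it is not the authors' code, and the channel sizes and pooling grid are illustrative assumptions.

```python
# Minimal sketch (not the cited authors' code) contrasting the attention cost
# of a standard non-local block with an APNB-style variant that sub-samples
# keys/values to S anchor points via adaptive pooling. Shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalAttention(nn.Module):
    """Standard non-local block: the affinity matrix is N x N, N = H*W."""
    def __init__(self, in_ch, key_ch):
        super().__init__()
        self.query = nn.Conv2d(in_ch, key_ch, 1)
        self.key = nn.Conv2d(in_ch, key_ch, 1)
        self.value = nn.Conv2d(in_ch, in_ch, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)     # (b, n, key_ch)
        k = self.key(x).flatten(2)                       # (b, key_ch, n)
        v = self.value(x).flatten(2).transpose(1, 2)     # (b, n, c)
        attn = torch.softmax(q @ k, dim=-1)              # (b, n, n): the O(N^2) cost
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return out + x

class AnchorSampledAttention(nn.Module):
    """APNB-style variant: keys/values are pooled to S anchors, shrinking
    the affinity matrix to N x S with S << N."""
    def __init__(self, in_ch, key_ch, pool_sizes=(1, 3, 6, 8)):
        super().__init__()
        self.query = nn.Conv2d(in_ch, key_ch, 1)
        self.key = nn.Conv2d(in_ch, key_ch, 1)
        self.value = nn.Conv2d(in_ch, in_ch, 1)
        self.pool_sizes = pool_sizes  # pooled grids; S = sum(p*p for p in pool_sizes)

    def _sample(self, t):
        # Flatten each pooled grid and concatenate along the spatial axis.
        return torch.cat(
            [F.adaptive_avg_pool2d(t, p).flatten(2) for p in self.pool_sizes], dim=2)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)     # (b, n, key_ch)
        k = self._sample(self.key(x))                    # (b, key_ch, s)
        v = self._sample(self.value(x)).transpose(1, 2)  # (b, s, c)
        attn = torch.softmax(q @ k, dim=-1)              # (b, n, s): far smaller
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return out + x
```

Instantiating both modules on the same input (e.g. `torch.randn(1, 512, 97, 97)`) and timing them reproduces the kind of GFLOPs / GPU memory / latency comparison the quoted statement describes.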
“…The results and comparison are illustrated in Table 3. ACFNet, which uses only train-fine data, outperforms the previous work PSANet [48] by about 2.2% and is even better than most methods that also employ the validation set for training. While using both train-fine and val-fine data for training, ACFNet outperforms the previous methods [41,48,43,42] by a large margin and achieves a new state-of-the-art of 81.85% mIoU.…”
Section: Comparing With the State-of-the-art
confidence: 86%
“…Besides, our special thanks go to Yuchen Sun, Xueyu Song, Ru Zhang, Yuhui Yuan and the anonymous reviewers for the discussion and their helpful advice.…”
[Table fragment spilled into the snippet: Cityscapes per-class header (road, sidewalk, building, wall, fence, pole, traffic light, traffic sign, vegetation, terrain, sky, person, rider, car, truck, bus, train, motorcycle, bicycle) with overall scores PSPNet† [47] 78.4, PSANet† [48] 78.6, and [42] 78.9; the per-class values were not recovered.]
Section: Acknowledgment
confidence: 99%
“…Method                        Mean IoU (%)
   Deeplab-v2 [9]                70.4
   RefineNet-Res101 [43]         73.6
   DSSPN-Universal [41]          76.6
   GCN [56]                      76.9
   DepthSet [35]                 78.2
   PSPNet [75]                   78.4
   AAF [34]                      79.1
   DFN [72]                      79.3
   PSANet [76]                   80.1
   DenseASPP-DenseNet161 [71]    80.6…”
Section: Methods and Mean IoU
confidence: 99%
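
The mean IoU figures above follow the standard semantic-segmentation metric: per-class intersection-over-union averaged over the 19 Cityscapes classes. A minimal sketch of that computation, assuming predictions and ground truth arrive as integer label maps (helper names here are illustrative, not from any cited codebase):

```python
import numpy as np

def confusion_matrix(pred, target, num_classes=19):
    """Accumulate a num_classes x num_classes confusion matrix from
    flattened integer label maps; labels outside [0, num_classes) are ignored."""
    mask = (target >= 0) & (target < num_classes)
    idx = num_classes * target[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def mean_iou(conf):
    """mIoU = mean over classes of TP / (TP + FP + FN)."""
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp  # predicted as class c, ground truth is another class
    fn = conf.sum(axis=1) - tp  # ground truth is class c, predicted as another class
    iou = tp / np.maximum(tp + fp + fn, 1)  # guard against empty classes
    return float(iou.mean())
```

Accumulating `confusion_matrix` over every validation image and then calling `mean_iou` once yields the single percentage each row of the table reports.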