2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020
DOI: 10.1109/cvpr42600.2020.00385
BANet: Bidirectional Aggregation Network With Occlusion Handling for Panoptic Segmentation

Cited by 76 publications (37 citation statements)
References 31 publications
“…Panoptic segmentation was first introduced by Kirillov et al. [23], who treated countable instance things and uncountable stuff as one visual recognition task [22,23,44]. Chen et al. [6] improved panoptic segmentation quality with a bidirectional path between the semantic and instance segmentation branches. Wu et al. [43] constructed a modular graph structure to reason about their relations.…”
Section: Related Work
confidence: 99%
“…From the view of instance representation, previous work mainly formulates things and stuff from different perspectives. Foreground things are usually separated and represented with boxes [2], [31], [32], [33] or aggregated according to center offsets [5], while background stuff is often predicted with a parallel FCN [8] branch. Although the methods of [6], [23] represent things and stuff uniformly, the inherent ambiguity cannot be resolved well with pixel-level affinity alone, which yields a performance drop in complex scenarios.…”
Section: Related Work
confidence: 99%
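The center-offset aggregation mentioned above can be sketched as follows. This is a minimal illustration of the general bottom-up idea (each "thing" pixel votes for an instance center by adding its predicted offset to its coordinates, then is assigned to the nearest detected center), not BANet's or any cited paper's actual implementation; the function name, array shapes, and the pre-detected `centers` input are assumptions for the sketch.

```python
import numpy as np

def group_by_center_offsets(thing_mask, offsets, centers):
    """Assign each 'thing' pixel to its nearest voted instance center.

    thing_mask : (H, W) bool   -- pixels of countable ('thing') classes
    offsets    : (H, W, 2)     -- predicted (dy, dx) to the instance center
    centers    : (K, 2) float  -- detected instance centers as (y, x)
    Returns an (H, W) int map: 0 for stuff pixels, 1..K for instance ids.
    """
    H, W = thing_mask.shape
    ys, xs = np.nonzero(thing_mask)
    # Each thing pixel votes for a center: its own location plus its offset.
    votes = np.stack([ys, xs], axis=1).astype(float) + offsets[ys, xs]
    # Euclidean distance from every vote to every detected center.
    dists = np.linalg.norm(votes[:, None, :] - centers[None, :, :], axis=2)
    ids = np.argmin(dists, axis=1) + 1  # instance ids start at 1
    out = np.zeros((H, W), dtype=int)
    out[ys, xs] = ids  # stuff pixels stay 0 (handled by the semantic branch)
    return out

# Toy example: a 1x4 strip with two centers at the ends; the two left
# pixels point left, the two right pixels point right.
thing_mask = np.ones((1, 4), dtype=bool)
offsets = np.array([[[0, 0], [0, -1], [0, 1], [0, 0]]], dtype=float)
centers = np.array([[0, 0], [0, 3]], dtype=float)
print(group_by_center_offsets(thing_mask, offsets, centers))  # [[1 1 2 2]]
```

In the parallel-branch designs the excerpt describes, a map like this from the instance branch would then be fused with the semantic ("stuff") prediction to form the final panoptic output.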
“…The earlier work [24] directly combines predictions of things and stuff from different models. To reduce computational cost, many subsequent works [7,23,27,30,41,61,65] model both stuff segmentation and thing segmentation in one model with different task heads. Detection-based methods [20,23,31,42,62] usually represent things via box predictions, while several bottom-up models [8,14,53,64] perform instance grouping via pixel-level affinities or instance centers derived from semantic segmentation results.…”
Section: Related Work
confidence: 99%