2019 IEEE International Conference on Multimedia and Expo (ICME) 2019
DOI: 10.1109/icme.2019.00263
Context-Constrained Accurate Contour Extraction for Occlusion Edge Detection

Abstract: Occlusion edge detection requires both accurate contour locations and context constraints. Existing CNN-based pipelines do not use adaptive methods to filter the noise introduced by low-level features. To address this dilemma, we propose a novel Context-constrained accurate Contour Extraction Network (CCENet). Spatial details are retained and contour-sensitive context is augmented through two extraction blocks, respectively. Then, an elaborately designed fusion module integrates feat…

Cited by 7 publications (4 citation statements)
References 16 publications
“…Specifically, for the edge path (see Sec.3.1), a structure similar to [15] is employed to extract consistent and accurate occlusion edge, which is fundamental for occlusion reasoning. For the orientation path (see Sec.3.2), to learn more sufficient cues near the boundary for occlusion reasoning, the high-level bilateral feature is obtained, and a Multi-rate Context Learner (MCL) is proposed to extract the feature (see Sec.3.2.1).…”
Section: OFNet (mentioning, confidence: 99%)
“…We adopt the module proposed in [15], which has a high capability to capture accurate location cue and sensitive perception of the entire object. In [15], the low-level cue from the first three side-outputs preserves the original size of the input image and encodes abundant spatial information. Without losing resolution, the large receptive field is achieved via dilated convolution [35] after res50 [9].…”
Section: Edge Path (mentioning, confidence: 99%)
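The excerpt above notes that dilated convolution after res50 yields a large receptive field without losing resolution. A minimal sketch of that arithmetic (kernel size, dilation rates, and feature-map size are illustrative assumptions, not values taken from the paper):

```python
# Sketch: effective receptive field of a dilated convolution, and why
# "same"-style padding keeps the feature map at its original resolution.
# All parameter values below are illustrative assumptions.

def effective_kernel(k, dilation):
    """Effective extent of a k x k conv when its taps are spread by `dilation`."""
    return k + (k - 1) * (dilation - 1)

def output_size(n, k, dilation, stride=1):
    """Output side length with padding p = dilation * (k - 1) // 2 (stride 1 keeps n)."""
    pad = dilation * (k - 1) // 2
    return (n + 2 * pad - effective_kernel(k, dilation)) // stride + 1

if __name__ == "__main__":
    n = 56  # hypothetical side length of a mid-level feature map
    for d in (1, 2, 4):
        # Receptive field grows (3 -> 5 -> 9) while the output stays 56 x 56.
        print(f"dilation={d}: effective kernel={effective_kernel(3, d)}, "
              f"output={output_size(n, 3, d)}")
```

Increasing the dilation rate widens the effective kernel (3, 5, 9 for rates 1, 2, 4) while the output side length stays fixed, which is the trade-off the excerpt attributes to dilated convolution.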
“…In our network, as shown in Fig. 2, we use ResNet [6] as a base feature extraction model as in previous works [18,11,10]. According to the size of the feature maps, this model is divided into five stages (see Fig.…”
Section: Dual-path Decoder Network (mentioning, confidence: 99%)
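The excerpt above divides ResNet into five stages according to feature-map size. A hypothetical helper illustrating the standard ResNet downsampling schedule (stage names and factors follow the common ResNet layout and are an assumption, not taken from the citing paper):

```python
# Sketch: the five ResNet stages and their cumulative downsampling factors.
# Stage names/factors follow the conventional ResNet design (assumed here).

def resnet_stage_sizes(input_size=224):
    """Map each stage to (downsampling factor, resulting feature-map side)."""
    factors = {
        "conv1":   2,   # 7x7 stride-2 stem
        "conv2_x": 4,   # after 3x3 max-pool, stride 2
        "conv3_x": 8,
        "conv4_x": 16,
        "conv5_x": 32,
    }
    return {stage: (f, input_size // f) for stage, f in factors.items()}

if __name__ == "__main__":
    for stage, (factor, side) in resnet_stage_sizes(224).items():
        print(f"{stage}: 1/{factor} -> {side} x {side}")
```

For a 224 x 224 input this gives feature maps of side 112, 56, 28, 14, and 7, matching the "five stages by feature-map size" division the citing paper describes.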