2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00865
Real-Time High-Resolution Background Matting

Cited by 186 publications (118 citation statements); References 27 publications
“…The captured RGB resolution is 2560 × 1440 and the depth image resolution is 1024 × 1024. During testing, we use background-matting-v2 [33] to obtain the mask M of the body portrait and then use RetinaFace [9] to detect the front face for the front view. It takes about 3.57 s for our model (32-bit floating-point precision) to predict the occupancy field at resolution 256 with the given inputs.…”
Section: Methods
Mentioning; confidence: 99%
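The statement above describes turning a matting model's output into a binary body mask M before face detection. A minimal sketch of that binarization step, assuming the background-matting network has already produced a soft alpha matte; the array names and the 0.5 threshold are illustrative, not taken from the cited work:

```python
import numpy as np

def alpha_to_body_mask(alpha: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize a soft alpha matte (values in [0, 1]) into a body mask M.

    `alpha` is assumed to be the HxW foreground alpha predicted by a
    background-matting model; the 0.5 threshold is an assumption, not a
    value reported by the cited paper.
    """
    return (alpha >= threshold).astype(np.uint8)

# Example: a synthetic 1440x2560 alpha matte stands in for the model output.
alpha = np.random.rand(1440, 2560).astype(np.float32)
M = alpha_to_body_mask(alpha)
print(M.shape, M.dtype, M.min(), M.max())
```

The resulting mask M would then be passed, together with the RGB portrait, to the downstream face-detection and reconstruction stages described in the statement.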
“…3. For clarity, we denote the RGB body portrait as I, the corresponding binary body mask (obtained with the background-matting method [33]) as M, the normalized captured depth map as D_raw, and the refined depth map as D_rf, where D_rf = DRM(D_raw, I). The architecture, with two different types of inputs, i.e., I and D_rf, is inspired by the hypercolumn structure in [20] and DDR-Net [60].…”
Section: Depth Refinement
Mentioning; confidence: 99%
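The refinement step above is only given as the notation D_rf = DRM(D_raw, I). A toy PyTorch sketch of that interface, assuming a small residual CNN; the layer sizes and residual formulation are placeholders, since the cited work uses a hypercolumn-style architecture rather than this exact network:

```python
import torch
import torch.nn as nn

class DRMSketch(nn.Module):
    """Toy stand-in for the depth-refinement module DRM(D_raw, I).

    Illustrates only the interface: RGB portrait I plus normalized raw depth
    D_raw in, refined depth D_rf out. The 3-layer CNN is an assumption.
    """
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, hidden, 3, padding=1), nn.ReLU(inplace=True),  # 3 RGB + 1 depth channel
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, d_raw: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        x = torch.cat([image, d_raw], dim=1)   # (B, 4, H, W)
        return d_raw + self.net(x)             # predict a residual correction to the raw depth

# Example with dummy tensors in place of a real capture.
I = torch.rand(1, 3, 256, 256)       # RGB body portrait
D_raw = torch.rand(1, 1, 256, 256)   # normalized captured depth map
D_rf = DRMSketch()(D_raw, I)
print(D_rf.shape)  # torch.Size([1, 1, 256, 256])
```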
“…Because they rely on low-level color cues, their assumptions are easily violated in complex images. To overcome this limitation, deep matting methods [1,4,11,14,18,19,22,28,29,34,36] emerged with the development of deep learning.…”
Section: Related Work
Mentioning; confidence: 99%
“…The input videos contain background, which we do not want to reconstruct. We obtain foreground segmentations for all input images via image matting [26] together with a hard brightness threshold. During training, we use a background loss L_background to discourage geometry along rays of background pixels.…”
Section: 3D Reconstruction
Mentioning; confidence: 99%
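The background loss described above penalizes geometry accumulated along rays that project to background pixels. A hedged sketch, assuming a NeRF-style volume renderer that returns a per-ray accumulated opacity and that the matte from [26] marks which pixels are background; the mean-opacity penalty is an illustrative choice, not the exact loss of the cited work:

```python
import torch

def background_loss(accumulated_opacity: torch.Tensor,
                    is_background: torch.Tensor) -> torch.Tensor:
    """Penalize density accumulated along rays that land on background pixels.

    `accumulated_opacity` (shape [N_rays], values in [0, 1]) is assumed to be
    the per-ray opacity from a volume renderer; `is_background` is the
    complement of the foreground matte for the sampled pixels.
    """
    bg = accumulated_opacity[is_background]
    return bg.mean() if bg.numel() > 0 else accumulated_opacity.new_zeros(())

# Example: 1024 sampled rays, roughly half marked background by the matte.
opacity = torch.rand(1024)
mask_bg = torch.rand(1024) > 0.5
print(background_loss(opacity, mask_bg))
```

Driving this term toward zero keeps the reconstructed geometry confined to the foreground segmented by the matting step.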