2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.41

Deep Image Matting

Abstract: Image matting is a fundamental computer vision problem and has many applications. Previous algorithms have poor performance when an image has similar foreground and background colors or complicated textures. The main reasons are that prior methods 1) only use low-level features and 2) lack high-level context. In this paper, we propose a novel deep learning based algorithm that can tackle both these problems. Our deep model has two parts. The first part is a deep convolutional encoder-decoder network that takes an i…
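
The visible part of the abstract only describes the first stage as a deep convolutional encoder-decoder. Below is a minimal sketch of such a network, assuming (this is not stated in the truncated abstract) a 4-channel input of RGB image plus trimap and a single-channel alpha matte output; the layer widths and depths are illustrative only, not the paper's architecture.

```python
# Minimal sketch of an encoder-decoder matting network in PyTorch.
# Assumptions not taken from the abstract above: the input is the RGB image
# concatenated with a trimap (4 channels) and the output is a 1-channel
# alpha matte in [0, 1]; all layer sizes are illustrative.
import torch
import torch.nn as nn


class EncoderDecoderMatting(nn.Module):
    def __init__(self, in_channels: int = 4):
        super().__init__()
        # Encoder: strided convolutions that shrink the spatial resolution.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: transposed convolutions that restore the resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, image: torch.Tensor, trimap: torch.Tensor) -> torch.Tensor:
        x = torch.cat([image, trimap], dim=1)           # (N, 4, H, W)
        alpha = torch.sigmoid(self.decoder(self.encoder(x)))
        return alpha                                    # (N, 1, H, W), values in [0, 1]


if __name__ == "__main__":
    net = EncoderDecoderMatting()
    rgb = torch.rand(1, 3, 256, 256)
    tri = torch.rand(1, 1, 256, 256)
    print(net(rgb, tri).shape)  # torch.Size([1, 1, 256, 256])
```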
Cited by 471 publications (814 citation statements)
References 31 publications (97 reference statements)
“…We experiment with our methods on the synthetic Composition-1K dataset and a real-world matting image dataset, both of which are provided by Xu et al [52]. As discussed in Section 3.2, our neural networks are all trained on the synthetic Composition-1K training set.…”
Section: Methods (mentioning)
confidence: 99%
“…They were generated by compositing 50 unique foreground images onto each of the 20 images from the PASCAL VOC 2012 dataset [15]. We used the code provided by Xu et al [52] to generate these testing images. The real world image dataset contains 31 real world images pulled from the internet [52].…”
Section: Methods (mentioning)
confidence: 99%
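
The citation statement above describes building the test set by compositing foreground images onto PASCAL VOC backgrounds. The sketch below illustrates the standard alpha compositing equation, I = αF + (1 − α)B, that underlies such synthetic sets; it is not the code released by Xu et al. [52], and the array shapes are assumptions for the example.

```python
# Minimal illustration of alpha compositing: I = alpha * F + (1 - alpha) * B,
# the standard operation behind synthetic matting benchmarks such as
# Composition-1K. Not the code released by Xu et al. [52].
import numpy as np


def composite(fg: np.ndarray, alpha: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """Composite a foreground onto a background using its alpha matte.

    fg, bg: float arrays of shape (H, W, 3) with values in [0, 1].
    alpha:  float array of shape (H, W) with values in [0, 1].
    """
    a = alpha[..., None]                 # broadcast to (H, W, 1)
    return a * fg + (1.0 - a) * bg


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fg = rng.random((64, 64, 3))
    bg = rng.random((64, 64, 3))
    alpha = rng.random((64, 64))
    img = composite(fg, alpha, bg)
    print(img.shape, float(img.min()) >= 0.0, float(img.max()) <= 1.0)
```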
“…In these cases the cost of pixel-level annotation can be reduced by automating a portion of the task. Matting and object selection [50,33,34,6,58,57,10,30,59] generate tight boundaries from loosely annotated boundaries or few inside/outside clicks and scribbles. [44,38] introduced a predictive method which automatically infers a foreground mask from 4 boundary clicks, and was extended to full-image segmentation in [2].…”
Section: Related Work (mentioning)
confidence: 99%