Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)
DOI: 10.1109/cvpr.1999.784721
Background estimation and removal based on range and color

Cited by 116 publications (84 citation statements)
References 9 publications
“…The overall ranking across categories RC_i of the i-th method is computed as the mean of the single RM_i across all the sequences. In the following paragraphs we compare the performance of the following algorithms: the proposed adaptive weighted classifier CL_W; the two weak classifiers CL_C and CL_D; the MoG algorithm proposed in [18], MoG_RGB-D; and the binary combinations of the foreground masks obtained by two independent modules based on depth and color data as proposed in [19] (by using MoG) and in [20] (by using ViBe), which we refer to as MoG_Bin and ViBe_Bin. Finally we adapt to the RGBD feature space the neural-network algorithm proposed in [17] (SOM) and the modified MoG algorithm proposed in [14] (MoG_ZW). It has to be noted that no post-processing stages, such as morphological filtering, are applied to the resulting foreground masks.…”
Section: Benchmark Data and Results
confidence: 99%
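The binary combinations mentioned in this excerpt (MoG_Bin, ViBe_Bin) fuse the foreground masks of two independent modules, one driven by color and one by depth. The following is a minimal sketch of such a per-pixel logical fusion, assuming boolean NumPy masks and a hypothetical combine_masks helper; the exact fusion rule used in [19] and [20] may differ.

```python
import numpy as np

def combine_masks(fg_color: np.ndarray, fg_depth: np.ndarray,
                  mode: str = "or") -> np.ndarray:
    """Fuse boolean foreground masks from independent color- and
    depth-based background-subtraction modules."""
    if mode == "or":
        # Foreground if either cue flags the pixel (more sensitive).
        return np.logical_or(fg_color, fg_depth)
    # Foreground only where both cues agree (more conservative).
    return np.logical_and(fg_color, fg_depth)

# Hypothetical 2x2 masks from separate color and depth modules.
fg_c = np.array([[True, False], [False, True]])
fg_d = np.array([[True, True],  [False, False]])
print(combine_masks(fg_c, fg_d, mode="or"))   # union of the two detections
print(combine_masks(fg_c, fg_d, mode="and"))  # intersection
```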
“…One of the first proposals based on both color and depth data is presented in [18]; it is an adaptation of the MoG algorithm to color and depth data obtained with a stereo device. Each background pixel is modeled as a mixture of four-dimensional Gaussian distributions: three components are the color data (YUV space in this case) and the fourth one is the depth data, D. Color and depth features are considered independent, and the same updating strategy of the original MoG algorithm is used to update the distribution parameters.…”
Section: Foreground/Background Segmentation With Depth Data
confidence: 99%
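As the excerpt above describes, [18] models each background pixel with a mixture of four-dimensional Gaussians over (Y, U, V, depth) and reuses the standard online MoG update. The sketch below illustrates one such per-pixel model under common assumptions (diagonal covariances, fixed learning rate, weight/variance ordering of background components); the class and parameter names are illustrative and not taken from the original paper.

```python
import numpy as np

class PixelMoG4D:
    """Sketch of a per-pixel mixture-of-Gaussians background model over a
    4-D feature (Y, U, V, depth). Diagonal covariances and the classic
    online update are assumed; names and defaults are illustrative."""

    def __init__(self, k=4, alpha=0.01, match_sigma=2.5, bg_thresh=0.7,
                 init_var=225.0):
        self.k = k                        # number of Gaussian components
        self.alpha = alpha                # learning rate
        self.match_sigma = match_sigma    # match threshold in std deviations
        self.bg_thresh = bg_thresh        # cumulative weight treated as background
        self.init_var = init_var
        self.means = np.zeros((k, 4))
        self.vars = np.full((k, 4), init_var)
        self.weights = np.full(k, 1.0 / k)

    def update(self, x):
        """Update the mixture with x = [Y, U, V, D]; return True if x is
        classified as background."""
        x = np.asarray(x, dtype=float)
        # Per-component squared Mahalanobis distance (diagonal covariance).
        d2 = np.sum((x - self.means) ** 2 / self.vars, axis=1)
        matched = d2 < self.match_sigma ** 2
        if matched.any():
            m = int(np.argmax(matched))          # first matching component
            self.means[m] += self.alpha * (x - self.means[m])
            self.vars[m] += self.alpha * ((x - self.means[m]) ** 2 - self.vars[m])
            self.weights = (1.0 - self.alpha) * self.weights
            self.weights[m] += self.alpha
        else:
            m = int(np.argmin(self.weights))     # replace the weakest component
            self.means[m] = x
            self.vars[m] = np.full(4, self.init_var)
            self.weights[m] = self.alpha
        self.weights /= self.weights.sum()
        # Components with high weight and low variance model the background.
        order = np.argsort(-self.weights / np.sqrt(self.vars.sum(axis=1)))
        cum = np.cumsum(self.weights[order])
        n_bg = int(np.searchsorted(cum, self.bg_thresh)) + 1
        return matched.any() and m in order[:n_bg]

# Usage: feed one (Y, U, V, depth) sample per frame for a given pixel.
pix = PixelMoG4D()
is_background = pix.update([120.0, 128.0, 128.0, 2.3])  # hypothetical sample
```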
“…Different extensions of this model were developed by changing the characteristics at the pixel level. Gordon et al. [11] represent each pixel with four components: the three color components and the depth.…”
Section: Parametric Methods
confidence: 99%
“…However, as mentioned before, temporal inconsistency in static parts of the scene leads to salient and disturbing artifacts. This was addressed in the context of video-conferencing by algorithms that segment stereoscopic video into layers [13,20] utilizing both depth and color information. The intended application, however, is background replacement and not free viewpoint video.…”
Section: Related Work
confidence: 99%