2009
DOI: 10.1007/978-3-642-04667-4_11

Combining Color, Depth, and Motion for Video Segmentation

Abstract: This paper presents an innovative method to interpret the content of a video scene using a depth camera. Cameras that provide distance instead of color information are part of a promising young technology, but they come with many difficulties: noisy signals, small resolution, and ambiguities, to cite a few. By taking advantage of the robustness to noise of a recent background subtraction algorithm, our method is able to extract useful information from the depth signals. We further enhance the robustness…
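As a rough illustration of the abstract's core idea (reusing a background-subtraction algorithm designed for color video on a noisy depth stream), the following Python sketch treats each depth frame as a single-channel image. It is an assumption-laden stand-in, not the paper's implementation: the paper builds on ViBe, which OpenCV does not ship, so MOG2 is substituted here, and the 10 m normalisation range and history length are illustrative.

import cv2
import numpy as np

# Hypothetical stand-in: the paper uses ViBe; MOG2 is used here purely to
# illustrate running a color-oriented background subtractor on depth data.
depth_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def depth_foreground(depth_frame_mm: np.ndarray) -> np.ndarray:
    # depth_frame_mm: 2-D uint16 array of depth values in millimetres (0 = no reading).
    # Normalise to 8 bits (assumed 10 m working range) so the subtractor sees a
    # regular grayscale image.
    depth_8u = cv2.convertScaleAbs(depth_frame_mm, alpha=255.0 / 10000.0)
    mask = depth_subtractor.apply(depth_8u)
    # Discard pixels where the depth sensor returned no measurement.
    mask[depth_frame_mm == 0] = 0
    return mask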

Cited by 38 publications (49 citation statements). References: 15 publications.
“…The overall ranking across categories RC_i of the i-th method is computed as the mean of the single RM_i across all the sequences. In the following paragraphs we compare the performance of the following algorithms: the proposed adaptive weighted classifier CL_W; the two weak classifiers CL_C and CL_D; the MoG algorithm proposed in [18] (MoG_RGB-D); and the binary combinations of the foreground masks obtained by two independent modules based on depth and color data, as proposed in [19] (using MoG) and in [20] (using ViBe); we refer to these algorithms as MoG_Bin and ViBe_Bin. Finally, we adapt to the RGBD feature space the neural network algorithm proposed in [17] (SOM) and the modified MoG algorithm proposed in [14] (MoG_Zw). It has to be noted that no post-processing stages, such as morphological filtering, are applied to the resulting foreground masks.…”
Section: Benchmark Data and Results (mentioning, confidence: 99%)
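The ranking rule quoted above can be written compactly as follows; the symbol N_S (number of sequences) is assumed for notation and is not taken from the cited paper.

% LaTeX: overall ranking of the i-th method as the mean of its
% per-sequence rankings RM_i over the N_S benchmark sequences.
RC_i = \frac{1}{N_S} \sum_{s=1}^{N_S} RM_i(s)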
“…In [20], a multi-camera system combines color data and depth data, obtained with a low-resolution ToF camera, for video segmentation. The ViBe algorithm [16] is applied independently to the color and the depth data: the obtained foreground masks are then combined with logical operations and post-processed with morphological operations.…”
Section: Foreground/Background Segmentation With Depth Data (mentioning, confidence: 99%)
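A minimal sketch of the binary-combination scheme described in the quote, assuming the color and depth foreground masks have already been produced by two independently run background subtractors (ViBe in the cited work). The choice of logical operator and the 5x5 elliptical kernel are illustrative assumptions, not parameters from the papers.

import cv2
import numpy as np

def combine_masks(color_mask: np.ndarray, depth_mask: np.ndarray, use_and: bool = False) -> np.ndarray:
    # color_mask, depth_mask: binary masks (255 = foreground) from the two modules.
    fused = cv2.bitwise_and(color_mask, depth_mask) if use_and else cv2.bitwise_or(color_mask, depth_mask)
    # Morphological post-processing: opening removes isolated false positives,
    # closing fills small holes (kernel shape and size are assumptions).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    fused = cv2.morphologyEx(fused, cv2.MORPH_OPEN, kernel)
    fused = cv2.morphologyEx(fused, cv2.MORPH_CLOSE, kernel)
    return fused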
“…Such work has either used custom-made sensors, as e.g. in [4], or, more recently, time-of-flight sensors [8,3,5]. Another large body of similar work is stereo rig segmentation [1,20].…”
Section: Related Work (mentioning, confidence: 99%)
“…As the scene is static, we cannot exploit either background modelling [8] or tracking [3]. Furthermore, purely colour-based segmentation is particularly brittle here, due to small reflectance variations, shadows, and in particular occlusions [7].…”
Section: Related Work (mentioning, confidence: 99%)
“…More relevant to this work, [3,4] employ depth information in addition to RGB, to reinforce method accuracy. Moreover, [5] utilizes motion information.…”
Section: Introduction (mentioning, confidence: 99%)