2013
DOI: 10.1007/978-3-642-41914-0_22
Foreground Detection with a Moving RGBD Camera

Abstract: A method for foreground detection in data acquired by a moving RGBD camera is proposed. The background scene is initially captured in a reference model. An initial estimate of camera motion is obtained by a conventional point-cloud registration of matched keypoints between the captured scene and the reference model. This initial solution is then refined by a top-down, model-based approach that evaluates candidate camera poses in a Particle Swarm Optimization framework. To evaluate a candidate…
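The PSO-based pose refinement described in the abstract can be illustrated with a generic sketch. This is not the paper's implementation: the 6-DoF pose vector, the bounds, and the synthetic `alignment_error` objective are all assumptions standing in for the paper's actual model-based evaluation of candidate camera poses.

```python
import random

def pso_minimize(objective, dim, bounds, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic Particle Swarm Optimization; returns (best position, best value)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical objective: squared distance of a 6-DoF pose from an assumed
# "true" pose (a real system would score depth-map alignment instead).
true_pose = [0.1, -0.2, 0.05, 0.3, 0.0, -0.1]

def alignment_error(pose):
    return sum((a - b) ** 2 for a, b in zip(pose, true_pose))

best, err = pso_minimize(alignment_error, dim=6, bounds=(-1.0, 1.0))
```

In the paper's setting, each particle would encode a candidate camera pose and the objective would measure how well the scene rendered from that pose matches the captured RGBD frame; the quadratic objective here merely exercises the optimizer.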

Cited by 2 publications (10 citation statements). References 22 publications.
“…In their work, Koutlemanis et al. [31] start with a reference image comprising solely the background, whereas in our method the background is not recorded separately; both the foreground and the background are always visible, a more challenging scenario. Following [31], we compute SIFT matches between two RGB frames Ri and Ri+1, where i is the frame number ranging from 0 to N−1. The motion characteristics of the foreground and the background vary considerably, as shown in Fig.…”
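The differing motion characteristics of foreground and background that the citing authors observe can be sketched with a toy separation of keypoint matches by displacement magnitude. This assumes matches are already available (e.g. from SIFT); `split_by_motion` and the threshold value are illustrative, not the cited method.

```python
# Toy sketch: each match pairs a keypoint in frame R_i with its counterpart
# in frame R_{i+1}, as ((x1, y1), (x2, y2)).
def split_by_motion(matches, thresh):
    """Label matches whose displacement magnitude exceeds `thresh` as foreground."""
    fg, bg = [], []
    for (x1, y1), (x2, y2) in matches:
        disp = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        (fg if disp > thresh else bg).append(((x1, y1), (x2, y2)))
    return fg, bg

matches = [((10, 10), (11, 10)),   # small shift: consistent with camera motion
           ((50, 50), (70, 65))]   # large shift: independently moving foreground
fg, bg = split_by_motion(matches, thresh=5.0)
```

In practice the background motion is a full rigid transform rather than a single magnitude, so a robust model fit (e.g. RANSAC over the dominant motion) would replace the fixed threshold; the sketch only shows the separation idea.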
Section: Methods
“…After the registration, the 3D point cloud is back-projected to the depth space $D_i$. Similar to [31], an adaptive thresholding classification $C(p)$ is used to classify every point $p$ as either background (0) or foreground (1):

$$C(p) = \begin{cases} 0, & \text{if } |\tilde{D}_i(p) - D_i(p)| \le 0.01\,D_i(p) \\ 1, & \text{if } |\tilde{D}_i(p) - D_i(p)| > 0.01\,D_i(p) \end{cases}$$

This classification is performed at pixel level in the depth space for each $p$. Thus, if the back-projected depth value $\tilde{D}_i(p)$ of the registered point cloud deviates by more than 1% from the depth value $D_i(p)$, the point is classified as foreground.…”
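A minimal sketch of this adaptive 1% thresholding, assuming depth maps are plain nested lists of depth values; `classify_foreground` is a hypothetical helper, not the authors' code.

```python
def classify_foreground(D, D_proj, rel_thresh=0.01):
    """Per-pixel C(p): 1 (foreground) if the back-projected depth D_proj
    deviates from the observed depth D by more than rel_thresh * D(p)."""
    mask = []
    for row_d, row_p in zip(D, D_proj):
        mask.append([1 if abs(dp - d) > rel_thresh * d else 0
                     for d, dp in zip(row_d, row_p)])
    return mask

# Observed depths (e.g. millimeters) vs. depths back-projected from the
# registered point cloud; the 1% band around 1000 is +/- 10.
D      = [[1000, 1000],
          [1000, 1000]]
D_proj = [[1005, 1200],
          [ 998,  900]]
mask = classify_foreground(D, D_proj)
```

Making the threshold proportional to depth accounts for depth sensors whose measurement noise grows with distance, which is why a fixed absolute threshold would misclassify distant background.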
Section: Methods