2014
DOI: 10.1049/iet-cvi.2013.0319
Background segmentation of dynamic scenes based on dual model

Abstract: Detecting moving objects against the background in video sequences is the first step of many image applications. The background can be divided into two types, static and dynamic, according to whether its pixel values vary over time. Correctly detecting moving foreground objects in dynamic scenes is difficult because the moving foreground can closely resemble the variable background. In this study, a new method for non-parametric background segmentation of dynamic scenes is p…
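As a rough illustration of the static/dynamic distinction described in the abstract (not the paper's actual algorithm), the sketch below labels each pixel of a background-only training sequence as static or dynamic by thresholding its temporal variance; the function name, array layout, and threshold are assumptions made for this example.

```python
import numpy as np

def label_background_dynamics(frames, var_threshold=25.0):
    """Label each pixel as static (False) or dynamic (True) background.

    frames: ndarray of shape (T, H, W), greyscale training frames that
            contain only background (no foreground objects).
    var_threshold: assumed variance threshold separating pixels whose
            values stay nearly constant from those that fluctuate
            (water, waving trees, etc.).
    """
    frames = frames.astype(np.float32)
    # Per-pixel temporal variance over the training sequence.
    pixel_var = frames.var(axis=0)
    # Pixels whose intensity varies strongly are treated as dynamic background.
    return pixel_var > var_threshold

# Example usage with synthetic data: 50 frames of constant background plus a
# noisy "dynamic" region in the top-left corner.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = np.full((50, 120, 160), 100.0)
    frames[:, :40, :40] += rng.normal(0.0, 10.0, size=(50, 40, 40))
    dynamic_mask = label_background_dynamics(frames)
    print("dynamic pixels:", int(dynamic_mask.sum()))
```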

Cited by 16 publications (7 citation statements) · References 38 publications (152 reference statements)
“…There are several common factors which affect indoor segmentation efficacy: 1) image noise, 2) camera jitter and movement, 3) automatic camera settings, 4) illumination and shadows, 5) background initialization, 6) color camouflage, and 7) ghost images, sleeping foregrounds, and dynamic backgrounds [10] , [11] , [12] .…”
Section: Experimental Design, Materials and Methods
confidence: 99%
“…Some methods create more than one background model, e.g. a dual model in [5] and a multi-model in [3] and [4]. The former has two models: a self-model for the background at its own pixel location and a neighbourhood-model for the neighbouring pixels around it.…”
Section: Related Work
confidence: 99%
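A minimal sketch of the dual-model idea summarised in the statement above, assuming a ViBe-style sample-consensus test: each pixel keeps a self-model of its own past samples and a neighbourhood-model of samples drawn from surrounding pixels, and a pixel is declared background if it matches enough samples in either model. The function names, sample counts, and thresholds are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def matches(samples, value, radius=20, min_matches=2):
    """Sample-consensus test: does `value` lie within `radius` of at
    least `min_matches` stored samples?"""
    return np.count_nonzero(np.abs(samples - value) <= radius) >= min_matches

def classify_pixel(self_model, neighbour_model, value):
    """Dual-model decision for one pixel (illustrative thresholds).

    self_model:      1-D array of past greyscale samples at this pixel.
    neighbour_model: 1-D array of samples gathered from neighbouring pixels.
    Returns True if the pixel is classified as background under either model.
    """
    return matches(self_model, value) or matches(neighbour_model, value)

# Example usage with made-up sample sets.
if __name__ == "__main__":
    self_model = np.array([98, 101, 99, 102, 100, 97], dtype=np.float32)
    neighbour_model = np.array([150, 148, 152, 149, 151, 147], dtype=np.float32)
    print(classify_pixel(self_model, neighbour_model, 100.0))  # True: background
    print(classify_pixel(self_model, neighbour_model, 200.0))  # False: foreground
```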
“…After colour images are captured by the camera and converted to greyscale, a background model is built. According to the principle of temporal consistency, a short image sequence is used to generate the initial background model $B_{x,y}^{0}$ as follows: $B_{x,y}^{0} = \{ I_{1}(x, y), \ldots, I_{1+(N-2)\times K}(x, y), I_{1+(N-1)\times K}(x, y) \}$, where $N$ is the number of observed ALTF image samples in the background model, $K$ is the specified frame interval for taking a sample from the video, $I_{1}(x, y)$ is the ALTF sample at location $(x, y)$ in the first frame, and $I_{1+(N-1)\times K}(x, y)$ is the ALTF sample in the $(1+(N-1)\times K)$-th frame. To avoid the generation of a ghost, the Pixel-Based Adaptive Segmentation [32] and the dual sample consensus model [33] initialise the background model with the image values at each pixel in the first $N$ frames. However, those methods still lead to ghosts in urban traffic scenes because of slow-moving or temporarily stopped vehicles.…”
Section: Texture-based Background Modelling Using Sample Consensus
confidence: 99%
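To make the initialisation formula above concrete, here is a small sketch that builds the initial per-pixel sample set $B^{0}$ by taking $N$ samples at a fixed frame interval $K$, starting from the first frame. It works on plain greyscale frames rather than the ALTF texture features used in the cited work; the function name and default values are assumptions.

```python
import numpy as np

def init_background_model(frames, N=20, K=3):
    """Build the initial background model B^0.

    frames: ndarray of shape (T, H, W) greyscale frames, with
            T >= 1 + (N - 1) * K.
    Returns an ndarray of shape (N, H, W): for every pixel (x, y),
    the N samples I_1(x,y), I_{1+K}(x,y), ..., I_{1+(N-1)K}(x,y).
    """
    last_needed = 1 + (N - 1) * K
    if frames.shape[0] < last_needed:
        raise ValueError(f"need at least {last_needed} frames, got {frames.shape[0]}")
    # Frame 1 in the formula is index 0 in the array, so the sampled
    # indices are 0, K, 2K, ..., (N-1)K.
    indices = np.arange(N) * K
    return frames[indices].copy()

# Example usage with synthetic frames.
if __name__ == "__main__":
    frames = np.random.default_rng(1).integers(0, 256, size=(80, 60, 80)).astype(np.float32)
    B0 = init_background_model(frames, N=20, K=3)
    print(B0.shape)  # (20, 60, 80)
```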