2013
DOI: 10.1007/978-3-642-37431-9_38

A New Framework for Background Subtraction Using Multiple Cues

Abstract: In this work, to effectively detect moving objects in a fixed-camera scene, we propose a novel background subtraction framework employing diverse cues: pixel texture, pixel color, and region appearance. The texture information of the scene is clustered by the conventional codebook-based background modeling technique and utilized to detect initial foreground regions. In this process, we employ a new texture operator, namely the scene adaptive local binary pattern (SALBP), that provides more consistent and …
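The paper's SALBP operator and multi-cue codebook model are not reproduced here; as a rough illustration of how a texture cue can flag foreground pixels, the sketch below (Python; helper names and thresholds are illustrative assumptions) compares plain 8-neighbour LBP codes of the current frame against those of a reference background frame:

```python
# A minimal sketch, NOT the paper's SALBP/codebook method: plain 8-neighbour
# LBP codes of the current frame are compared against those of a background
# frame, and pixels whose codes differ in too many bits are flagged.
import numpy as np

def lbp8(gray):
    """Basic 8-neighbour LBP code (0-255) for each interior pixel."""
    g = gray.astype(np.int16)
    c = g[1:-1, 1:-1]
    neigh = [g[0:-2, 0:-2], g[0:-2, 1:-1], g[0:-2, 2:], g[1:-1, 2:],
             g[2:, 2:],     g[2:, 1:-1],   g[2:, 0:-2], g[1:-1, 0:-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neigh):
        code |= (n >= c).astype(np.uint8) << bit
    return code

def texture_foreground(frame_gray, background_gray, max_hamming=2):
    """Flag pixels whose LBP code differs from the background code in more
    than max_hamming bits (a crude stand-in for codebook matching)."""
    diff = lbp8(frame_gray) ^ lbp8(background_gray)
    hamming = np.unpackbits(diff[..., None], axis=-1).sum(axis=-1)
    return hamming > max_hamming
```

In the actual framework, the background texture statistics would come from the trained codebook rather than a single reference frame.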

Cited by 81 publications (55 citation statements). References 25 publications.
“…After the SRMs are built, we use a semi-automated algorithm for dataset generation [5] whereby training samples are effectively collected from a scene with minimal human intervention by using background subtraction [20]. Next, to assign the gathered samples to SRMs (Fig.…”
Section: Constructing Training-sets for Each Local Region (mentioning)
confidence: 99%
“…To improve computational efficiency, we first extract ROIs in a frame using background subtraction [20] and then apply a search window to each ROI. More explicitly, for a given ROI F, we first find the most spatially closely related SRM by:…”
Section: Initial Detection (mentioning)
confidence: 99%
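As a concrete stand-in for the ROI-extraction step quoted above, the sketch below uses OpenCV's stock MOG2 subtractor rather than the cited method [20]; the morphology kernel and minimum-area filter are illustrative assumptions:

```python
# A minimal sketch of ROI extraction via background subtraction (OpenCV MOG2
# stands in for the cited subtractor [20]); parameters are illustrative.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def extract_rois(frame, min_area=400):
    """Return bounding boxes (x, y, w, h) of sufficiently large foreground blobs."""
    mask = subtractor.apply(frame)
    # Remove isolated noise pixels before extracting connected regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```

Each returned box would then be matched against the spatially closest SRM, as the quoted passage describes.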
“…This allows LBSP to capture both texture and intensity changes. Noh and Jeon (2012) propose to improve SILTP (Liao et al., 2010) by means of a codebook method. The derived descriptor gains robustness when segmenting moving objects from dynamic and complex backgrounds.…”
Section: Related Work (mentioning)
confidence: 99%
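For reference, below is a minimal sketch of the SILTP-style tolerance comparison introduced by Liao et al. (2010), which the cited codebook-based descriptor builds on; the tolerance tau and the 4-neighbour layout are illustrative choices, not the SALBP configuration:

```python
# A minimal SILTP-style sketch: each of 4 axis-aligned neighbours yields a
# 2-bit code (01: clearly brighter, 10: clearly darker, 00: similar) relative
# to a tolerance band around the centre pixel. Parameters are illustrative.
import numpy as np

def siltp4(gray, tau=0.05):
    g = gray.astype(np.float32)
    c = g[1:-1, 1:-1]                     # centre pixels
    neigh = [g[0:-2, 1:-1], g[1:-1, 2:], g[2:, 1:-1], g[1:-1, 0:-2]]
    code = np.zeros(c.shape, dtype=np.uint16)
    for k, n in enumerate(neigh):
        brighter = n > (1 + tau) * c      # scale-invariant upper bound
        darker = n < (1 - tau) * c        # scale-invariant lower bound
        code |= brighter.astype(np.uint16) << (2 * k)
        code |= darker.astype(np.uint16) << (2 * k + 1)
    return code
```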
“…Finally, concluding remarks and some perspectives are drawn in Section 5. [Flattened comparison table of LBP-variant descriptors, with the size reported for each: LBP (Ojala et al., 2002) 256; Modified LBP (Heikkilä and Pietikäinen, 2006) 256; CS-LBP (Heikkilä et al., 2009) 16; STLBP (Shimada and Taniguchi, 2009) 256; εLBP 256; Adaptive εLBP 256; SCS-LBP (Xue et al., 2010) 16; SILTP (Liao et al., 2010) 256; CS-LDP (Xue et al., 2011) 16; SCBP (Xue et al., 2011) 64; OCLBP (Lee et al., 2011) 1536; Uniform LBP (Yuan et al., 2012) 59; SALBP (Noh and Jeon, 2012) 128; SLBP-AM (Yin et al., 2013) 256; LBSP (Bilodeau et al., 2013) 256; iLBP (Vishnyakov et al., 2014) 256; CS-SILTP (Wu et al., 2014) 16; XCS-LBP (in this paper).]…”
Section: Introduction (mentioning)
confidence: 99%
“…Zhao et al. [23] proposed a background modeling method for motion detection in dynamic scenes based on a type-2 fuzzy Gaussian mixture model [24] and a Markov random field (MRF) [25]. In [26], the authors introduced a background subtraction framework based on texture features. Furthermore, color cues are clustered by the codebook scheme in order to refine the texture-based detection.…”
Section: Introduction (mentioning)
confidence: 99%
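To make the codebook refinement concrete, below is a minimal per-pixel colour codebook sketch in the spirit of the step attributed to [26]; the plain Euclidean matching rule and the eps threshold are simplifying assumptions, not the cited paper's exact model:

```python
# A minimal per-pixel colour codebook sketch (illustrative, not the cited
# model): codewords store a running mean colour and a hit count; a pixel is
# background if any codeword matches within a fixed distance eps.
import numpy as np

class PixelCodebook:
    def __init__(self, eps=20.0):
        self.eps = eps
        self.codewords = []                       # list of (mean colour, hit count)

    def train(self, color):
        """Fold a training colour into the nearest codeword, or open a new one."""
        color = np.asarray(color, dtype=np.float32)
        for i, (mean, count) in enumerate(self.codewords):
            if np.linalg.norm(color - mean) < self.eps:
                self.codewords[i] = ((mean * count + color) / (count + 1), count + 1)
                return
        self.codewords.append((color, 1))

    def is_background(self, color):
        color = np.asarray(color, dtype=np.float32)
        return any(np.linalg.norm(color - mean) < self.eps
                   for mean, _ in self.codewords)
```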