2011 8th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)
DOI: 10.1109/avss.2011.6027338
An efficient pattern-less background modeling based on scale invariant local states

Cited by 6 publications (10 citation statements); References 11 publications
“…In both cases or the cases in between, the foreground regions became more distinguishable from the background, and this improves the foreground/background segmentation results. Figure 5 shows the foreground/background segmentation results, and table 2 lists the F-Score [13] of the results. The F-Score is defined as 2TP / (2TP + FP + FN), where TP, FP and FN are the true positive, false positive and false negative counts, respectively.…”
Section: Results
confidence: 99%
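The F-Score in the excerpt above is a single arithmetic expression over the confusion-matrix counts; a minimal sketch (function name is illustrative, not from the paper):

```python
def f_score(tp, fp, fn):
    """F-Score as defined in the excerpt: 2*TP / (2*TP + FP + FN).

    tp, fp, fn -- counts of true positives, false positives and
    false negatives from a foreground/background segmentation.
    """
    return 2 * tp / (2 * tp + fp + fn)

# Example: 80 true positives, 10 false positives, 10 false negatives
print(f_score(80, 10, 10))  # 0.888...
```

A perfect segmentation (FP = FN = 0) yields a score of 1.0; the score falls toward 0 as false detections and misses accumulate.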
“…Any of these cases enhances the foreground to be more distinctive from the background, and therefore, improves the foreground/background segmentation results. In this paper, a texture-based PLPM background modeling [13] was used for illustration. The main reason to choose PLPM is that texture-based background modeling is usually more tolerant to outdoor scenes, and PLPM can perform the foreground/background segmentation in a very efficient manner.…”
Section: Foreground/background Segmentation
confidence: 99%
“…We believe SILTP represents the state of the art in background modeling and hence compare our results to this method. Scale-invariant local states [11] is a slight variation in the representation of the SILTP feature. For comparison, we use SILTP results from Liao et al because in Yuk and Wong [11], human judgement was used to vary a size threshold parameter for each video.…”
Section: Results
confidence: 99%
“…Scale-invariant local states [11] is a slight variation in the representation of the SILTP feature. For comparison, we use SILTP results from Liao et al because in Yuk and Wong [11], human judgement was used to vary a size threshold parameter for each video. We believe results from the latter fall under a different category of human-assisted backgrounding and hence do not compare to our method where no video-specific hand-tuning of parameters was done.…”
Section: Results
confidence: 99%
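The SILTP feature the excerpts refer to (Liao et al.) encodes each neighbour of a pixel with two bits according to whether it falls above, below, or inside a tolerance band proportional to the centre intensity, which is what makes the code invariant to multiplicative illumination scaling. A minimal sketch assuming 4 axis-aligned neighbours and a vectorized NumPy layout (the original operator is typically defined over 8 neighbours; the function name and defaults here are illustrative):

```python
import numpy as np

def siltp_codes(img, tau=0.05):
    """Sketch of an SILTP-style encoding over the 4 axis-aligned neighbours.

    Each neighbour n of a centre pixel c contributes 2 bits:
    bit '10' if n > (1 + tau) * c, bit '01' if n < (1 - tau) * c,
    and '00' if n lies inside the tolerance band. Because the band
    scales with c, multiplying the whole image by a constant leaves
    the codes unchanged (scale invariance).
    """
    img = img.astype(np.float64)
    c = img[1:-1, 1:-1]  # interior pixels (borders skipped for simplicity)
    neighbours = [img[:-2, 1:-1],   # up
                  img[2:, 1:-1],    # down
                  img[1:-1, :-2],   # left
                  img[1:-1, 2:]]    # right
    code = np.zeros_like(c, dtype=np.int32)
    for n in neighbours:
        upper = (n > (1 + tau) * c).astype(np.int32)
        lower = (n < (1 - tau) * c).astype(np.int32)
        code = (code << 2) | (upper << 1) | lower
    return code
```

A flat image produces all-zero codes, and scaling the input by a constant leaves the codes unchanged, which is the property the citing papers exploit for outdoor scenes with varying illumination.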