2011
DOI: 10.1016/j.patcog.2010.11.022
Object detection based on a robust and accurate statistical multi-point-pair model

Cited by 45 publications (31 citation statements)
References 17 publications
“…We compared our algorithm with three methods: (1) GMM, [18] a standard method among independent pixel-wise models; (2) Sheikh's KDE, [15] a representative method among spatial-dependent models, which differs from the original KDE in that it employs KDE over the joint domain (location) and range (intensity) representation of image pixels; and (3) GAP, [16] which is a predecessor of, and methodologically homologous to, CP3. The parameters for GMM were set to the defaults in the OpenCV tool; Sheikh's KDE [15] was set according to the authors' recommendations with a model size of [26,26,26,21,31]; and in GAP, W_G = 20,…”
Section: Results (mentioning)
confidence: 99%
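The baseline setup quoted above relies on OpenCV's GMM background subtractor with default parameters. A minimal sketch of such a baseline, assuming OpenCV's Python bindings and a hypothetical input file "video.avi" (not taken from the cited work), looks like this:

import cv2

# Sketch of the GMM baseline: per-pixel Gaussian mixture model left at
# OpenCV defaults, mirroring "parameters ... set as defaults in OpenCV tool".
cap = cv2.VideoCapture("video.avi")          # hypothetical input path
gmm = cv2.createBackgroundSubtractorMOG2()   # default history / varThreshold

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Each frame updates the mixture model and yields a foreground mask
    # (255 = foreground, 127 = shadow, 0 = background).
    fg_mask = gmm.apply(frame)

cap.release()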
“…Its accuracy makes it operable under several challenging severe imaging conditions. Compared with our earlier work GAP, [16] the proposed method has the following advantages: (1) CP3 employs a unique parametrized statistical model to describe each pixel-pair's co-occurrence, rather than the fixed global double-sided threshold applied to all pixel-pairs in GAP; and (2) CP3 derives a self-adaptive threshold for each target pixel to select better-quality supporting pixels, rather than the predefined threshold used in GAP. Compared with some state-of-the-art independent pixel-wise or spatial-dependent models, such an accurate background model significantly enhances the robustness of object detection in severe imaging conditions, e.g., foggy scenes, low light and noise, sudden illumination changes, and narrow dynamic range, as can be observed in the experimental section.…”
Section: Robust Object Detection In Severe Imaging Conditions (mentioning)
confidence: 99%
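The contrast drawn above between GAP's fixed global double-sided threshold and CP3's per-pair adaptive threshold can be illustrated with a small sketch. This is an assumption-laden illustration, not the authors' formulation: the helper names and the Gaussian-style band around each pair's mean intensity difference are hypothetical.

import numpy as np

def adaptive_pair_threshold(diff_history, k=2.5):
    # Per-pair adaptive band learned from that pair's own history of
    # intensity differences, instead of one global threshold for all pairs.
    mu = float(np.mean(diff_history))
    sigma = float(np.std(diff_history))
    return mu - k * sigma, mu + k * sigma

def pair_is_consistent(diff_now, diff_history, k=2.5):
    # A supporting pixel is kept only if the current difference still falls
    # inside the band learned for this particular pixel pair.
    lo, hi = adaptive_pair_threshold(diff_history, k)
    return lo <= diff_now <= hi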
“…To overcome this problem, we utilize a novel robust feature called Grayscale Arranging Pairs (GAP) [10], [11], which was originally proposed for background subtraction. Compared with other background subtraction methods, the GAP feature builds a more accurate and robust background model that is flexible enough to handle different sets of complex conditions.…”
Section: Introduction (mentioning)
confidence: 99%