2012
DOI: 10.1049/el.2011.3160

Pixel-based colour contrast for abandoned and stolen object discrimination in video surveillance

Abstract: This is an author-produced version of the published paper. Copyright: © The Institution of Engineering and Technology 2012. Access to the published version may require subscription.

Cited by 10 publications (7 citation statements). References 4 publications.
“…Our result is 3.7% more accurate than the results from [14]. We argue that this improvement is mainly due to: 1) the diversity of patch shapes, which makes the histogram feature take into consideration (most of the time) suitable regions, 2) the contour feature searching for edges in the internal neighborhood of a blob and in the external neighborhood of the blob convex hull, 3) combining the SUSAN with Sobel edges in the contour feature, and 4) replacing fixed feature thresholds with dynamic ones.…”
Section: Results (supporting)
confidence: 45%
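To make the contour feature described above concrete, the sketch below (an illustration only, not the cited authors' implementation) fuses a simplified SUSAN-style response with Sobel edges and measures edge density in a thin band just inside a blob mask and in a thin band just outside its convex hull. The thresholds and band width are hypothetical fixed values, whereas the cited work replaces fixed thresholds with dynamic ones; OpenCV and NumPy are assumed.

import numpy as np
import cv2

def susan_response(gray, t=25, radius=3):
    # Simplified SUSAN-style response: count neighbours whose intensity lies
    # within t of the centre pixel; few similar neighbours suggests an edge.
    # (np.roll wraps at image borders, which is acceptable for a sketch.)
    gray = gray.astype(np.float32)
    similar = np.zeros_like(gray)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx or dy:
                shifted = np.roll(np.roll(gray, dy, axis=0), dx, axis=1)
                similar += (np.abs(shifted - gray) < t)
    g = 0.75 * ((2 * radius + 1) ** 2 - 1)   # geometric threshold
    return np.maximum(g - similar, 0)

def contour_edge_densities(gray, blob_mask, band=5):
    # Combine Sobel and SUSAN-style edges, then measure edge density in a band
    # just inside the blob and in a band just outside the blob's convex hull.
    # blob_mask is assumed to be a single-channel uint8 mask (0/255).
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = ((cv2.magnitude(gx, gy) > 100) |        # hypothetical threshold
             (susan_response(gray) > 10)).astype(np.float32)

    kernel = np.ones((band, band), np.uint8)
    inner = cv2.subtract(blob_mask, cv2.erode(blob_mask, kernel))
    contours, _ = cv2.findContours(blob_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hull_mask = np.zeros_like(blob_mask)
    for c in contours:
        cv2.drawContours(hull_mask, [cv2.convexHull(c)], -1, 255, -1)
    outer = cv2.subtract(cv2.dilate(hull_mask, kernel), hull_mask)
    return edges[inner > 0].mean(), edges[outer > 0].mean()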
“…In this table, TP stands for the number of correctly classified abandoned objects, FP for removed objects misclassified as abandoned, TN for correctly classified removed objects, and FN for abandoned objects misclassified as removed. The sixth column presents the recall (TP/(TP + FN)), the seventh column presents the accuracy ((TP + TN)/(TP + FP + TN + FN)) of the proposed technique, and the last column presents the best accuracy results achieved by the creators of this dataset [14].…”
Section: Results (mentioning)
confidence: 99%
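For concreteness, here is a minimal sketch of the two measures named in the quoted passage, using the standard confusion-matrix definitions; the counts in the usage line are hypothetical and not taken from the cited evaluation.

def recall_and_accuracy(tp, fp, tn, fn):
    # tp: abandoned objects classified as abandoned
    # fp: removed objects classified as abandoned
    # tn: removed objects classified as removed
    # fn: abandoned objects classified as removed
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return recall, accuracy

# Hypothetical counts for illustration only.
print(recall_and_accuracy(tp=18, fp=2, tn=15, fn=3))  # (0.857..., 0.868...)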
“…In [118], the authors proposed a complete abandoned object detection system using a fusion hybrid approach to classify potential candidates. Another innovative method also combining edge and color information, called Pixel Color Contrast (PCC), was proposed in [119] for this aim. Figure 8 illustrates an example of the color-based approach proposed in [119].…”
Section: Stages Of Abandoned Object Detection (mentioning)
confidence: 99%
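As a rough illustration of the general idea behind colour-contrast based discrimination (a sketch under assumptions, not the PCC method of [119]): compare the colour discontinuity across the static blob's boundary in the current frame with the same measure in the background model. A stronger discontinuity in the current frame suggests a newly placed (abandoned) object, while a stronger discontinuity in the background model suggests the object that used to be there has been removed. Function names and the band width are hypothetical; OpenCV and NumPy are assumed.

import numpy as np
import cv2

def boundary_colour_contrast(image, blob_mask, band=3):
    # Euclidean distance in colour space between the mean colour of a thin band
    # just inside the blob and a thin band just outside it.
    # blob_mask is assumed to be a single-channel uint8 mask (0/255).
    kernel = np.ones((band, band), np.uint8)
    inner = cv2.subtract(blob_mask, cv2.erode(blob_mask, kernel)) > 0
    outer = cv2.subtract(cv2.dilate(blob_mask, kernel), blob_mask) > 0
    img = image.astype(np.float32)
    return float(np.linalg.norm(img[inner].mean(axis=0) - img[outer].mean(axis=0)))

def classify_static_blob(frame, background, blob_mask):
    # Stronger boundary contrast in the current frame: a new object is present
    # (abandoned). Stronger contrast in the background model: the object that
    # was part of the background is gone (removed/stolen).
    c_frame = boundary_colour_contrast(frame, blob_mask)
    c_background = boundary_colour_contrast(background, blob_mask)
    return "abandoned" if c_frame > c_background else "removed"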
“…With this in mind, all identification information recorded, if any, is subject to full-scale masking, or the entirety of the visual data is encrypted to prevent leakage by a third party. However, when images are recovered by a legitimate user with access rights, all of the masked images contained in the visual data will be recovered, or the entirety of the video recording will be decrypted and sent to the administrator in the form of original visual information [5][6][7][8][9]. The original footage contains recovered identification information regarding not only the target being tracked, but also the others whose images are captured on the same video, which may seriously compromise the privacy of those non-target individuals [10][11][12][13].…”
Section: Introduction (mentioning)
confidence: 99%