2016
DOI: 10.1007/978-3-319-47106-8_18
Deep Parameter Optimisation for Face Detection Using the Viola-Jones Algorithm in OpenCV

Abstract: OpenCV is a commonly used computer vision library containing a wide variety of algorithms for the AI community. This paper uses deep parameter optimisation to investigate improvements to face detection using the Viola-Jones algorithm in OpenCV, allowing a tradeoff between execution time and classification accuracy. Our results show that execution time can be decreased by 48% if a 1.80% classification inaccuracy is permitted (compared to the 1.04% classification inaccuracy of the original, unmodified algo…
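The tradeoff the abstract describes can be seen even through the parameters OpenCV already exposes. The sketch below uses the real cv2.CascadeClassifier API to time face detection at two settings of scaleFactor and minNeighbors; the paper's actual "deep" parameters are constants inside OpenCV's C++ source and are not reachable from this API, so this only illustrates the time/accuracy dial, not the paper's method. The image path faces.jpg is a placeholder.

```python
# Sketch: the speed/accuracy tradeoff in OpenCV's Viola-Jones detector.
# Only the *exposed* detectMultiScale parameters are varied here; the
# paper's "deep" parameters live inside OpenCV's C++ source. The cascade
# file is the stock OpenCV model; "faces.jpg" is a placeholder input.
import time
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
image = cv2.imread("faces.jpg")  # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# A coarser scale step and stricter neighbour threshold run faster but
# miss more faces; finer settings are slower and more accurate.
for scale_factor, min_neighbors in [(1.05, 3), (1.3, 5)]:
    start = time.perf_counter()
    faces = cascade.detectMultiScale(
        gray,
        scaleFactor=scale_factor,    # image pyramid step between scales
        minNeighbors=min_neighbors,  # detections needed to keep a window
        minSize=(20, 20))            # smallest search window
    elapsed = time.perf_counter() - start
    print(f"scaleFactor={scale_factor} minNeighbors={min_neighbors}: "
          f"{len(faces)} faces in {elapsed:.3f}s")
```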

Cited by 19 publications (8 citation statements) | References 13 publications
“…Its main task is the detection of persons, but it can also be used for other kinds of object recognition. This algorithm is implemented in the Open Source Computer Vision Library (OpenCV, 2016) and has the following characteristics: the image has an integral representation (Gonzalez and Woods, 2007), which allows the necessary objects to be found at high speed:
- the object is searched for using Haar features (Haar Cascades, 2008; Padilla et al., 2012);
- boosting is used, i.e. the selection of the characteristics most suitable for the desired object in the selected part of the image (Viola and Jones, 2004);
- a classifier previously trained on faces (Bruce et al., 2016), scaled to 20x20 pixels, accepts features as input and outputs the binary result "true" or "false";
- cascades of features, consisting of several classifiers, are used to quickly discard windows in which the object is not found (Felzenszwalb et al., 2010).…”
Section: Description of the Parametric Representation Methods
confidence: 99%
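The integral representation this snippet credits for the algorithm's speed reduces any rectangle sum to four table lookups, which is what makes Haar feature evaluation cheap. Below is a minimal sketch of the standard technique, assuming NumPy; it is not the paper's or OpenCV's code.

```python
# Minimal sketch of the integral-image trick described above: after one
# pass over the image, the sum of any rectangle costs four lookups.
# Standard technique, not the paper's code.
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Cumulative sums over rows and columns, padded with a zero border."""
    ii = img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii: np.ndarray, top: int, left: int, h: int, w: int) -> int:
    """Sum of pixels in the h x w rectangle at (top, left), in O(1)."""
    return int(ii[top + h, left + w] - ii[top, left + w]
               - ii[top + h, left] + ii[top, left])

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()
```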
“…However, we can find no evidence that permitting approximation will have any negative effect on reducing energy consumption. Many multi-objective optimisation methods are available [45] and have already seen adoption in genetic improvement research [13], [14], [42], [62]. We believe integrating 'output quality' as an objective can significantly benefit future projects.…”
Section: Discussion
confidence: 99%
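As a concrete illustration of what a multi-objective setup buys here, the sketch below applies the standard Pareto-dominance test to variants scored on execution time and classification inaccuracy, the two objectives this paper trades off. The Variant records and field names are hypothetical; the first two data points echo the paper's reported figures, and this is not any of the cited methods' code.

```python
# Hedged sketch of the Pareto-dominance test at the core of most
# multi-objective methods, with execution time and classification
# inaccuracy as the two objectives (both minimised). Variant records
# and field names are hypothetical illustrations.
from typing import NamedTuple

class Variant(NamedTuple):
    name: str
    time_s: float       # execution time, seconds
    inaccuracy: float   # classification inaccuracy, fraction

def dominates(a: Variant, b: Variant) -> bool:
    """a dominates b: no worse on every objective, better on at least one."""
    no_worse = a.time_s <= b.time_s and a.inaccuracy <= b.inaccuracy
    better = a.time_s < b.time_s or a.inaccuracy < b.inaccuracy
    return no_worse and better

def pareto_front(variants: list[Variant]) -> list[Variant]:
    """Variants not dominated by any other: the time/quality tradeoff curve."""
    return [v for v in variants
            if not any(dominates(u, v) for u in variants)]

pop = [Variant("original", 1.00, 0.0104),
       Variant("fast", 0.52, 0.0180),   # the paper's reported tradeoff
       Variant("worse", 1.10, 0.0200)]  # dominated by both others
print(pareto_front(pop))  # [original, fast]
```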
“…
if testOracle(O_P, t) ∧ J < b_l then
    E ← E ∪ {δ}
end if
end for
return E

Even when sampling, we cannot evaluate every variant against all available test cases. Variants that are inert, produce software that breaks hard constraints, or increase energy consumption are uninteresting.…”
Section: Algorithm 1, The Filtering
confidence: 99%
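Only the tail of Algorithm 1 is visible in the snippet, but its intent reads clearly: keep an edit δ only if the patched program still satisfies the test oracle and its measured cost J beats the baseline b_l. Below is a hedged Python rendering of just that fragment; the unseen parts of the algorithm (apply_edit, test_oracle, measure_cost) are left as caller-supplied stubs.

```python
# Hedged rendering of the visible tail of Algorithm 1: keep an edit δ
# only if the patched program still passes the test oracle and its
# measured cost J falls below the baseline b_l. The sampling and
# measurement steps are stubs standing in for the unseen parts.
from typing import Callable, Iterable, TypeVar

Edit = TypeVar("Edit")

def filter_edits(edits: Iterable[Edit],
                 apply_edit: Callable[[Edit], object],
                 test_oracle: Callable[[object], bool],
                 measure_cost: Callable[[object], float],
                 baseline: float) -> list[Edit]:
    """Return the edits worth keeping (E in the algorithm)."""
    kept = []                                        # E ← ∅
    for delta in edits:                              # for each sampled edit δ
        program = apply_edit(delta)
        cost = measure_cost(program)                 # J
        if test_oracle(program) and cost < baseline: # testOracle ∧ J < b_l
            kept.append(delta)                       # E ← E ∪ {δ}
    return kept                                      # return E
```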
“…The runtime stands for the entire processing time of the big data sets on the machine (Bruce et al., 2016). The memory consumption is the memory required while processing the entire big data set; it can be classified as virtual memory, heap memory, and physical memory (Zhuang et al., 2015).…”
Section: Grid Computing System
confidence: 99%
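For illustration, the two metrics this snippet defines can be measured with the Python standard library alone: wall-clock runtime via time.perf_counter and peak heap memory via tracemalloc. This is a generic sketch, not the instrumentation of the cited grid systems, and workload is a placeholder job.

```python
# Hedged sketch of measuring the two metrics defined above: wall-clock
# runtime and peak heap memory of a workload. Standard library only;
# `workload` is a placeholder for the actual big-data job.
import time
import tracemalloc

def workload() -> int:
    """Placeholder for the big-data processing job."""
    return sum(i * i for i in range(1_000_000))

tracemalloc.start()
start = time.perf_counter()
result = workload()
runtime_s = time.perf_counter() - start
_, peak_bytes = tracemalloc.get_traced_memory()  # (current, peak)
tracemalloc.stop()

print(f"runtime: {runtime_s:.3f}s, peak heap: {peak_bytes / 1e6:.1f} MB")
```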