2005
DOI: 10.1155/asp.2005.1035
Automatic Hardware Implementation Tool for a Discrete Adaboost-Based Decision Algorithm

Abstract:

We propose a method and a tool for the automatic generation of a hardware implementation of a decision rule based on the Adaboost algorithm. We review the principles of the classification method and evaluate its hardware implementation cost, in terms of FPGA slices, for different weak classifiers based on the general concept of the hyperrectangle. The main novelty of our approach is that the tool allows the user to automatically find an appropriate tradeoff between classification performance and hardware implementation cost.
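To make the decision rule concrete, below is a minimal sketch, under our own illustrative assumptions, of a discrete Adaboost strong classifier H(x) = sign(sum_t alpha_t * h_t(x)) whose weak classifiers test membership in an axis-aligned hyperrectangle. Class names and parameter values are hypothetical and are not taken from the paper's tool; the comments note why this structure maps cheaply to FPGA comparators.

```python
import numpy as np

class HyperrectangleWeakClassifier:
    """Predicts +1 inside an axis-aligned box, -1 outside.

    Each per-dimension bound test maps to one hardware comparator,
    so the number of dimensions drives the per-classifier slice cost.
    """
    def __init__(self, lower, upper):
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)

    def predict(self, x):
        x = np.atleast_2d(x)
        inside = np.all((x >= self.lower) & (x <= self.upper), axis=1)
        return np.where(inside, 1, -1)

class DiscreteAdaboostClassifier:
    """Strong classifier H(x) = sign(sum_t alpha_t * h_t(x))."""
    def __init__(self, weak_classifiers, alphas):
        self.weak_classifiers = weak_classifiers
        self.alphas = alphas

    def predict(self, x):
        score = sum(a * h.predict(x)
                    for a, h in zip(self.alphas, self.weak_classifiers))
        return np.where(score >= 0, 1, -1)

# Example: two weak classifiers voting on 2-D feature vectors.
h1 = HyperrectangleWeakClassifier(lower=[0.0, 0.0], upper=[1.0, 1.0])
h2 = HyperrectangleWeakClassifier(lower=[0.5, 0.5], upper=[2.0, 2.0])
H = DiscreteAdaboostClassifier([h1, h2], alphas=[0.8, 0.4])
print(H.predict([[0.7, 0.7], [1.5, 1.5]]))   # -> [ 1 -1]
```

In hardware, the weighted vote reduces to adding or subtracting fixed constants and comparing the sum against zero, which is what makes the per-design cost/performance tradeoff tractable to explore automatically.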

Cited by 16 publications (11 citation statements); references 29 publications (26 reference statements).
“…These operators, which represent one of the main parts of the processing, can be implemented in parallel [25], as discussed in Sect. 4.2.3.1.…”
Section: Supervised Classification (mentioning; confidence: 99%)
“…Therefore, we have selected this algorithm despite its relatively low processing time compared to the global system processing time. Due to its popularity in image analysis, an efficient implementation of this classifier represents a significant contribution in itself, as attested by the number of references dealing with this issue [25,39,40]. A large list of publications dealing with boosting is available in [4].…”
Section: 2.2.2 (mentioning; confidence: 99%)
“…The core of the training of the second stage is the determination of the hyperrectangle set S_H from a set of samples S. We use the method proposed in [30] to train the weak classifier. The basic idea is to build around each sample {x_i, y_i} ∈ S a box, or hyperrectangle, H(x_i) containing no sample of the opposite class, where, in this paper, x_i is the confidence vector of a sample and y_i = ±1 its label.…”
Section: The Second Stage of Weak Classifier (mentioning; confidence: 99%)
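The quoted statement only names the training idea; the sketch below is one plausible reading of it, not the actual method of [30]: start from an unbounded box around a sample and greedily shrink one face per opposite-class intruder. The function name, the face-selection rule, and the margin parameter are assumptions made here for illustration.

```python
import numpy as np

def build_hyperrectangle(x_i, y_i, samples, labels, margin=1e-6):
    """Grow a box H(x_i) around x_i containing no opposite-class sample.

    Hypothetical greedy strategy (not necessarily the method of [30]):
    for each opposite-class sample still inside the box, pull in the
    face along the dimension where that sample lies farthest from x_i,
    so that x_i itself stays inside.
    """
    x_i = np.asarray(x_i, dtype=float)
    lower = np.full_like(x_i, -np.inf)
    upper = np.full_like(x_i, np.inf)
    for z in np.asarray(samples, dtype=float)[np.asarray(labels) != y_i]:
        if not np.all((z >= lower) & (z <= upper)):
            continue                        # already excluded by an earlier cut
        d = int(np.argmax(np.abs(z - x_i)))  # dimension to cut along
        if z[d] >= x_i[d]:
            upper[d] = z[d] - margin        # move the upper face just below z
        else:
            lower[d] = z[d] + margin        # move the lower face just above z
    return lower, upper
```

A real implementation would also bound the box by the data range and handle the degenerate case z = x_i; those details are omitted here.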
“…This is why we favor simple methods based on the distances between groups and regression-type methods. Previous work has shown the feasibility of pattern recognition in FPGA-based systems (Miteran J., 2005). An LDA method has been chosen.…”
Section: Pattern Recognition: Linear Discriminant Analysis (mentioning; confidence: 99%)
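The citing authors do not specify which LDA variant they use; for orientation only, here is a minimal two-class Fisher LDA sketch in its standard textbook form. The function name is hypothetical and this is not code from the cited work.

```python
import numpy as np

def fit_lda_two_class(X, y):
    """Fisher two-class LDA: w = Sw^{-1} (mu1 - mu0), with the decision
    threshold at the midpoint of the projected class means.
    Predict class 1 when x @ w >= threshold, class 0 otherwise."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix.
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    w = np.linalg.solve(Sw, mu1 - mu0)
    threshold = 0.5 * (mu0 + mu1) @ w
    return w, threshold
```

Once trained, the decision reduces to a single dot product followed by one comparison, which suits an FPGA mapping as a multiply-accumulate chain plus a comparator.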