2007
DOI: 10.1007/s00216-007-1461-2
Study of the feasibility of distinguishing cigarettes of different brands using an Adaboost algorithm and near-infrared spectroscopy

Abstract: The feasibility of utilizing an Adaboost algorithm in conjunction with near-infrared (NIR) spectroscopy to automatically distinguish cigarettes of different brands was explored. Simple linear discriminant analysis (LDA) was used as the base algorithm to train all weak classifiers in Adaboost. Both principal component analysis (PCA) and its kernel version (kernel principal component analysis, KPCA) were used for feature extraction and were also compared to each other. The influence of the training set size on th…

Cited by 41 publications (15 citation statements)
References 30 publications
“…Boosting refers to a general and provably effective method of producing a very accurate classification rule by combining rough and moderately accurate rules (weak classifiers). Adaboost (adaptive boosting) is the most popular boosting algorithm [23][24][25]. In Adaboost, the weak classifiers are trained sequentially, one at a time.…”
Section: Theory and Methods
confidence: 99%
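The sequential training of weak classifiers described in the quote above can be sketched as follows. This is a minimal illustration using decision-stump weak learners on a two-class problem (the paper itself uses LDA weak classifiers), not the authors' implementation:

```python
import numpy as np

def adaboost_train(X, y, n_rounds=10):
    """Minimal AdaBoost sketch: weak classifiers (decision stumps here)
    are trained one at a time on reweighted samples. Labels y in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # start with uniform sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        # fit the best threshold stump under the current weights
        for f in range(X.shape[1]):
            for thr in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, f] >= thr, 1, -1)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign)
        err, f, thr, sign = best
        err = max(err, 1e-10)                    # avoid log(0) on a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)    # weight of this weak classifier
        pred = sign * np.where(X[:, f] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)           # upweight misclassified samples
        w /= w.sum()
        stumps.append((alpha, f, thr, sign))
    return stumps

def adaboost_predict(stumps, X):
    """Combine the weak classifiers into the boosted decision rule."""
    score = sum(a * s * np.where(X[:, f] >= t, 1, -1)
                for a, f, t, s in stumps)
    return np.sign(score)
```

The reweighting step is what makes each successive weak classifier focus on the samples its predecessors got wrong, which is the core of the "combining rough and moderately accurate rules" idea.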
“…The training set and the test set consist of 56 and 57 samples, respectively. By this means, the training set and the test set exhibited approximately the same information distribution, which makes it valid to use the test set for measuring the performance of a calibration model constructed on the training set [29]. Table 1 summarizes a few descriptive statistics, including the mean, standard deviation (SD), minimum, and maximum.…”
Section: Sample Set Partitioning and Primary Statistics
confidence: 99%
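The quoted passage describes partitioning so that the training and test sets share approximately the same information distribution. One common algorithm for such distribution-matched splits in chemometrics is Kennard–Stone; the following is a minimal sketch of that idea (whether it is the exact method of the paper's ref. [29] is an assumption):

```python
import numpy as np

def kennard_stone_split(X, n_train):
    """Pick n_train maximally spread samples as the training set, so the
    training and test subsets cover roughly the same region of feature space.
    Sketch of one common partitioning scheme, not necessarily the paper's."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    # seed the training set with the two most distant samples
    i, j = np.unravel_index(np.argmax(d), d.shape)
    train = [i, j]
    rest = [k for k in range(len(X)) if k not in train]
    while len(train) < n_train:
        # add the candidate farthest from its nearest already-selected sample
        k = max(rest, key=lambda r: d[r, train].min())
        train.append(k)
        rest.remove(k)
    return train, rest
```

Because each new training sample is chosen to be far from those already selected, the leftover test samples end up interleaved with the training samples across the feature space, giving the matched distributions the quote relies on.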
“…The most popular boosting algorithm, i.e., AdaBoost [24,25,29], is used in this paper. Suppose there is a two-class classification problem.…”
Section: Boosting Classification
confidence: 99%