2007
DOI: 10.1088/1748-0221/2/11/p11007

A multivariate approach to heavy flavour tagging with cascade training

Abstract: This paper compares the performance of artificial neural networks and boosted decision trees, with and without cascade training, for tagging b-jets in a collider experiment. It is shown, using a Monte Carlo simulation of W H → lνqq events, that for a b-tagging efficiency of 50%, the light jet rejection power given by boosted decision trees without cascade training is about 55% higher than that given by artificial neural networks. The cascade training technique can improve the performance of boosted decision tr…
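The abstract quotes light-jet rejection at a fixed 50% b-tagging efficiency, the standard working-point comparison for taggers. As a minimal sketch (not the paper's code), the rejection can be computed from classifier scores by cutting at the signal-score quantile that passes the target efficiency; the toy score distributions below are illustrative only.

```python
import numpy as np

def light_rejection_at_efficiency(sig_scores, bkg_scores, target_eff=0.5):
    """Light-jet rejection (1 / mistag rate) at a fixed b-tag efficiency.

    The score threshold is chosen so that `target_eff` of the b-jets
    (signal) pass; rejection is the inverse of the light-jet
    (background) fraction passing the same cut.
    """
    # Threshold at the (1 - efficiency) quantile of the signal scores.
    cut = np.quantile(sig_scores, 1.0 - target_eff)
    mistag = np.mean(bkg_scores >= cut)
    return np.inf if mistag == 0 else 1.0 / mistag

# Toy scores: b-jets peak near 1, light jets near 0 (illustrative only).
rng = np.random.default_rng(0)
b_scores = rng.beta(5, 2, size=100_000)
light_scores = rng.beta(2, 5, size=100_000)
print(light_rejection_at_efficiency(b_scores, light_scores))
```

A higher rejection at the same efficiency means fewer light jets are mistagged, which is the figure of merit the paper's 55% comparison refers to.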


Cited by 2 publications (3 citation statements)
References 22 publications
“…As suggested in Ref. [67], the maximum track transverse and longitudinal impact parameter significances are included. In addition to the Generalized Jet Probability P b,LF (y jet = b|X = {x i }), the mean r Generalized Jet Probability is also quite important, on par with ∆Z(µ).…”
Section: Bottom Jets vs Light Flavor Jets
confidence: 99%
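The statement above cites this paper for using the maximum track transverse and longitudinal impact parameter significances as tagging inputs. An impact parameter significance is the measured displacement divided by its uncertainty, d/σ(d); a minimal sketch of the per-jet maximum (helper and variable names are illustrative, not from the cited analyses):

```python
import numpy as np

def max_ip_significance(d0, sigma_d0):
    """Maximum impact-parameter significance |d0| / sigma(d0) over the
    tracks in a jet. Tracks from displaced B-hadron decays tend to give
    a large maximum, which is what makes this variable discriminating."""
    d0 = np.asarray(d0, dtype=float)
    sigma_d0 = np.asarray(sigma_d0, dtype=float)
    return np.max(np.abs(d0) / sigma_d0)

# Three tracks in one jet (toy numbers): the displaced third track
# dominates the maximum.
print(max_ip_significance([0.01, 0.15, -0.30], [0.02, 0.03, 0.05]))
```

The same construction applies to the longitudinal impact parameter by substituting z0 and σ(z0).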
“…Artificial Neural Networks (ANN) and Boosted Decision Trees (BDT) [1,2,3,4,5,6] are two important data analysis tools that have wide application in High Energy Physics experiments for particle identification and for event pattern recognition [7,8,9,10]. Both methods 'train' the 'networks' or the 'trees' based on a set of 'signal' and 'background' features (physical quantities) to obtain a powerful discriminant variable that distinguishes signal from background.…”
Section: Introduction
confidence: 99%
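The citation above describes the common workflow: train on labelled signal and background features to obtain a single discriminant variable. A minimal sketch of that idea with a boosted decision tree, using scikit-learn's `GradientBoostingClassifier` on toy Gaussian features (this is an illustration of the technique, not the cited papers' implementations):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 2000
# Two toy features per "jet"; the signal population is shifted
# relative to the background (purely illustrative data).
sig = rng.normal(loc=1.0, scale=1.0, size=(n, 2))
bkg = rng.normal(loc=0.0, scale=1.0, size=(n, 2))
X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])

bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3)
bdt.fit(X, y)

# The per-event signal probability plays the role of the powerful
# discriminant variable the quoted passage describes.
disc = bdt.predict_proba(X)[:, 1]
```

Cutting on `disc` then trades signal efficiency against background rejection, exactly the comparison the paper makes between ANN and BDT discriminants.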
“…The main purpose of this paper is to compare the training performance with and without event reweighting. Performance comparisons between ANN and BDT can be found in the contexts of MiniBooNE neutrino oscillation analysis [2,4], D0 single top discovery [9] and B-tagging [10].…”
Section: Introduction
confidence: 99%
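The citing paper above studies training with event reweighting. As a hedged sketch of how per-event weights typically enter BDT training (using scikit-learn's `sample_weight` argument; the weights here are random and purely illustrative, not the cited paper's reweighting scheme):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 1000
X = np.vstack([rng.normal(1.0, 1.0, (n, 2)),
               rng.normal(0.0, 1.0, (n, 2))])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Hypothetical per-event weights, standing in for e.g. cross-section
# or efficiency corrections; drawn at random here for illustration.
w = rng.uniform(0.5, 1.5, size=2 * n)

bdt = GradientBoostingClassifier(n_estimators=50, max_depth=3)
# The weights scale each event's contribution to the loss at every
# boosting iteration.
bdt.fit(X, y, sample_weight=w)
disc = bdt.predict_proba(X)[:, 1]
```

Reweighting lets a training sample mimic a different underlying distribution without regenerating events, which is what makes the with/without-reweighting comparison meaningful.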