2018
DOI: 10.1007/JHEP10(2018)121

Pulling out all the tops with computer vision and deep learning

Abstract: We apply computer vision with deep learning, in the form of a convolutional neural network (CNN), to build a highly effective boosted top tagger. Previous work (the "DeepTop" tagger of Kasieczka et al.) has shown that a CNN-based top tagger can achieve comparable performance to state-of-the-art conventional top taggers based on high-level inputs. Here, we introduce a number of improvements to the DeepTop tagger, including architecture, training, image preprocessing, sample size and color pixels. Our final CNN t…
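As a rough illustration of the kind of model the abstract describes, the sketch below builds a small convolutional network on jet images with several "color" channels (for example calorimeter energy and tracking information per pixel). The 40x40 image size, channel count, layer widths and filter sizes are assumptions of this sketch, not the architecture or hyperparameters used in the paper.

```python
# Minimal sketch of a CNN top tagger acting on multi-channel jet images.
# All sizes below are illustrative assumptions, not the paper's setup.
import tensorflow as tf

def build_tagger(image_size=40, n_channels=3):
    """Binary classifier: boosted top jet vs. QCD jet."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(image_size, image_size, n_channels)),
        tf.keras.layers.Conv2D(64, 4, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(64, 4, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(64, 4, padding="same", activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(top)
    ])

model = build_tagger()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```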

Cited by 163 publications (178 citation statements)
References 76 publications (112 reference statements)
“…These are of the general form of IRC-safe angularities [74] with a generic radially-symmetric angular weighting function [85]. To quantify the filters further, in Fig. 13c we plot the value of the learned filters as a function of the radial distance, taking an envelope over several radial slices.…”
Section: Extracting New Observables From the Model (mentioning)
confidence: 99%
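The quoted passage identifies the learned filters with IRC-safe angularities built from a radially symmetric weighting function. A minimal numerical sketch of that observable family is given below; the power-law weighting f(ΔR) = ΔR² is an illustrative assumption, not the filter shape extracted in the cited work.

```python
# IRC-safe angularity: lambda = sum_i z_i f(DeltaR_i), z_i = pT_i / sum_j pT_j,
# with a radially symmetric weighting f.  The f used here is an assumption.
import numpy as np

def angularity(pt, delta_r, f=lambda r: r**2):
    pt = np.asarray(pt, dtype=float)
    delta_r = np.asarray(delta_r, dtype=float)
    z = pt / pt.sum()                    # momentum fractions
    return float(np.sum(z * f(delta_r)))

# Example: three constituents (pT in GeV, DeltaR to the jet axis)
print(angularity(pt=[120.0, 60.0, 20.0], delta_r=[0.05, 0.2, 0.4]))
```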
“…The top jets are truth-matched, and the images include the improved pre-processing taken from Ref. [9]. The constituents for the LoLa tagger are extracted through the Delphes energyflow algorithm, and the 4-momenta of the leading 200 constituents are stored.…”
Section: Performance (mentioning)
confidence: 99%
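For the constituent-based input described in this quote, a minimal sketch of keeping the four-momenta of the leading 200 constituents could look like the following; the (E, px, py, pz) layout and zero-padding convention are assumptions of the sketch, and the cited work obtains its constituents from the Delphes energy-flow algorithm.

```python
# Keep the four-momenta of the leading-pT constituents, zero-padded to n_max.
import numpy as np

def leading_constituents(four_momenta, n_max=200):
    """four_momenta: array of shape (n, 4) as (E, px, py, pz) -- assumed layout."""
    p = np.asarray(four_momenta, dtype=float)
    pt = np.hypot(p[:, 1], p[:, 2])          # transverse momentum
    p = p[np.argsort(pt)[::-1]][:n_max]      # leading-pT constituents first
    out = np.zeros((n_max, 4))
    out[:len(p)] = p                         # zero-pad shorter jets
    return out
```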
“…To improve the performance of our taggers, we preprocess each image, following a similar procedure as in ref. [16]: centralization, rotation and flipping. In figures 3 and 4, we use φ and η to denote the new coordinate system for the images after preprocessing.…”
Section: Jet Images (mentioning)
confidence: 99%
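A minimal sketch of the three preprocessing steps named in this quote (centering, rotation and flipping), applied to constituent (η, φ, pT) lists before pixelisation. Using the pT-weighted centroid for centering and the principal axis for rotation is a common convention assumed here, not necessarily the exact prescription of ref. [16].

```python
# Centre, rotate and flip a jet's constituents in the (eta, phi) plane.
import numpy as np

def preprocess(eta, phi, pt):
    eta, phi, pt = (np.asarray(a, dtype=float) for a in (eta, phi, pt))
    # 1. Centre on the pT-weighted centroid
    eta = eta - np.average(eta, weights=pt)
    phi = phi - np.average(phi, weights=pt)
    # 2. Rotate so the principal axis of the pT distribution points along phi
    cov = np.cov(np.vstack([eta, phi]), aweights=pt)
    _, vecs = np.linalg.eigh(cov)
    major = vecs[:, -1]                      # eigenvector of largest eigenvalue
    theta = np.arctan2(major[0], major[1])
    c, s = np.cos(theta), np.sin(theta)
    eta, phi = c * eta - s * phi, s * eta + c * phi
    # 3. Flip so that most of the pT lies at positive eta
    if pt[eta > 0].sum() < pt[eta < 0].sum():
        eta = -eta
    return eta, phi, pt
```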