2021
DOI: 10.48550/arxiv.2104.04543
Preprint

Understanding Event-Generation Networks via Uncertainties

Abstract: Following the growing success of generative neural networks in LHC simulations, the crucial question is how to control the networks and assign uncertainties to their event output. We show how Bayesian normalizing flow or invertible networks capture uncertainties from the training and turn them into an uncertainty on the event weight. Fundamentally, the interplay between density and uncertainty estimates indicates that these networks learn functions in analogy to parameter fits rather than binned event counts. …
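To make the abstract's central point concrete, here is a minimal sketch of how posterior spread over network parameters can be propagated into a per-event weight uncertainty. It is our illustration, not the paper's code: the toy one-parameter-family density, the Gaussian "posterior", and all variable names are assumptions standing in for a Bayesian INN trained on LHC events.

```python
# Minimal sketch (not the paper's setup): how a Bayesian density estimator can
# turn posterior spread over network weights into an uncertainty on the event
# weight. A toy 1D Gaussian "flow" stands in for the Bayesian INN; its two
# parameters (mu, log_sigma) carry Gaussian posterior uncertainty.
import numpy as np

rng = np.random.default_rng(0)

# Assumed variational posterior over the two "network" parameters
post_mean = np.array([0.1, -0.05])   # posterior means of (mu, log_sigma)
post_std  = np.array([0.03, 0.02])   # posterior standard deviations

def log_density(x, params):
    """Log density of the toy model for events x given one parameter draw."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    return -0.5 * ((x - mu) / sigma) ** 2 - log_sigma - 0.5 * np.log(2 * np.pi)

events = rng.normal(size=1000)        # stand-in for generated events

# Sample K parameter configurations from the posterior and evaluate the
# per-event density for each of them.
K = 50
draws = post_mean + post_std * rng.normal(size=(K, 2))
densities = np.exp([log_density(events, d) for d in draws])   # shape (K, N)

# The posterior-mean density defines the event weight, the spread its uncertainty.
weight       = densities.mean(axis=0)
weight_sigma = densities.std(axis=0)
print(weight[:3], weight_sigma[:3])
```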

Cited by 15 publications (24 citation statements)
References 66 publications (79 reference statements)
“…They allow access to the Jacobian and both directions of the mapping, linking density estimation in the physics and latent spaces in a completely controlled manner. We have used the flexible INN setup successfully for precision event generation [93,94], unfolding detector effects [95], and QCD or astro-particle inference [96,97].…”
Section: INN (mentioning)
confidence: 99%
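The quoted description of the INN setup (both directions of the mapping plus access to the Jacobian) can be illustrated with a single affine coupling block. This is a hedged sketch under our own toy choices, not the implementation cited above; the "subnetworks" are random linear maps and the event dimension is two.

```python
# Sketch of one affine coupling block showing the three INN ingredients:
# a forward map to latent space, the exact inverse, and the log-Jacobian
# that links the physics-space density to the latent-space density.
import numpy as np

rng = np.random.default_rng(1)

# Toy "subnetworks" s(.) and t(.); in a real INN these are small neural nets.
W_s, W_t = rng.normal(size=(2, 1, 1)) * 0.1

def forward(x):
    """Physics space -> latent space, returning z and log|det J|."""
    x1, x2 = x[:, :1], x[:, 1:]
    s, t = x1 @ W_s, x1 @ W_t
    z2 = x2 * np.exp(s) + t
    return np.concatenate([x1, z2], axis=1), s.sum(axis=1)

def inverse(z):
    """Latent space -> physics space (exact, no iteration needed)."""
    z1, z2 = z[:, :1], z[:, 1:]
    s, t = z1 @ W_s, z1 @ W_t
    x2 = (z2 - t) * np.exp(-s)
    return np.concatenate([z1, x2], axis=1)

x = rng.normal(size=(5, 2))                      # toy 2D "events"
z, log_det = forward(x)

# Change of variables: log p_physics(x) = log p_latent(z) + log|det J|
log_p_latent = -0.5 * (z ** 2).sum(axis=1) - np.log(2 * np.pi)
log_p_physics = log_p_latent + log_det

assert np.allclose(inverse(z), x)                # both directions accessible
print(log_p_physics)
```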
“…[44], for a discussion of training-related uncertainties using Bayesian normalizing flows see Refs. [45,46]. The properties of online training, specifically seeing every event independently and only once, are in tension with training generative models.…”
Section: Online Training (mentioning)
confidence: 99%
“…These weights are independent for each flow in the ensemble and are kept constant as the flow iterates over one buffer. We leave the further exploration of this ad hoc solution and alternative methods such as Bayesian flows [45,52,53] to the future.…”
Section: Data Model Training (mentioning)
confidence: 99%
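A hedged reading of the buffered-ensemble scheme in that statement: each flow in the ensemble receives its own per-event weights, drawn once per buffer and held fixed while the flow iterates over that buffer. The Poisson bootstrap, the `train_step` placeholder, and all hyperparameters below are our assumptions, not details from the cited work.

```python
# Hedged sketch of the buffered-ensemble idea (our reading, not the cited
# code): each flow gets its own per-event weights, drawn once per buffer and
# kept fixed while that flow trains on the buffer; the weights are redrawn
# only when a new buffer arrives. `train_step` is a placeholder for one
# weighted gradient update.
import numpy as np

rng = np.random.default_rng(2)
n_flows = 4

def train_step(flow_id, buffer, weights):
    # Placeholder: one gradient step of flow `flow_id` on the weighted buffer.
    pass

def process_buffer(buffer, epochs_per_buffer=3):
    n_events = len(buffer)
    # Independent weights per flow, fixed for the lifetime of this buffer.
    flow_weights = rng.poisson(1.0, size=(n_flows, n_events))
    for _ in range(epochs_per_buffer):
        for flow_id in range(n_flows):
            train_step(flow_id, buffer, flow_weights[flow_id])

# Online stream: each incoming buffer is seen once, then discarded.
for _ in range(10):
    process_buffer(rng.normal(size=(512, 2)))
```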
“…GANplification arises, intuitively, from the fact that neural networks work like classical parametric fits [4,5], and they are particularly effective when we want to interpolate in many dimensions. This feature is behind the success of the NNPDF parton densities [6] as the first mainstream ML-application in particle theory.…”
Section: Introduction (mentioning)
confidence: 99%