2022
DOI: 10.1088/1748-0221/17/09/p09028

Calomplification — the power of generative calorimeter models

Abstract: Motivated by the high computational costs of classical simulations, machine-learned generative models can be extremely useful in particle physics and elsewhere. They become especially attractive when surrogate models can efficiently learn the underlying distribution, such that a generated sample outperforms a training sample of limited size. This kind of GANplification has been observed for simple Gaussian models. We show the same effect for a physics simulation, specifically photon showers in an electromagnetic calorimeter.
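To make the amplification idea concrete, the following minimal sketch (not taken from the paper) fits a parametric "generative model" to a small Gaussian training sample, oversamples it, and compares how well the training sample and the generated sample reproduce the true binned distribution. The bin choice, sample sizes, and the mean-squared-error metric are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy "truth": a unit Gaussian. The surrogate generative model is simply a
# Gaussian fit to a small training sample.
n_train = 100
train = rng.normal(0.0, 1.0, n_train)
mu_hat, sigma_hat = train.mean(), train.std(ddof=1)

# Oversample from the fitted surrogate.
generated = rng.normal(mu_hat, sigma_hat, 100_000)

# Compare binned probabilities against the analytic truth.
edges = np.linspace(-3.0, 3.0, 21)
truth = np.diff(norm.cdf(edges))  # true probability content of each bin

def binned_mse(sample):
    counts, _ = np.histogram(sample, bins=edges)
    return np.mean((counts / len(sample) - truth) ** 2)

print("training-sample error :", binned_mse(train))
print("generated-sample error:", binned_mse(generated))
# When the surrogate captures the shape well, the large generated sample
# can track the truth more closely than the limited training sample.
```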


Cited by 29 publications (15 citation statements)
References 45 publications
“…As becomes apparent, the mean AUC of L2LFLOWS worsens with more showers, which is unsurprising, as with more statistics, the classifier can find more differences between the GEANT4- and L2LFLOWS-generated showers. At an even larger number of showers used for classifier training, we would expect the finite size of the generator training set to become an issue, too [47][48][49]. Nevertheless, we observe that for a given number of showers, the BIB-AE showers are more separable from GEANT4 than the L2LFLOWS showers, indicating a better performance of L2LFLOWS.…”
Section: Classifier Tests (mentioning)
confidence: 78%
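The classifier test quoted above can be sketched in a few lines: train a binary classifier to separate reference showers from generated ones and report the ROC AUC, where 0.5 means the two samples are indistinguishable. The toy feature vectors and the gradient-boosting model below are illustrative assumptions, not the setup of the cited works.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Stand-in samples: "reference" and slightly mismodelled "generated" feature
# vectors (e.g. per-shower observables such as energy sums or shower widths).
n = 5000
reference = rng.normal(0.0, 1.0, size=(n, 5))
generated = rng.normal(0.05, 1.05, size=(n, 5))  # small deliberate distortion

X = np.vstack([reference, generated])
y = np.concatenate([np.zeros(n), np.ones(n)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clf = GradientBoostingClassifier().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"classifier AUC: {auc:.3f}")  # 0.5 = indistinguishable, 1.0 = fully separable
```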
“…Given the interpolation properties of neural networks and the benefits of their implicit bias in the applications described in Sec. 2, we can quantify the amplification of statistics-limited training data through generative networks [65,66].…”
Section: End-to-end ML-generators (mentioning)
confidence: 99%
“…Neural networks work much like a fit and not like an interpolation in the sense that they do not reproduce the training data faithfully and instead learn a smooth approximation [65,66]. This is where we can gain some intuition for a NN-uncertainty treatment.…”
Section: Control and Precision (mentioning)
confidence: 99%
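The fit-versus-interpolation distinction quoted above can be illustrated with a toy regression: an exact interpolant reproduces every noisy training point, while a low-capacity fit smooths over the noise, which is the behaviour attributed to neural networks here. The polynomial degrees and noise level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy samples of a smooth truth function.
x = np.linspace(0.0, 1.0, 12)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)

# Exact interpolation: a degree n-1 polynomial passes through every point.
interp = np.polynomial.Polynomial.fit(x, y, deg=x.size - 1)
# Low-capacity fit: a cubic smooths over the noise instead.
fit = np.polynomial.Polynomial.fit(x, y, deg=3)

x_test = np.linspace(0.0, 1.0, 200)
truth = np.sin(2 * np.pi * x_test)
for name, model in [("interpolant", interp), ("low-order fit", fit)]:
    rmse = np.sqrt(np.mean((model(x_test) - truth) ** 2))
    print(f"{name:13s} RMSE vs truth: {rmse:.3f}")
```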
“…For these applications, the generative models need to learn the underlying data distributions with the help of a strong inductive bias on the model architecture. In this way, it is possible to amplify the training statistics [2,3] as well as utilize generative modeling as a powerful data augmentation technique [4].…”
Section: Introduction (mentioning)
confidence: 99%
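As a rough illustration of the augmentation point in the last quoted sentence (not the method of the cited reference), one can fit a simple per-class density to a scarce labelled set, draw additional samples from it, and check whether a downstream classifier benefits. The Gaussian density, sample sizes, and logistic-regression classifier are toy assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)

def make_data(n_per_class):
    a = rng.normal([0.0, 0.0], 1.0, size=(n_per_class, 2))
    b = rng.normal([1.5, 1.5], 1.0, size=(n_per_class, 2))
    X = np.vstack([a, b])
    y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])
    return X, y

X_small, y_small = make_data(20)    # scarce labelled training data
X_test, y_test = make_data(2000)    # large held-out test set

# "Generative model": a per-class Gaussian fitted to the small training set,
# used to draw additional synthetic examples.
aug_X, aug_y = [X_small], [y_small]
for label in (0, 1):
    cls = X_small[y_small == label]
    mu, cov = cls.mean(axis=0), np.cov(cls, rowvar=False)
    aug_X.append(rng.multivariate_normal(mu, cov, size=500))
    aug_y.append(np.full(500, label))
X_aug, y_aug = np.vstack(aug_X), np.concatenate(aug_y)

for name, Xt, yt in [("small set only", X_small, y_small),
                     ("augmented set ", X_aug, y_aug)]:
    clf = LogisticRegression().fit(Xt, yt)
    print(name, accuracy_score(y_test, clf.predict(X_test)))
```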