2022
DOI: 10.21468/SciPostPhys.12.6.187

Comparing machine learning and interpolation methods for loop-level calculations

Abstract: The need to approximate functions is ubiquitous in science, either due to empirical constraints or high computational cost of accessing the function. In high-energy physics, the precise computation of the scattering cross-section of a process requires the evaluation of computationally intensive integrals. A wide variety of methods in machine learning have been used to tackle this problem, but often the motivation of using one method over another is lacking. Comparing these methods is typically highly dependent…

Cited by 12 publications (2 citation statements)
References 59 publications
“…This combination offers new perspectives for system simulation that go beyond 'traditional' characteristic fields (feed forward neural networks) or response surfaces (recurrent neural networks): complex correlations inside data can be directly derived from the data and encoded by the network parameters. Especially for high dimensional data fields this can be much more memory efficient than by using conventional interpolation methods (Chahrour and Wells 2022). Moreover neural networks can be applied in order to increase the level of detail offered by system models at low performance costs, e.g.…”

Section: Support For Hybrid Modelling (mentioning)
Confidence: 99%
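The memory argument in the statement above can be made concrete with a back-of-the-envelope comparison. The sketch below is my own illustrative example, not code from the cited papers: a dense interpolation grid stores a number of function values that grows exponentially with the dimension, while the parameter count of a small fully connected network is fixed by its architecture.

```python
def grid_values(n_per_axis: int, n_dims: int) -> int:
    """Function values stored by a dense regular interpolation grid."""
    return n_per_axis ** n_dims

def mlp_params(n_inputs: int, hidden: list, n_outputs: int = 1) -> int:
    """Weights plus biases of a fully connected network (layer sizes are assumptions)."""
    sizes = [n_inputs] + hidden + [n_outputs]
    return sum(a * b + b for a, b in zip(sizes[:-1], sizes[1:]))

# 20 grid points per axis in 6 dimensions vs. a 2-hidden-layer MLP:
grid = grid_values(20, 6)       # 64,000,000 stored values
net = mlp_params(6, [64, 64])   # 4,673 parameters
```

With 20 points per axis in six dimensions the grid already holds 64 million values, while the (hypothetical) 6-64-64-1 network has under five thousand parameters; the gap widens rapidly with each added dimension.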
“…GANplification arises, intuitively, from the fact that neural networks work like classical parametric fits [4,5], and they are particularly effective when we want to interpolate in many dimensions. This feature is behind the success of the NNPDF parton densities [6] as the first mainstream ML-application in particle theory.…”

Section: Introduction (mentioning)
Confidence: 99%
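The "classical parametric fit" picture invoked in the statement above can be illustrated in miniature. This toy sketch is my own (not from the cited work): a network trained on squared error is, at heart, a choice of parametric family f(x; θ) whose parameters are tuned by gradient descent, here reduced to the simplest case of a line.

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit f(x) = a*x + b by gradient descent on mean squared error."""
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of (1/n) * sum((a*x + b - y)**2) w.r.t. a and b.
        grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # data generated from y = 2x + 1
a, b = fit_line(xs, ys)     # converges close to a = 2, b = 1
```

A neural network replaces the line with a richer family f(x; θ), but the training loop is the same minimization, which is why the interpolation behaviour of such fits carries over to many dimensions.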