2021
DOI: 10.1007/s41781-020-00048-6

GeantV

Abstract: Full detector simulation was among the largest CPU consumers in all CERN experiment software stacks for the first two runs of the Large Hadron Collider. In the early 2010s, it was projected that simulation demands would scale linearly with increasing luminosity, with only partial compensation from increasing computing resources. The extension of fast simulation approaches to cover more use cases that represent a larger fraction of the simulation budget is only part of the solution, because of intrinsic precisi…

Cited by 5 publications (3 citation statements)
References 34 publications
“…Successive upgrades to adapt to new computing paradigms such as object-oriented or parallel design have not touched the main modelling concepts described above, which served their purpose for decades of CPU evolution but are quickly becoming a limiting factor for computing hardware with acceleration. Recent R&D studies [27,28] have shown that today's state-of-the-art Geant-derived geometry codes such as VecGeom [29] represent a bottleneck for vectorized or massively parallel workflows. Deep polymorphic code stacks, low branch predictability, and incoherent memory access are some of the most important reasons for performance degradation when instruction execution coherence is a hardware constraint.…”
Section: Geometry Description and Navigation
confidence: 99%
“…Moreover, new technologies [41][42][43] will allow detectors to sample particle showers with a high time resolution of the order of tens of picoseconds, which will need to be matched in simulation. Consequently, the simulation community has launched an ambitious R&D effort to upgrade physics models to improve accuracy and speed, re-implementing them from the ground up when necessary (e.g., GeantV [27], AdePT [28], Celeritas [44]). Special attention will be needed to extend accurate physics simulation to the O(100) TeV domain, including new processes and models required to support the future collider programs.…”
Section: Future Needs
confidence: 99%