2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla51294.2020.00017
Evaluating Mixed-Precision Arithmetic for 3D Generative Adversarial Networks to Simulate High Energy Physics Detectors

Abstract: Several hardware companies are proposing native Brain Float 16-bit (BF16) support for neural network training. The usage of Mixed Precision (MP) arithmetic with floating-point 32-bit (FP32) and 16-bit half-precision aims at improving memory and floating-point operations throughput, allowing faster training of bigger models. This paper proposes a binary analysis tool enabling the emulation of lower precision numerical formats in Neural Network implementation without the need for hardware support. This tool is u…
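As an illustration of the emulation idea described in the abstract (not the paper's binary analysis tool, which instruments instructions at runtime), a minimal sketch of how a BF16 value can be emulated in software by rounding an FP32 value down to BF16's 7 mantissa bits, assuming NumPy is available:

```python
import numpy as np

def emulate_bf16(x: np.ndarray) -> np.ndarray:
    """Round an FP32 array to the nearest BF16 value, returned as FP32.

    BF16 keeps FP32's 8 exponent bits but only 7 mantissa bits, so the
    emulation clears the low 16 bits of each FP32 word, approximating
    round-to-nearest-even by adding a rounding bias first.
    """
    x = np.asarray(x, dtype=np.float32)
    bits = x.view(np.uint32)
    # Bias of 0x7FFF plus the LSB of the upper half gives round-to-nearest-even.
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)
    rounded = (bits + rounding_bias) & np.uint32(0xFFFF0000)
    return rounded.view(np.float32)

# The emulated values carry only about 3 significant decimal digits.
print(emulate_bf16(np.array([3.14159265, 1e-3, 12345.678], dtype=np.float32)))
```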

Cited by 2 publications (1 citation statement)
References 26 publications
“…Using this tool we show that FMA instructions are responsible for a significant chunk of the total computational workload when training well-known DNN models, as well as our 3DGAN use case, for which FMAs account for 48.80% of the total instruction count. By training the 3DGAN network for 60 epochs using a representative dataset, we have been able to show that MP training employing the BF16 numerical format is able to deliver the same level of accuracy as higher-precision approaches implemented using FP32 [71].…”
Section: Discussion (mentioning)
confidence: 98%
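The cited statement refers to mixed-precision training with BF16 matching FP32 accuracy. For orientation only, a minimal sketch of BF16 mixed-precision training using PyTorch's autocast API; this is an illustrative assumption, since the paper's experiments rely on its binary-analysis emulation tool rather than this API, and the toy model below stands in for the 3DGAN network:

```python
import torch
from torch import nn

# Toy regression model and data; the cited work trains a 3D GAN on detector data.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(256, 64)
y = torch.randn(256, 1)

for step in range(10):
    optimizer.zero_grad()
    # Forward pass runs eligible ops in BF16; master weights stay in FP32.
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        loss = loss_fn(model(x), y)
    # BF16 keeps FP32's exponent range, so no loss scaling is needed (unlike FP16).
    loss.backward()
    optimizer.step()
```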