Fused Multiply-Add (FMA) functional units constitute a fundamental hardware component for training Deep Neural Networks (DNNs). Their silicon area grows quadratically with the mantissa bit count of the number format, which has motivated the adoption of the BrainFloat16 format (BF16). BF16 features 1 sign bit, 8 exponent bits, and 7 explicit mantissa bits. Some approaches to train DNNs achieve significant performance benefits by using the BF16 format. However, these approaches must combine BF16 with the standard IEEE 754 32-bit Floating-Point format (FP32) to achieve state-of-the-art training accuracy, which limits the impact of adopting BF16. This paper proposes the first approach able to train complex DNNs entirely using the BF16 format. We propose a new class of FMA operators, FMA^{bf16}_{n,m}, that rely entirely on BF16 FMA hardware instructions and deliver the same accuracy as FP32. FMA^{bf16}_{n,m} operators achieve performance improvements within the 1.28-1.35x range on ResNet101 with respect to FP32. FMA^{bf16}_{n,m} enables training complex DNNs on simple low-end hardware devices without requiring expensive FP32 FMA functional units.
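The abstract does not spell out how FMA^{bf16}_{n,m} is constructed, but the general idea of recovering FP32-level accuracy from BF16-only multiplies can be illustrated with a multi-term decomposition: each FP32 input is split into a few BF16 values whose sum approximates it, and the cross products are accumulated. The sketch below is a minimal, hypothetical illustration under that assumption; the names `to_bf16`, `split_bf16`, and `fma_bf16_emulated`, the use of truncation, and the role of n and m are illustrative choices, not the paper's definition.

```python
import struct

def to_bf16(x: float) -> float:
    """Truncate an FP32 value to BF16 precision (keep sign, 8 exponent, 7 mantissa bits)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits & 0xFFFF0000))[0]

def split_bf16(x: float, terms: int = 2):
    """Decompose an FP32 value into `terms` BF16 values whose sum approximates it."""
    parts, residual = [], x
    for _ in range(terms):
        p = to_bf16(residual)
        parts.append(p)
        residual -= p
    return parts

def fma_bf16_emulated(a: float, b: float, c: float, n: int = 2, m: int = 2) -> float:
    """Approximate fma(a, b, c) using only BF16 x BF16 partial products.
    Each partial product would map to one hardware BF16 FMA instruction;
    Python's double arithmetic stands in for the unit's wide accumulator."""
    acc = c
    for ai in split_bf16(a, n):
        for bi in split_bf16(b, m):
            acc += ai * bi
    return acc

# Example: the multi-term variant recovers far more of the product than a single BF16 multiply.
a, b = 1.0009765625, 3.1415927
print(to_bf16(a) * to_bf16(b))        # single BF16 product
print(fma_bf16_emulated(a, b, 0.0))   # BF16-only multi-term emulation
print(a * b)                          # reference product
```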
Deep Neural Networks (DNNs) have become ubiquitous in a wide range of application domains. Despite their success, training DNNs is an expensive task that has motivated the use of reduced numerical precision formats to improve performance and reduce power consumption. Emulation techniques are a good fit for understanding the properties of new numerical formats on a particular workload. However, current state-of-the-art (SoA) techniques cannot perform these tasks quickly and accurately on a wide variety of workloads. We propose FASE, a Fast, Accurate, and Seamless Emulator that leverages dynamic binary translation to enable emulation of custom numerical formats. FASE is fast, allowing emulation of large unmodified workloads; accurate, emulating at the instruction operand level; and seamless, as it requires no code modifications and works on any application or DNN framework without any language, compiler, or source code access restrictions.
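FASE itself intercepts floating-point instructions through dynamic binary translation; the snippet below only illustrates, in plain Python, the per-operand rounding such a tool conceptually injects around each intercepted instruction. The function names and the round-to-nearest-even scheme are assumptions for illustration, not FASE's actual API or implementation.

```python
import struct

def round_operand_to_bf16(x: float) -> float:
    """Round an FP32 operand to BF16 using round-to-nearest-even on the top 16 bits.
    NaN and infinity handling is omitted for brevity."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)       # ties go to the even result
    bits = ((bits + rounding_bias) & 0xFFFFFFFF) & 0xFFFF0000
    return struct.unpack('<f', struct.pack('<I', bits))[0]

def emulated_mul(a: float, b: float) -> float:
    """What operand-level emulation conceptually does around one FP multiply:
    quantize every source operand to the target format, then run the original op.
    Depending on the unit being modeled, the result may be quantized as well."""
    return round_operand_to_bf16(a) * round_operand_to_bf16(b)

# Example: comparing native FP32-style multiplication with BF16-operand emulation.
print(emulated_mul(1.0009765625, 3.1415927))
print(1.0009765625 * 3.1415927)
```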