Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC, called a Tensor Processing Unit (TPU), deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X-30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X-80X higher. Moreover, using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.
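The quoted 92 TOPS peak follows directly from the datapath dimensions: 65,536 MACs arranged as a 256x256 systolic array, two operations per MAC per cycle, and a 700 MHz clock. The array geometry and clock are figures from the paper body rather than this abstract; a minimal sketch of the arithmetic:

```python
# Peak-throughput arithmetic for the TPU's matrix multiply unit.
# The MAC count and 92 TOPS figure appear in the abstract; the 256x256
# array geometry and 700 MHz clock are reported in the paper body.

macs = 256 * 256        # 65,536 8-bit multiply-accumulate units
ops_per_mac = 2         # each MAC contributes a multiply and an add per cycle
clock_hz = 700e6        # TPU clock frequency (700 MHz)

peak_tops = macs * ops_per_mac * clock_hz / 1e12
print(f"peak throughput: {peak_tops:.1f} TOPS")  # ~91.8, quoted as 92 TOPS
```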
An analytic method to evaluate frequency-dependent losses in on-chip DC-DC buck converters is presented in this paper. Microprocessors and chipsets exhibit a wide dynamic range of load current, varying from 50 mA up to 1.5 A per phase at full operation. Peak efficiency is shown to occur when the load-current-related losses and the inherent losses of the DC-DC converter are equal. Efficiency optimization methods are described for light and heavy load scenarios. The primary design objective is to maintain the load at the peak of the efficiency curve. A SPICE-based circuit model of a DC-DC converter is applied to validate the proposed analytic methods.
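The equal-losses condition at peak efficiency follows from a standard loss decomposition: write the total loss as a fixed (inherent) term plus a load-current-dependent conduction term and differentiate the efficiency with respect to load current. A minimal sketch under that quadratic loss model; all coefficient values below are illustrative assumptions, not numbers from the paper:

```python
# Sketch: buck converter efficiency under a simple loss model
#   P_loss(I) = P_fix + k * I**2
# where P_fix lumps the load-independent (inherent) losses and k*I**2
# the load-current-related conduction losses. Differentiating
#   eta(I) = V*I / (V*I + P_fix + k*I**2)
# and setting d(eta)/dI = 0 gives k*I**2 = P_fix: efficiency peaks where
# the two loss terms are equal. Coefficients are illustrative only.

import numpy as np

V = 1.0          # output voltage (V), illustrative
P_fix = 0.05     # inherent, load-independent losses (W), illustrative
k = 0.1          # conduction-loss coefficient (W/A^2), illustrative

I = np.linspace(0.05, 1.5, 500)            # load sweep: 50 mA .. 1.5 A
eta = V * I / (V * I + P_fix + k * I**2)

I_peak = I[np.argmax(eta)]
print(f"peak efficiency at I = {I_peak:.3f} A")
print(f"analytic optimum sqrt(P_fix/k) = {np.sqrt(P_fix / k):.3f} A")
```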
An analytic method to evaluate frequency-dependent losses in on-chip DC-DC buck converters is presented in this paper. These converters feature high switching losses caused by the skin effect in the package inductors. The frequency-dependent air-core inductor model is shown to be critical in determining the optimal switching frequency. A general RLC parameter extraction method is also described. Considering the skin effect results in a 15% reduction in overall DC-DC converter losses. The proposed approach focuses on decreasing ripple-current-related losses, which are dominant in the target operating condition. Consequently, to obtain optimal efficiency, the switching frequency increases as the load current decreases. An intuitive explanation for this surprising result is that switching losses rise linearly with frequency whereas ripple losses decrease as f^-n with n > 1. A SPICE-based circuit model of a DC-DC converter is applied to validate the proposed analytic method.

Keywords: on-chip DC-DC; frequency-dependent losses; buck converter efficiency; air-core inductor
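The inverse load-frequency trend can be reproduced by minimizing the two scaling laws the abstract gives: switching losses linear in frequency, ripple losses falling as f^-n with n > 1. One extra, hedged assumption completes the argument: hard-switching loss also scales with load current, so its coefficient shrinks at light load and the optimum shifts to higher frequency. All numbers in the sketch are illustrative, not values from the paper:

```python
# Sketch: optimal switching frequency under the loss scaling in the abstract,
#   P_sw(f)     ~ a * f         switching losses rise linearly with f
#                               (assumption: a scales with load current,
#                                as for hard-switching V-I overlap loss)
#   P_ripple(f) ~ b * f**(-n)   ripple losses fall as f^-n, n > 1, and are
#                               roughly independent of load
# Setting d/df (a*f + b*f**(-n)) = 0 gives f_opt = (n*b/a)**(1/(n+1)),
# which grows as the load current (and hence a) shrinks. The choice
# n = 1.5 reflects ripple current ~ 1/f with skin-effect resistance
# ~ sqrt(f); all coefficients are illustrative assumptions.

def optimal_frequency(i_load, a0=2e-9, b=5e9, n=1.5):
    """Frequency minimizing a0*i_load*f + b*f**(-n)."""
    a = a0 * i_load
    return (n * b / a) ** (1.0 / (n + 1.0))

for i_load in (1.5, 0.5, 0.05):   # amps: full load down to light load
    print(f"I = {i_load:4.2f} A -> f_opt = {optimal_frequency(i_load)/1e6:6.1f} MHz")
```

Running the sweep shows f_opt rising monotonically as the load current drops, matching the trend the abstract calls surprising.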