Dynamic voltage and frequency scaling (DVFS) is an important solution to balance performance and energy consumption, and hardware vendors provide management libraries that allow the programmer to change both memory and core frequencies. The possibility to manually set these frequencies is a great opportunity for application tuning, which can focus on the best application-dependent setting. However, this task is not straightforward because of the large set of possible configurations and because of the multi-objective nature of the problem, which minimizes energy consumption and maximizes performance. This paper proposes a method to predict the best core and memory frequency configurations on GPUs for an input OpenCL kernel. Our modeling approach, based on machine learning, first predicts speedup and normalized energy over the default frequency configuration. It then combines the two models into a multi-objective one that predicts a Pareto set of frequency configurations. The approach uses static code features, is built on a set of carefully designed microbenchmarks, and can predict the best frequency settings of a new kernel without executing it. Test results show that our modeling approach is very accurate at predicting extrema points and the Pareto set for ten out of twelve test benchmarks, and discovers frequency configurations that dominate the default configuration in either energy or performance.

CCS CONCEPTS: • Computer systems organization → Parallel architectures; • Hardware → Power and energy.
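As an illustration of the multi-objective step described above, the following minimal Python sketch selects a Pareto set from per-configuration predictions of speedup (to maximize) and normalized energy (to minimize). The configuration values and predicted numbers are made-up placeholders, not the paper's models or results.

```python
# Minimal sketch: selecting the Pareto set of frequency configurations from
# predicted speedup (higher is better) and normalized energy (lower is better).
# The data below is illustrative only; in the described approach these values
# would come from the two ML models applied to a kernel's static code features.

def dominates(a, b):
    """Config a dominates b if it is no worse in both objectives and
    strictly better in at least one."""
    return (a["speedup"] >= b["speedup"] and a["energy"] <= b["energy"]
            and (a["speedup"] > b["speedup"] or a["energy"] < b["energy"]))

def pareto_set(configs):
    """Return the non-dominated frequency configurations."""
    return [c for c in configs
            if not any(dominates(other, c) for other in configs if other is not c)]

if __name__ == "__main__":
    # Hypothetical (core MHz, memory MHz) points with predicted objectives.
    predictions = [
        {"core": 1000, "mem": 3500, "speedup": 1.00, "energy": 1.00},  # default
        {"core": 700,  "mem": 3500, "speedup": 0.85, "energy": 0.70},
        {"core": 1100, "mem": 3000, "speedup": 1.05, "energy": 0.95},
        {"core": 600,  "mem": 2500, "speedup": 0.60, "energy": 0.75},
    ]
    for c in pareto_set(predictions):
        print(c)
```

In this toy example the default configuration is dominated (one candidate is both faster and cheaper in energy), which is exactly the situation the abstract reports finding on real kernels.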
Energy optimization is an increasingly important aspect of today’s high-performance computing applications. In particular, dynamic voltage and frequency scaling (DVFS) has become a widely adopted solution to balance performance and energy consumption, and hardware vendors provide management libraries that allow the programmer to change both memory and core frequencies manually in order to minimize energy consumption while maximizing performance. This article focuses on modeling the energy consumption and speedup of GPU applications under different frequency configurations. The task is not straightforward because of the large set of possible and uniformly distributed configurations, and because of the multi-objective nature of the problem, which minimizes energy consumption and maximizes performance. This article proposes a machine learning-based method to predict the best core and memory frequency configurations on GPUs for an input OpenCL kernel. The method is based on two models that predict speedup and normalized energy over the default frequency configuration. These are later combined into a multi-objective approach that predicts a Pareto set of frequency configurations. Results show that our approach is very accurate at predicting extrema and the Pareto set, and finds frequency configurations that dominate the default configuration in either energy or performance.
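For context on the vendor management libraries mentioned in both abstracts, the sketch below shows how memory and core clocks can be queried and pinned on an NVIDIA GPU through NVML using the pynvml bindings. This is an illustrative assumption about tooling, not code from the article; the device index and the example clock values are placeholders and must match pairs the device actually reports as supported, and setting application clocks typically requires elevated privileges.

```python
# Illustrative sketch (not from the article): applying a (memory, core) frequency
# configuration on an NVIDIA GPU through NVML via the pynvml bindings.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0 is an assumption

# Enumerate the supported (memory clock, core clocks) pairs.
for mem_mhz in pynvml.nvmlDeviceGetSupportedMemoryClocks(handle):
    core_clocks = pynvml.nvmlDeviceGetSupportedGraphicsClocks(handle, mem_mhz)
    print(mem_mhz, core_clocks[:5], "...")

# Pin the application clocks to one configuration (example values in MHz).
pynvml.nvmlDeviceSetApplicationsClocks(handle, 3505, 1114)  # memory, core

# ... run and measure the kernel here ...

pynvml.nvmlDeviceResetApplicationsClocks(handle)  # restore the default setting
pynvml.nvmlShutdown()
```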
Dynamic frequency scaling is broadly available across modern computer architectures, making it possible to improve the performance and energy efficiency of an application by carefully setting the core frequency. However, while exhaustive tuning is feasible for simple single-kernel applications, in real-world applications composed of multiple tasks the set of possible frequency-setting combinations is too large to be evaluated exhaustively. This work deals with the problem of optimizing a multi-task GPU application with frequency scaling. We focus on different scalarizations of the problem by optimizing for performance, energy consumption, as well as energy-delay product (EDP) and energy-delay-squared product (ED²P). We propose FLEXDP, a new flexible framework that finds the optimal core-frequency configuration over multiple kernels, allowing multiple frequency changes between kernel executions and taking the overhead of those changes into account. The proposed approaches are evaluated on an NVIDIA Titan X. Experimental results on five applications demonstrate that FLEXDP outperforms the default and autoboost configurations with respect to performance, energy efficiency, EDP, and ED²P.

CCS CONCEPTS: • Computer systems organization → Parallel architectures; • Hardware → Power and energy.
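The scalarizations named in this abstract reduce the two objectives to a single score per frequency configuration: EDP = E·t and ED²P = E·t². The short sketch below, using made-up measurements rather than the paper's data, shows how each metric can select a different configuration as the winner.

```python
# Minimal sketch of the scalarizations named in the abstract: for a measured
# (time, energy) pair per frequency configuration, pick the configuration that
# minimizes energy, time, EDP = E * t, or ED2P = E * t^2.
# The measurements below are placeholders, not results from the paper.

METRICS = {
    "time":   lambda t, e: t,
    "energy": lambda t, e: e,
    "edp":    lambda t, e: e * t,
    "ed2p":   lambda t, e: e * t * t,
}

def best_config(measurements, metric):
    """measurements: {(core_mhz, mem_mhz): (time_s, energy_j)}"""
    score = METRICS[metric]
    return min(measurements, key=lambda cfg: score(*measurements[cfg]))

if __name__ == "__main__":
    measurements = {
        (1114, 3505): (1.00, 120.0),   # hypothetical default clocks
        (822,  3505): (1.25, 100.0),   # slower core, less energy
        (1392, 3505): (0.90, 140.0),   # faster core, more energy
    }
    for m in METRICS:
        print(m, "->", best_config(measurements, m))
```

With these placeholder numbers, pure time favors the highest core clock, pure energy favors the lowest, EDP favors the default, and ED²P favors the fast configuration, which is why the choice of scalarization matters.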