2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
DOI: 10.1109/ipdpsw.2019.00081
Learning Everywhere: Pervasive Machine Learning for Effective High-Performance Computation

Abstract: The convergence of HPC and data-intensive methodologies provides a promising approach to major performance improvements. This paper provides a general description of the interaction between traditional HPC and ML approaches and motivates the "Learning Everywhere" paradigm for HPC. We introduce the concept of "effective performance" that one can achieve by combining learning methodologies with simulation-based approaches, and distinguish it from traditional performance as measured by benchmark scores. To support…

Cited by 43 publications (35 citation statements)
References 26 publications
“…After the training (Stage 2 of the workflow), we were able to sample the folded states with less than 6 µs of aggregate sampling. Without any ML, the aggregate sampling required to fold Fs-peptide was 14 µs, which implies that the effective performance [33] gain in sampling using ML-based approaches is about 2.33 (14 µs to 6 µs). Individual simulations in the ML-driven workflow were only 0.1 µs in length, as opposed to 0.5 µs in traditional (non-ML) sampling, indicating that by culling unproductive trajectories we can sample the native state of Fs-peptide.…”
Section: Discussion (mentioning; confidence: 99%)
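The "effective performance" gain quoted above is simply the ratio of the aggregate sampling required without and with ML; as a quick check of the arithmetic reported in the quote:

\[ S_{\text{eff}} = \frac{T_{\text{non-ML}}}{T_{\text{ML}}} = \frac{14\,\mu\text{s}}{6\,\mu\text{s}} \approx 2.33 \]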
“…Between these two ends of the spectrum lies the motif of Fig. 1, where DL models and methods can be used to guide individual simulations either by determining optimal parameters of exploration or by intelligently determining regions of phase space to sample, i.e., enhanced sampling. Needless to say, these three levels are not mutually exclusive and can operate concurrently and collectively to enhance global computational efficiency, giving rise to the concept of Learning Everywhere [32], [33] to enhance computational impact. Although this work investigates and focuses on the computational motif in Fig.…”
Section: Discussion (mentioning; confidence: 99%)
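To make the middle level of that spectrum concrete, the following is a minimal, hypothetical Python sketch of ML-guided sampling with a scikit-learn surrogate: a learned model scores candidate restart states so that unproductive trajectories can be culled. The callables run_short_simulation, featurize, and progress_metric, and the final_state attribute, are illustrative assumptions, not an API from the cited papers.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def ml_guided_sampling(initial_states, n_rounds,
                       run_short_simulation, featurize, progress_metric):
    # Surrogate model that predicts sampling "progress" from a state's features.
    model = RandomForestRegressor(n_estimators=100)
    X, y = [], []
    states = list(initial_states)
    for _ in range(n_rounds):
        # Run many short simulation segments instead of a few long trajectories.
        trajectories = [run_short_simulation(s) for s in states]
        for state, traj in zip(states, trajectories):
            X.append(featurize(state))
            y.append(progress_metric(traj))  # e.g. fraction of native contacts
        model.fit(np.asarray(X), np.asarray(y))
        # Cull unproductive trajectories: restart only from the end states
        # that the surrogate ranks highest.
        candidates = [traj.final_state for traj in trajectories]
        scores = model.predict(np.asarray([featurize(c) for c in candidates]))
        keep = np.argsort(scores)[-max(1, len(candidates) // 2):]
        states = [candidates[i] for i in keep]
    return states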
“…Increasingly, statistical methods are used to understand performance and to make predictions, e.g., for resource (re)configuration decisions [22]. Kremer-Herman et al. [23] propose a model for recommending the optimal infrastructure configuration for master/worker applications.…”
Section: B. Streaming Performance and Modeling (mentioning; confidence: 99%)
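A toy sketch of the kind of statistical performance model this quote alludes to: fit a regression on previously observed configurations and use its predictions to pick a (re)configuration. The features, numbers, and throughput target below are invented for illustration and are not taken from [22] or [23].

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical history: (worker count, input rate) -> observed throughput (events/s).
observed_configs = np.array([[2, 100], [4, 100], [4, 200], [8, 200], [8, 400]])
observed_throughput = np.array([180.0, 350.0, 330.0, 640.0, 600.0])
model = LinearRegression().fit(observed_configs, observed_throughput)

# Score candidate reconfigurations and recommend the smallest worker count
# predicted to sustain a target of 500 events/s.
candidates = np.array([[4, 400], [6, 400], [8, 400], [12, 400]])
predictions = model.predict(candidates)
feasible = [int(c[0]) for c, p in zip(candidates, predictions) if p >= 500]
recommended = min(feasible) if feasible else None
print("recommended workers:", recommended)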
“…This taxonomy of research at the intersection of Machine Learning and Simulations builds on the papers below. 1) A quadrology of papers on learning everywhere [1]–[4].…”
Section: Introduction (mentioning; confidence: 99%)
“…There are also presentations at BDEC [5] and at IPDPS [6]. 2) Jeffrey Dean's presentation at NeurIPS 2017 on machine learning for systems and systems for machine learning [7]. 3) Microsoft 2018 Faculty Summit presentations on AI for Systems [8], [9]. 4) Satoshi Matsuoka on the convergence of AI and HPC [10]. 5) An NSF-funded project mainly focused on HPCforML [11], [12]. We now describe the categories used below to categorize papers [1]–[3], [5], [13]:
• HPCforML: Using HPC to execute and enhance ML performance, or using HPC simulations to train ML algorithms (theory-guided machine learning), which are then used to understand experimental data or simulations.
• MLforHPC: Using ML to enhance HPC applications and systems…”
Section: Introduction (mentioning; confidence: 99%)