2013
DOI: 10.2172/1093707

Data co-processing for extreme scale analysis level II ASC milestone (4745).

Abstract: Exascale supercomputing will embody many revolutionary changes in the hardware and software of high-performance computing. A particularly pressing issue is gaining insight into the science behind the exascale computations. Power and I/O speed constraints will fundamentally change current visualization and analysis workflows. A traditional post-processing workflow involves storing simulation results to disk and later retrieving them for visualization and data analysis. However, at exascale, scientists and analy…
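To illustrate the workflow distinction the abstract draws, here is a minimal, hypothetical Python sketch (not code from the milestone report or any specific in situ library) contrasting post-processing, where the full field is written to disk every step, with co-processing, where a small analysis runs on the data while it is still in memory. The names `advance`, `analyze`, and `run`, and all sizes and step counts, are illustrative assumptions.

```python
# Toy sketch: post-processing vs. in situ co-processing (hypothetical code,
# not the milestone's actual implementation).

import json
import random


def advance(field, step):
    """Toy 'simulation': perturb every value a little each timestep."""
    return [v + random.uniform(-0.01, 0.01) * step for v in field]


def analyze(field):
    """In situ analysis: reduce the full field to a few statistics."""
    return {"min": min(field), "max": max(field), "mean": sum(field) / len(field)}


def run(steps=10, cells=100_000, in_situ=True):
    field = [random.random() for _ in range(cells)]
    summaries = []
    for step in range(steps):
        field = advance(field, step)
        if in_situ:
            # Co-processing: analyze while the data is still in memory; only
            # the small summary ever needs to touch the I/O system.
            summaries.append(analyze(field))
        else:
            # Traditional post-processing: dump the entire field each step and
            # analyze it later -- the pattern the report argues exascale I/O
            # rates cannot sustain.
            with open(f"step_{step:04d}.json", "w") as f:
                json.dump(field, f)
    return summaries


if __name__ == "__main__":
    print(run(steps=5, cells=10_000)[-1])
```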

Cited by 6 publications (5 citation statements). References 26 publications.
“…New practices associated with real-time experimentation and observational data are driving more complex data sharing between supercomputer simulations and experimental data produced by national science facilities such as accelerators, colliders, and light sources [7]. Projected limitations in the I/O system motivate integrating simulation and analysis, coupling complex physics codes, and developing fully integrated application workflows [9,29,37].…”
Section: Evolving Sharing Considerations
Mentioning confidence: 99%
“…In addition, the large-scale results are more interesting from a supercomputing perspective because they demonstrate capability not possible on smaller clusters. For a complete look at all the results, see our ASC milestone final report [34]. Figure 7 shows the total runtime of each workflow for the large data set.…”
Section: Total Execution Time
Mentioning confidence: 99%
“…The usage model for HPC machines is evolving from sequences of simulation and analysis tasks, communicating via long-term storage, toward a more dynamic, "compositional" approach, where applications consist of complex combinations of coupled codes, data services, and tools. Projected limitations in the I/O system motivate the integration of simulation and analysis, coupling of complex physics codes, and development of fully integrated application workflows [35,27,5]. Expected power constraints motivate the need to co-locate applications to avoid data movement wherever possible [27].…”
Section: Factors Influencing OS Design
Mentioning confidence: 99%
“…Projected limitations in the I/O system motivate the integration of simulation and analysis, coupling of complex physics codes, and development of fully integrated application workflows [35,27,5]. Expected power constraints motivate the need to co-locate applications to avoid data movement wherever possible [27]. New HPC use cases for streaming and graph analytics require features of the OS/R such as global addressing, massive multithreading, and event-based processing that are not well supported on traditional HPC systems [14].…”
Section: Factors Influencing OS Design
Mentioning confidence: 99%