2016
DOI: 10.1063/1.4952921
Real-time data-intensive computing

Cited by 11 publications (3 citation statements). References 12 publications.
“…However, we also see that such a framework results in beamline scientists and users not having complete control over computational resources which are subject to facility timetabling, downtimes, and regulations. Since real‐time feedback is crucial for some experiments, the inevitable computational queuing times from the use of the large computational resources can result in serious limitations for such users …”
Section: Scientific Community‐driven Big Data Approaches
confidence: 99%
“…In recent years, a vision of a “superfacility” has been proposed by the Advanced Light Source (ALS) and the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory (LBNL) in California, which is meant to help users focus only on the scientifically meaningful data from the ALS that could lead to new discoveries, rather than having to deal with an unstructured massive amount of raw data. Such an approach is achievable only by giving users real‐time simultaneous access to the experimental, computational, and algorithmic resources at synchrotrons.…”
Section: Introduction
confidence: 99%
“…The performance of transfers between storage and the wide-area network (WAN) has also been studied with increasing enthusiasm as networking technologies and methodologies [20] have advanced to the point where geographically distributed workflows and large, public datasets are now enabling new scientific discovery [2], [3], [21]. Globus, GridFTP-based transfer tools, and bbcp are ubiquitous high-performance data transfer tools used to this end [1]-[3], and characterizing the ways in which Globus and GridFTP are used in multisite workflows and distributed HPC environments is the subject of growing interest [12], [22], [23]. As with the efforts to characterize user interactions with archival storage systems, however, studies of wide-area data transfers have explored only the storage-WAN and WAN-storage components of data transfer.…”
Section: Introduction
confidence: 99%