Enabling discovery data science through cross-facility workflows

IEEE BigData (2021)

Abstract
Experimental and observational instruments for scientific research (such as light sources, genome sequencers, accelerators, telescopes, and electron microscopes) increasingly require High Performance Computing (HPC) scale capabilities for data analysis and workflow processing. Next-generation instruments are being deployed with higher resolutions and faster data capture rates, creating a big data crunch that cannot be handled by modest institutional computing resources. These big data analysis pipelines often also require near-real-time computing and have higher resilience requirements than the simulation and modeling workloads more traditionally seen at HPC centers. While some facilities have enabled workflows to run at a single HPC facility, there is a growing need to integrate capabilities across HPC facilities to enable cross-facility workflows, whether to provide resilience to an experiment, to increase analysis throughput, or to better match a workflow to a particular architecture. In this paper we describe the barriers to executing complex data analysis workflows across HPC facilities and propose an architectural design pattern for enabling scientific discovery using cross-facility workflows that includes orchestration services, application programming interfaces (APIs), data access, and co-scheduling.
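
As a concrete illustration of the cross-facility resilience idea named in the abstract, the minimal Python sketch below submits an analysis job to an ordered list of facilities and fails over when one is unavailable. This is only a sketch of the pattern, not the paper's implementation: the facility names are illustrative and submit_analysis is a hypothetical stand-in for a real facility job-submission API.

from dataclasses import dataclass

@dataclass
class Facility:
    # A compute facility endpoint; the names used below are illustrative only.
    name: str
    available: bool = True

def submit_analysis(facility: Facility, dataset: str) -> str:
    # Hypothetical stand-in for a call to a facility's job-submission API.
    if not facility.available:
        raise RuntimeError(f"{facility.name} is unavailable")
    return f"{facility.name}-job-{abs(hash(dataset)) % 10000}"

def orchestrate(dataset: str, facilities: list) -> str:
    # Try each facility in order, failing over to the next for resilience.
    for facility in facilities:
        try:
            return submit_analysis(facility, dataset)
        except RuntimeError:
            continue  # facility down: try the next one
    raise RuntimeError("no facility could accept the job")

if __name__ == "__main__":
    sites = [Facility("FacilityA", available=False),  # simulated outage
             Facility("FacilityB"),
             Facility("FacilityC")]
    print(orchestrate("beamline-scan-042", sites))

A real cross-facility orchestration service would layer authentication, data movement, and co-scheduling on top of such a failover loop, as the proposed design pattern outlines.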
Keywords
cross-facility workflows, workflow portability, orchestration platforms, infrastructure, data analysis, containers, big data science