Large-Scale Compute-Intensive Analysis Via A Combined In-Situ And Co-Scheduling Workflow Approach

SC15: The International Conference for High Performance Computing, Networking, Storage and Analysis, Austin, Texas, November 2015

Cited by 37 | Views 86

Abstract
Large-scale simulations can produce hundreds of terabytes to petabytes of data, complicating workflows and limiting their efficiency. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Emerging techniques instead perform the analysis in situ, using the same resources as the simulation, and/or off-load subsets of the data to a dedicated compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in-situ and co-scheduling approaches to handle petabyte-scale outputs. We compare analysis setups ranging from purely offline, to purely in-situ, to combined in-situ/co-scheduling. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm to simultaneously target a variety of GPU, multicore, and many-core architectures.
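The single-source portability claim can be illustrated with VTK-m's worklet model. The sketch below is not the authors' code: it uses the present-day VTK-m API (the paper's PISTON/VTK-m implementation predates this exact interface), and the ComputeSpeed worklet and velocity data are hypothetical stand-ins. The point it demonstrates is the one the abstract makes: one worklet definition is compiled for serial, multicore (TBB/OpenMP), and CUDA backends and dispatched to whichever device is available at runtime.

#include <vector>

#include <vtkm/VectorAnalysis.h>
#include <vtkm/cont/ArrayHandle.h>
#include <vtkm/cont/Invoker.h>
#include <vtkm/worklet/WorkletMapField.h>

// One worklet definition; VTK-m compiles this single source for every
// enabled device backend (serial, TBB/OpenMP multicore, CUDA GPU).
struct ComputeSpeed : vtkm::worklet::WorkletMapField
{
  using ControlSignature = void(FieldIn velocity, FieldOut speed);
  using ExecutionSignature = void(_1, _2);

  VTKM_EXEC void operator()(const vtkm::Vec3f& v, vtkm::FloatDefault& out) const
  {
    out = vtkm::Magnitude(v); // same code path on CPU and GPU
  }
};

int main()
{
  // Hypothetical stand-in for a slice of simulation output (particle velocities).
  std::vector<vtkm::Vec3f> velocities = { {1.f, 2.f, 2.f}, {3.f, 4.f, 0.f} };
  auto input = vtkm::cont::make_ArrayHandle(velocities, vtkm::CopyFlag::On);
  vtkm::cont::ArrayHandle<vtkm::FloatDefault> speeds;

  vtkm::cont::Invoker invoke; // selects an available device at runtime
  invoke(ComputeSpeed{}, input, speeds);
  return 0;
}

In an in-situ setting, the same worklet would be invoked on arrays aliasing the simulation's memory on the simulation's own nodes; in the co-scheduled setting, on data subsets shipped to a separate analysis allocation.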
Keywords
many-core architectures,multi-core architectures,GPU,PISTON/VTK-m framework,petabyte-scale output handling,cosmological N-body code,HACC,data off-loading subsets,in-situ analysis,post-processing analysis,file system storage,large-scale simulations,co-scheduling workflow,in-situ workflow,large-scale compute-intensive analysis