
Mixing Hadoop and HPC workloads on parallel filesystems

SC (2009)

Abstract
MapReduce-tailored distributed filesystems, such as HDFS for Hadoop MapReduce, and parallel high-performance computing filesystems are designed for considerably different workloads. The purpose of our work is to examine the performance of each filesystem when both sorts of workload run on it concurrently. We examine two workloads on two filesystems. For the HPC workload, we use the IOR checkpointing benchmark and the Parallel Virtual File System, Version 2 (PVFS); for Hadoop, we use an HTTP attack classifier and the CloudStore filesystem. We analyze the performance of each filesystem when it concurrently runs its "native" workload as well as the non-native workload.
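The measurement idea is to launch a filesystem's native workload and the non-native workload at the same time and compare against each workload's standalone performance. The sketch below is a minimal illustration of that kind of mixed-workload driver, not the paper's actual harness; the MPI process count, IOR parameters, PVFS mount path, and the Hadoop jar, class, and input/output paths are assumptions made for illustration only.

# Minimal sketch of a mixed-workload driver: launch an IOR checkpointing run (HPC
# workload over MPI) and a Hadoop MapReduce job concurrently, then report wall-clock
# times. Paths, jar/class names, and parameters below are illustrative assumptions.
import concurrent.futures
import subprocess
import time

def timed_run(cmd):
    """Run a command and return (exit code, elapsed seconds)."""
    start = time.monotonic()
    proc = subprocess.run(cmd)
    return proc.returncode, time.monotonic() - start

# HPC workload: IOR write + read phases against a PVFS-mounted path (path assumed).
ior_cmd = [
    "mpirun", "-np", "16",
    "ior", "-w", "-r",            # write phase, then read phase
    "-b", "256m", "-t", "4m",     # per-process block size and transfer size
    "-o", "/mnt/pvfs2/ior_testfile",
]

# Hadoop workload: a MapReduce job; the classifier jar, class, and paths are hypothetical.
hadoop_cmd = [
    "hadoop", "jar", "http-classifier.jar", "HttpAttackClassifier",
    "/input/http_logs", "/output/classified",
]

if __name__ == "__main__":
    # Start both workloads at the same time to expose interference on the shared filesystem.
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = {
            "IOR (HPC)": pool.submit(timed_run, ior_cmd),
            "Hadoop": pool.submit(timed_run, hadoop_cmd),
        }
        for name, fut in futures.items():
            rc, elapsed = fut.result()
            print(f"{name}: exit={rc} elapsed={elapsed:.1f}s")

Comparing these concurrent timings with runs of each workload alone gives a simple measure of how much each filesystem penalizes the non-native workload.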
Key words
parallel filesystems, parallel virtual file system, CloudStore filesystem, different workloads, workload run, non-native workload, parallel high-performance computing filesystems, HPC workloads, attack classifier, IOR checkpointing benchmark, HPC workload, mixing Hadoop, Hadoop MapReduce