CAPI-Flash Accelerated Persistent Read Cache for Apache Cassandra

2018 IEEE 11th International Conference on Cloud Computing (CLOUD)(2018)

Abstract
In real-world NoSQL deployments, users have to trade off CPU, memory, I/O bandwidth, and storage space to achieve the required performance and efficiency goals. Data compression is a vital component for improving storage space efficiency, but reading compressed data increases response time. Therefore, compressed data stores rely heavily on in-memory caching to speed up read operations. However, because large DRAM capacity is expensive, NoSQL databases have become costly to deploy and hard to scale. In our work, we present a persistent caching mechanism for Apache Cassandra on a high-throughput, low-latency FPGA-based NVMe Flash accelerator (CAPI-Flash), replacing Cassandra's in-memory cache. Because flash is dramatically less expensive per byte than DRAM, our caching mechanism provides Apache Cassandra with access to a large caching layer at lower cost. The experimental results show that for read-intensive workloads, our caching layer provides up to 85% higher throughput and reduces CPU usage by 25% compared to default Cassandra.
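The core idea, a read-through cache in front of the compressed on-disk store, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the flash device is simulated with an in-memory map, and the names (`FlashReadCache`, `backingRead`) are hypothetical. The point is the read path: a miss pays the decompressing SSTable read once, and subsequent reads are served from the (in reality, flash-resident) cache.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of a read-through persistent cache layer standing in
// for a CAPI-Flash-backed cache. The flash device is simulated with a
// HashMap; all names here are illustrative, not from the paper or from
// Cassandra's code base.
public class FlashReadCache {
    private final Map<String, byte[]> flash = new HashMap<>(); // stands in for the NVMe flash device
    private final Function<String, byte[]> backingRead;        // e.g. a decompressing SSTable read
    private long hits, misses;

    public FlashReadCache(Function<String, byte[]> backingRead) {
        this.backingRead = backingRead;
    }

    // Read path: serve from flash if present; otherwise read (and
    // decompress) from the backing store once and cache the result, so
    // later reads skip the expensive decompression entirely.
    public byte[] get(String key) {
        byte[] v = flash.get(key);
        if (v != null) { hits++; return v; }
        misses++;
        v = backingRead.apply(key);
        if (v != null) flash.put(key, v);
        return v;
    }

    public long hits()   { return hits; }
    public long misses() { return misses; }

    public static void main(String[] args) {
        FlashReadCache cache = new FlashReadCache(k -> ("row:" + k).getBytes());
        cache.get("a");   // miss: reads from the backing store, populates the cache
        cache.get("a");   // hit: served from the cache, no backing read
        System.out.println("hits=" + cache.hits() + " misses=" + cache.misses());
    }
}
```

Because flash is far cheaper per byte than DRAM, such a cache can be provisioned much larger than a row cache held in the JVM heap, which is where the paper's throughput and CPU savings come from.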
Keywords
Apache Cassandra, NoSQL Databases, CAPI-Flash, Persistent Caching, Flash Storage, NVMe Flash, Power Systems, Cloud Computing, Distributed Databases, Read Cache, High-Throughput Caching, Storage Accelerators, High-Performance Caching, CAPI, Solid State Drives, High Performance Computing