BIG Cache Abstraction for Cache Networks

IEEE 37th International Conference on Distributed Computing Systems (ICDCS), 2017

Cited by 13 | Views 20
Abstract
In this paper, we advocate the notion of "BIG" cache as an innovative abstraction for effectively utilizing the distributed storage and processing capacities of all servers in a cache network. The "BIG" cache abstraction is proposed to partly address the problem of (cascade) thrashing in a hierarchical network of cache servers, where cache resources at intermediate servers are known to be poorly utilized, especially under classical cache replacement policies such as LRU. We lay out the advantages of the "BIG" cache abstraction and make a strong case for it both from a theoretical standpoint and through simulation analysis. We also develop the dCLIMB cache algorithm to minimize the overheads of moving objects across distributed cache boundaries, and present a simple yet effective heuristic for the cache allotment problem in the design of the "BIG" cache abstraction.
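For context, dCLIMB is presumably a distributed variant of the classical CLIMB replacement policy, in which an object moves one position toward the top of the cache on each hit and a miss evicts only the bottom slot. The abstract does not give dCLIMB's details, so the sketch below shows only plain single-cache CLIMB as an illustration; the class name and interface are hypothetical.

```python
class ClimbCache:
    """Minimal sketch of the classical CLIMB replacement policy
    (illustrative only; the paper's dCLIMB, which spans distributed
    cache boundaries, is not specified in this abstract)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = []  # index 0 is the "top" of the cache

    def access(self, obj):
        """Return True on a hit, False on a miss."""
        if obj in self.slots:
            i = self.slots.index(obj)
            if i > 0:
                # On a hit, the object climbs one position toward the top.
                self.slots[i - 1], self.slots[i] = self.slots[i], self.slots[i - 1]
            return True
        if len(self.slots) < self.capacity:
            self.slots.append(obj)       # fill an empty slot
        else:
            self.slots[-1] = obj         # on a miss, replace only the bottom item
        return False
```

Because only one swap or one bottom-slot replacement happens per access, CLIMB-style policies move objects far less than LRU, which is consistent with the abstract's goal of minimizing object movement across cache boundaries.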
Keywords
Caching, Hierarchical Caching, Content Network Distribution, Cache Replacement Policies, BIG Cache, dCLIMB