
A Federated Deep Reinforcement Learning-based Low-power Caching Strategy for Cloud-edge Collaboration

Journal of Grid Computing (2024)

Abstract
In the era of ubiquitous network devices, the exponential increase in content requests from user equipment (UE) calls for optimized caching strategies within cloud-edge integration. Such strategies are critical to handling large volumes of requests. To enhance caching efficiency, federated deep reinforcement learning (FDRL) is widely used to adjust caching policies. Nonetheless, to remain adaptive in dynamic scenarios, FDRL generally demands extended, online deep training, incurring a notable energy overhead compared with rule-based approaches. To strike a balance between caching efficiency and training energy expenditure, we integrate a content request latency model, a deep reinforcement learning model based on Markov decision processes (MDPs), and a two-stage training energy consumption model. Together, these components define a new average delay and training energy gain (ADTEG) challenge. To address this challenge, we put forth an innovative dynamic federated optimization strategy. It refines the pre-training phase through cluster-based strategies and parameter transfer, and improves the online training phase through a dynamic federated framework and an adaptive local iteration count. The experimental findings affirm that our proposed methodology reduces the training energy outlay while maintaining caching efficacy.
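To make the "dynamic federated framework with an adaptive local iteration count" idea concrete, the following is a minimal illustrative sketch, not the paper's published algorithm: each edge caching agent runs a bounded, adaptive number of local DRL-style updates (stopping early once its parameters stop changing meaningfully) before the cloud aggregates the models, so training energy is only spent where it still improves the policy. All names and constants here (local_update, ENERGY_PER_STEP, TOL, the toy gradient) are assumptions introduced for this example.

```python
# Hedged sketch of a cloud-edge federated round with adaptive local iterations.
# This is an assumption-based illustration, not the ADTEG method from the paper.
import numpy as np

ENERGY_PER_STEP = 0.05        # assumed energy cost of one local training step (J)
MIN_STEPS, MAX_STEPS = 1, 10  # bounds on the adaptive local iteration count
TOL = 2.5e-2                  # assumed convergence threshold for early stopping

def local_update(w: np.ndarray, rng: np.random.Generator) -> tuple[np.ndarray, float]:
    """Stand-in for one DRL update of an edge node's caching policy.
    Returns the updated weights and the magnitude of the change."""
    grad = rng.normal(scale=0.01, size=w.shape)   # placeholder for a policy gradient
    return w - grad, float(np.linalg.norm(grad))

def federated_round(global_w: np.ndarray, num_edges: int,
                    rng: np.random.Generator) -> tuple[np.ndarray, float]:
    """One cloud-edge communication round with adaptive local iteration counts."""
    local_models, energy = [], 0.0
    for _ in range(num_edges):
        w, delta, steps = global_w.copy(), float("inf"), 0
        # Keep training locally while the policy is still moving, up to MAX_STEPS.
        while steps < MAX_STEPS and (steps < MIN_STEPS or delta > TOL):
            w, delta = local_update(w, rng)
            steps += 1
            energy += ENERGY_PER_STEP
        local_models.append(w)
    # Cloud aggregation: unweighted FedAvg over the edge models.
    return np.asarray(local_models).mean(axis=0), energy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = np.zeros(8)                               # toy policy parameter vector
    for rnd in range(3):
        w, e = federated_round(w, num_edges=4, rng=rng)
        print(f"round {rnd}: estimated training energy {e:.2f} J")
```

In this sketch the energy accounting is just a per-step constant; in the paper's setting it would instead come from the two-stage training energy consumption model that feeds the ADTEG objective.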
Keywords
Collaborative caching strategy, Delay and training energy gain, Dynamic federated optimization mechanism, Federated deep reinforcement learning, Training energy