Multi Objective Prioritized Workflow Scheduling Using Deep Reinforcement Based Learning in Cloud Computing

IEEE Access (2024)

Abstract
Workflow scheduling is a major challenge in the cloud paradigm, as large numbers of workflows are generated dynamically from heterogeneous resources and the task dependencies in each workflow differ. If a workflow with many dependencies is scheduled onto an inappropriate virtual machine (VM), i.e. one with low processing capacity, workflow execution is delayed, which increases makespan, cost, and energy consumption. To effectively schedule complex workflows, i.e. those with many task dependencies, we propose a novel multi-objective workflow scheduling algorithm using deep reinforcement learning. First, the priorities of all workflows are calculated based on their dependencies, and the priorities of VMs are calculated based on the electricity cost at their datacenters, so that workflows can be mapped onto suitable VMs. These priorities are fed to a scheduler that uses a Deep Q-Network (DQN) model to dynamically schedule tasks by considering the priorities of both tasks and VMs. Extensive simulations were carried out on WorkflowSim using real-world scientific workflows (Montage, CyberShake, Epigenomics, LIGO). Our proposed MOPWSDRL is compared against existing state-of-the-art approaches, i.e. Heterogeneous Earliest First Deadline, Cat Swarm Optimization, and Ant Colony Optimization. The results reveal that MOPWSDRL outperforms these state-of-the-art algorithms by minimizing makespan and energy consumption.
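
To make the scheduling idea concrete, the sketch below illustrates the general approach described in the abstract: tasks are prioritized by their dependency count, VMs by the electricity cost of their datacenter, and an epsilon-greedy DQN selects a VM for each task. This is a minimal illustration under assumptions, not the paper's implementation; all class names, network sizes, cost values, and the toy reward are hypothetical.

```python
# Minimal sketch of DQN-based mapping of prioritized tasks onto prioritized VMs.
# All names and numbers below are illustrative, not taken from the paper.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Maps a (task priority, VM priorities) state to Q-values over VMs."""

    def __init__(self, state_dim: int, num_vms: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_vms),
        )

    def forward(self, x):
        return self.net(x)


def task_priority(dependencies: int, max_deps: int) -> float:
    # More dependencies -> higher task priority (normalized to [0, 1]).
    return dependencies / max(max_deps, 1)


def vm_priority(electricity_cost: float, max_cost: float) -> float:
    # Cheaper datacenter electricity -> higher VM priority.
    return 1.0 - electricity_cost / max(max_cost, 1e-9)


def select_vm(qnet: QNetwork, state: torch.Tensor, num_vms: int, epsilon: float = 0.1) -> int:
    # Epsilon-greedy action selection over candidate VMs.
    if random.random() < epsilon:
        return random.randrange(num_vms)
    with torch.no_grad():
        return int(qnet(state).argmax().item())


if __name__ == "__main__":
    num_vms = 4
    vm_costs = [0.12, 0.08, 0.15, 0.10]      # electricity cost per datacenter (made up)
    vm_prios = [vm_priority(c, max(vm_costs)) for c in vm_costs]

    # State = priority of the task being scheduled + priorities of all VMs.
    qnet = QNetwork(state_dim=1 + num_vms, num_vms=num_vms)
    optimizer = optim.Adam(qnet.parameters(), lr=1e-3)
    replay = deque(maxlen=1000)

    task_deps = [5, 1, 3, 8]                 # dependency counts per task (made up)
    for deps in task_deps:
        state = torch.tensor([task_priority(deps, max(task_deps))] + vm_prios)
        action = select_vm(qnet, state, num_vms)
        # In a real setup the reward would come from the simulator
        # (e.g. negative makespan/energy); here it is a toy placeholder.
        reward = state[0].item() * vm_prios[action]
        replay.append((state, action, reward))

    # One toy gradient step on the stored transitions (no target network here).
    states = torch.stack([s for s, _, _ in replay])
    actions = torch.tensor([a for _, a, _ in replay])
    rewards = torch.tensor([r for _, _, r in replay])
    q_taken = qnet(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_taken, rewards)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this sketch the state deliberately contains only the two priority signals named in the abstract; a full scheduler would also need VM availability and task runtime estimates, plus a replay-driven training loop inside the WorkflowSim environment.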
Keywords
Deep reinforcement learning, cloud computing, workflow scheduling, task dependencies, makespan, energy consumption