Deep reinforcement learning based resource allocation in edge-cloud gaming

Multimedia Tools and Applications (2024)

Abstract
Cloud gaming is a form of online gaming in which games are hosted on remote servers while players stream and play them over the internet. Since every game interaction must complete a round trip between the player and the rendering server, meeting latency requirements is crucial for gameplay quality. To mitigate the latency issue, edge servers can be employed as rendering servers, allowing players to connect to a nearby edge server when no cloud server satisfies their latency constraint. Moreover, by dividing the rendering workload of a game scene into foreground and background tasks, edge servers can render the foreground while cloud servers render the background, further reducing latency. However, resource allocation in this edge-cloud gaming scenario is challenging due to its high complexity and the dynamic nature of player arrivals and departures. This paper presents an edge-cloud architecture and a resource allocation algorithm based on Deep Reinforcement Learning (DRL) that minimizes the cost of cloud gaming across both cloud and edge servers. Applying DRL to the online resource allocation problem is not straightforward; we therefore propose several techniques to overcome the difficulties. Experimental results show that the DRL-based approach achieves up to 50% lower accumulated cost than the baselines in all scenarios while maintaining low assignment time. Additionally, DRL demonstrates higher scalability under various arrival rates.
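To make the placement decision described in the abstract concrete, the following is a minimal, self-contained sketch of the core choice the agent faces for each arriving player: render on the cloud, render on a nearby edge server, or split rendering (foreground on edge, background on cloud). It uses a one-step tabular learner as a stand-in for the paper's DRL policy; all prices, latency bounds, reward shapes, and names (EDGE_COST, LATENCY_LIMIT, simulate_cost, etc.) are invented for illustration and are not taken from the paper.

```python
import random

# Placement decisions an agent can make for an arriving player.
ACTIONS = ["cloud", "edge", "split"]
EDGE_COST, CLOUD_COST = 3.0, 1.0   # assumed per-slot rental prices
LATENCY_LIMIT = 100.0              # assumed playability bound (ms)

def discretize(latency_ms):
    """Bucket a measured player-to-server latency into a coarse state."""
    return min(int(latency_ms // 20), 9)

def simulate_cost(action, cloud_rtt, edge_rtt):
    """Toy cost model: rental price plus a penalty if latency exceeds the bound."""
    if action == "cloud":
        rtt, price = cloud_rtt, CLOUD_COST
    elif action == "edge":
        rtt, price = edge_rtt, EDGE_COST
    else:  # split: the edge-rendered foreground dominates interaction latency
        rtt, price = edge_rtt, 0.5 * EDGE_COST + 0.5 * CLOUD_COST
    penalty = 10.0 if rtt > LATENCY_LIMIT else 0.0
    return price + penalty

Q = {}                    # state -> expected cost per action (tabular stand-in)
alpha, epsilon = 0.1, 0.2 # learning rate and exploration rate

for episode in range(5000):
    # Each arrival is treated as an independent one-step decision here;
    # the paper's formulation is sequential, which a full DRL agent handles.
    cloud_rtt = random.uniform(40, 160)
    edge_rtt = random.uniform(5, 60)
    state = (discretize(cloud_rtt), discretize(edge_rtt))
    qs = Q.setdefault(state, [0.0] * len(ACTIONS))
    a = (random.randrange(len(ACTIONS)) if random.random() < epsilon
         else min(range(len(ACTIONS)), key=lambda i: qs[i]))
    cost = simulate_cost(ACTIONS[a], cloud_rtt, edge_rtt)
    qs[a] += alpha * (cost - qs[a])  # move toward the observed placement cost

state = (7, 1)  # a far cloud, near edge situation
if state in Q:
    best = ACTIONS[min(range(len(ACTIONS)), key=lambda i: Q[state][i])]
    print("learned placement for far-cloud/near-edge arrivals:", best)
```

The sketch illustrates why a learning-based policy fits this setting: the cost of each placement depends on latencies that vary per arrival, so the agent learns per-state expected costs rather than following a fixed rule. The paper's actual method replaces the table with a deep network and accounts for the sequential effects of occupying server capacity.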
Keywords
Cloud gaming, Workload sharing, Workload splitting, Cost minimization