Model-Based Deep Reinforcement Learning Framework for Channel Access in Wireless Networks

Jong In Park, Jun Byung Chae, Kae Won Choi

IEEE Internet of Things Journal (2024)

Abstract
In this article, we propose a model-based reinforcement learning (RL) algorithm for wireless channel access. Model-based RL is a relatively new RL paradigm that integrates the concept of a world model into the agent. The world model is built on a neural network and is capable of predicting future trajectories of actions, rewards, and observations. In this article, we focus on developing a sophisticated world model based on the partially observable Markov decision process (POMDP). The proposed world model can describe an environment in which only a partial observation emitted from the hidden state is available. To formulate the wireless channel access problem, we introduce two separate environments, one of which describes the channel occupancy dynamics while the other governs the data traffic arrival patterns. Both environments are modeled by the proposed POMDP-based world model. To design an agent capable of deciding on the next action, we propose a planning algorithm that, unlike existing model-free RL algorithms, makes use of the future trajectories generated by the trained world model. We have conducted extensive simulations to verify the performance of the proposed method in various wireless channel access scenarios.
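The planning idea described in the abstract, choosing the next action by imagining future trajectories under a learned world model, can be illustrated with a toy sketch. Everything below is a simplified assumption for illustration: the channel state is a single hidden busy/idle bit with hand-coded persistence dynamics (not the paper's trained neural-network POMDP model), and planning is done by random shooting over short action sequences rather than the authors' actor-critic planner.

```python
import random

class ToyChannelWorldModel:
    """Hypothetical stand-in for a trained world model of channel occupancy.

    Hidden state: 0 = idle, 1 = busy. Actions: 0 = wait, 1 = transmit.
    Dynamics and rewards are illustrative assumptions, not from the paper.
    """

    def __init__(self, p_stay=0.8, seed=0):
        self.p_stay = p_stay              # probability the channel state persists
        self.rng = random.Random(seed)

    def step(self, state, action):
        # Channel state evolves as a two-state Markov chain.
        next_state = state if self.rng.random() < self.p_stay else 1 - state
        # Reward: +1 for transmitting on an idle channel, -1 for a collision,
        # 0 for waiting.
        if action == 1:
            reward = 1.0 if state == 0 else -1.0
        else:
            reward = 0.0
        observation = next_state          # noiseless observation, for simplicity
        return next_state, observation, reward


def plan(model, state, horizon=3, rollouts=20):
    """Random-shooting planner: sample candidate action sequences, score each
    by the cumulative reward imagined under the world model, and return the
    first action of the best-scoring sequence."""
    best_action, best_return = 0, float("-inf")
    for _ in range(rollouts):
        actions = [model.rng.randint(0, 1) for _ in range(horizon)]
        s, total = state, 0.0
        for a in actions:
            s, _, r = model.step(s, a)
            total += r
        if total > best_return:
            best_action, best_return = actions[0], total
    return best_action
```

The key contrast with model-free RL is visible in `plan`: the agent never touches the real environment while deciding; it scores hypothetical futures entirely inside the (here, toy) world model.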
Key words
Actor-critic, model-based reinforcement learning (RL), partially observable Markov decision process (POMDP), wireless channel access, world model