DRL-Based Energy Efficient Power Adaptation for Fast HARQ in the Finite Blocklength Regime.

Xinyi Wu, Deli Qiao

ICNC (2024)

Abstract
In this paper, a point-to-point communication system with low-latency and high-reliability requirements is studied. A fast hybrid automatic repeat request (HARQ) protocol is applied, in which some HARQ feedback rounds are omitted and the associated channel uses are reallocated to data transmission. Based on existing results on the decoding error probability of finite blocklength (FBL) codes, a long-term bit energy minimization problem is formulated subject to feedback delay and reliability constraints. Given the non-convexity of the optimization problem and the small decoding error probabilities involved, the problem is recast as a finite-episode Markov decision process (MDP) with a double-layer penalty reward, and an actor-critic based deep reinforcement learning (DRL) algorithm is designed to solve it. Numerical evaluations show that, compared with conventional HARQ and the existing fast HARQ protocol, the proposed scheme is more energy efficient, especially when the packet size is large.
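
The abstract invokes results on the decoding error probability of finite blocklength codes. Results of this kind typically rest on the normal approximation of Polyanskiy, Poor, and Verdú (2010); whether the paper uses exactly this expression is an assumption here, since the abstract does not state it. In LaTeX:

\epsilon \approx Q\!\left(\frac{n\,C(\gamma) - k + \tfrac{1}{2}\log_2 n}{\sqrt{n\,V(\gamma)}}\right),
\qquad C(\gamma) = \log_2(1+\gamma),
\qquad V(\gamma) = \left(1 - \frac{1}{(1+\gamma)^2}\right)\log_2^2 e,

where n is the blocklength in channel uses, k the number of information bits, \gamma the received SNR, and Q(\cdot) the Gaussian tail function.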
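The abstract likewise leaves the double-layer penalty reward unspecified. The Python sketch below is an illustrative guess, not the paper's design: it assumes one penalty layer for missing the reliability target and a second for exhausting the HARQ round (delay) budget, and the names fbl_error_prob and reward as well as the weights w_rel and w_delay are hypothetical.

import math

def fbl_error_prob(snr, n, k):
    """Normal approximation of the FBL decoding error probability
    (assumed form; see the equation above). Expects n >= 1."""
    cap = math.log2(1.0 + snr)                       # Shannon capacity, bits/use
    disp = (1.0 - 1.0 / (1.0 + snr) ** 2) * math.log2(math.e) ** 2
    if disp <= 0.0:                                  # degenerate zero-SNR case
        return 0.0 if n * cap >= k else 1.0
    arg = (n * cap - k + 0.5 * math.log2(n)) / math.sqrt(n * disp)
    return 0.5 * math.erfc(arg / math.sqrt(2.0))     # Q(arg)

def reward(total_energy, k, eps_round, eps_target, succeeded,
           round_idx, max_rounds, w_rel=1e3, w_delay=1e2):
    """Hypothetical double-layer penalty reward: negative energy per bit
    on success; on failure, stack a reliability penalty (layer 1) and a
    delay-budget penalty (layer 2)."""
    if succeeded:
        return -total_energy / k                     # long-term bit energy objective
    r = 0.0
    if eps_round > eps_target:                       # layer 1: reliability violation
        r -= w_rel * (eps_round - eps_target)
    if round_idx >= max_rounds:                      # layer 2: HARQ rounds exhausted
        r -= w_delay
    return r

# Example: a failed final round that misses a 1e-5 reliability target.
print(reward(total_energy=1.5, k=256, eps_round=1e-3, eps_target=1e-5,
             succeeded=False, round_idx=3, max_rounds=3))

An actor-critic agent of the kind the abstract describes would feed the per-round transmit power into fbl_error_prob and accumulate these rewards over a finite episode; the penalty weights trade off constraint enforcement against energy, and their tuning is part of the assumed design, not reported by the paper.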
Key words
Energy Efficiency, Adaptive Power, Finite Blocklength, Hybrid Automatic Repeat Request, Finite Blocklength Regime, Optimization Problem, Non-convex Problem, Small Probability, Deep Reinforcement Learning, Markov Decision Process, Long-term Problems, Presence Of Constraints, Packet Size, Fast Protocol, Presence Of Delay, Deep Reinforcement Learning Algorithm, Feedback Delay, Delay Constraint, Reliability Constraints, Point-to-point Communication, Actor Network, Reward Function, Penalty Term, Transmission Failure, Internet Of Things, Forward Error Correction, Reliable Transmission, State-value Function, Information Bits, Conventional Scheme