
Efficient DRL-Based Congestion Control With Ultra-Low Overhead

IEEE/ACM Transactions on Networking (2023)

Abstract
Previous congestion control (CC) algorithms based on deep reinforcement learning (DRL) directly adjust the flow sending rate in response to dynamic bandwidth changes, resulting in high inference overhead. Such overhead may consume considerable CPU resources and hurt datapath performance. In this paper, we present a hierarchical congestion control algorithm that fully exploits the performance gains of deep reinforcement learning but with ultra-low overhead. At its heart, it decouples the congestion control task into two subtasks at different timescales and handles them with different components: 1) a lightweight CC executor that performs fine-grained control in response to dynamic bandwidth changes; and 2) an RL agent that works at a coarse-grained level, generating control sub-policies for the CC executor. This two-level control architecture provides fine-grained DRL-based control with low model inference overhead. Real-world experiments and emulations show that the proposed scheme achieves consistently high performance across various network conditions, with control overhead reduced by at least 80% compared to its DRL-based counterparts and comparable to that of classic CC schemes such as Cubic.
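The two-timescale decoupling described above can be sketched in a few lines. The following is a minimal illustration, not the paper's actual algorithm: the sub-policy parameterization (an AIMD-style rule), the `coarse_agent` heuristic, and all numeric values are hypothetical assumptions chosen only to show how an infrequent, expensive policy decision can drive a cheap per-ACK executor.

```python
# Illustrative two-timescale control loop (hypothetical sketch, not the
# paper's code). A coarse-grained "agent" periodically emits a sub-policy
# (here, a simple additive-increase / multiplicative-decrease rule), and a
# lightweight executor applies it per-ACK without any model inference.

from dataclasses import dataclass

@dataclass
class SubPolicy:
    increase: float   # additive rate step when no congestion is signaled
    decrease: float   # multiplicative backoff factor on congestion

def executor_step(rate: float, congested: bool, policy: SubPolicy) -> float:
    """Fine-grained, per-ACK rate update -- cheap, no NN inference."""
    if congested:
        return max(rate * policy.decrease, 1.0)
    return rate + policy.increase

def coarse_agent(loss_rate: float) -> SubPolicy:
    """Stand-in for the RL agent: maps coarse statistics to a sub-policy."""
    if loss_rate > 0.05:
        return SubPolicy(increase=0.5, decrease=0.5)   # conservative
    return SubPolicy(increase=2.0, decrease=0.7)       # aggressive

# Simulate: the agent runs once per epoch; the executor runs on every "ACK".
rate = 10.0
policy = coarse_agent(loss_rate=0.01)   # infrequent, expensive decision
for ack in range(5):
    congested = (ack == 3)              # one congestion signal in this epoch
    rate = executor_step(rate, congested, policy)  # frequent, cheap decisions
print(round(rate, 2))  # -> 13.2
```

The design point this mirrors is that only `coarse_agent` would involve neural-network inference; `executor_step` is a handful of arithmetic operations, so running it at per-packet granularity costs little, which is how the paper's architecture can keep DRL in the loop while cutting inference overhead.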
Key words
Congestion control, deep reinforcement learning, transport layer protocols