Delay-Aware Multi-Agent Reinforcement Learning for Cooperative Adaptive Cruise Control with Model-based Stability Enhancement
CoRR (2024)
Abstract
Cooperative Adaptive Cruise Control (CACC) represents a quintessential
control strategy for orchestrating vehicular platoon movement within Connected
and Automated Vehicle (CAV) systems, significantly enhancing traffic efficiency
and reducing energy consumption. In recent years, data-driven methods such as
reinforcement learning (RL) have been applied to this task owing to their
efficiency and flexibility. However,
the delay issue, which often arises in real-world CACC systems, is rarely taken
into account by current RL-based approaches. To tackle this problem, we propose
a Delay-Aware Multi-Agent Reinforcement Learning (DAMARL) framework aimed at
achieving safe and stable control for CACC. We model the entire decision-making
process using a Multi-Agent Delay-Aware Markov Decision Process (MADA-MDP) and
develop a centralized training with decentralized execution (CTDE) MARL
framework for distributed control of CACC platoons. An attention
mechanism-integrated policy network is introduced to enhance the performance of
CAV communication and decision-making. Additionally, a velocity optimization
model-based action filter is incorporated to further ensure the stability of
the platoon. Experimental results across various delay conditions and platoon
sizes demonstrate that our approach consistently outperforms baseline methods
in terms of platoon safety, stability, and overall performance.
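The delay-aware MDP formulation the abstract refers to is commonly realized by augmenting each agent's observation with its buffer of pending (chosen but not yet applied) actions, which restores the Markov property under a known action delay. The sketch below illustrates this construction only; the class name, interface, and default-action padding are assumptions, not the paper's implementation.

```python
from collections import deque


class DelayAwareObservation:
    """Minimal sketch of delay-aware state augmentation: the augmented
    state (s_t, a_{t-d}, ..., a_{t-1}) appends the d pending actions to
    the raw observation, so the policy can account for actions already
    in flight. Hypothetical helper, not the paper's code."""

    def __init__(self, delay_steps, default_action=0.0):
        self.delay = delay_steps
        # Buffer of actions chosen but not yet applied, padded initially.
        self.pending = deque([default_action] * delay_steps,
                             maxlen=delay_steps)

    def step(self, observation, new_action):
        """Queue new_action, pop the action that takes effect now, and
        return (augmented_observation, action_to_apply)."""
        if self.delay == 0:
            return list(observation), new_action
        action_to_apply = self.pending.popleft()
        self.pending.append(new_action)
        augmented = list(observation) + list(self.pending)
        return augmented, action_to_apply
```

With a delay of two steps, the first two applied actions are the padding values, and the policy always sees the two actions still queued ahead of the plant.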
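The model-based action filter mentioned above can be pictured as a safety layer that overrides the RL acceleration whenever a one-step spacing prediction violates a minimum gap. The function below is a simplified stand-in under a constant-velocity lead-vehicle assumption; all names and thresholds are illustrative, not the paper's velocity optimization model.

```python
def safe_action_filter(accel_cmd, gap, ego_speed, lead_speed,
                       dt=0.1, min_gap=5.0, max_brake=3.0):
    """Hypothetical model-based action filter: project the inter-vehicle
    gap one control step ahead and replace the RL acceleration command
    with maximum braking if the predicted gap drops below min_gap.
    Units: m, m/s, m/s^2."""
    next_speed = ego_speed + accel_cmd * dt
    # Constant-velocity prediction for the lead vehicle over one step.
    next_gap = gap + (lead_speed - next_speed) * dt
    if next_gap < min_gap:
        return -max_brake
    return accel_cmd
```

Filtering actions this way keeps the learned policy in charge during normal driving while enforcing a hard spacing constraint in the short horizon the model can predict.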