Multi-agent reinforcement learning for ecological car-following control in mixed traffic

IEEE Transactions on Transportation Electrification (2024)

Abstract
The push towards sustainable transportation emphasizes vehicular energy efficiency in mixed traffic scenarios. A research hotspot is the cooperative control of connected and automated vehicles (CAVs), particularly in contexts involving the uncertainties of human-driven vehicles (HDVs). Cooperative control strategies are pivotal in improving driving safety and traffic efficiency and in reducing energy consumption. Our study introduces a cooperative control strategy for CAVs in mixed traffic based on the multi-agent twin delayed deep deterministic policy gradient (MATD3) algorithm. We use the intelligent driver model (IDM) to calibrate and model human driving behaviors with 1737 car-following events from the Next Generation Simulation (NGSIM) dataset, chosen for their high frequency in real-world driving. The reward function of MATD3 integrates safety, traffic efficiency, passenger comfort, and energy efficiency. An action mask scheme is incorporated to prevent collisions, thereby boosting learning efficiency. Monte Carlo simulation results show that our strategy outperforms IDM and model predictive control in improving energy efficiency by an average of 7.73% and 3.38%, respectively. Furthermore, our framework offers extended benefits to HDVs, which achieve improved energy efficiency by following the CAVs' control pattern. A case study further demonstrates that a 'moderate' driving style results in lower energy consumption, effectively linking human behaviors to energy efficiency.
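The IDM used to model human driving behavior is a standard car-following formula; a minimal sketch is below. The parameter defaults shown are illustrative textbook values, not the paper's NGSIM-calibrated ones, and the function name is our own.

```python
import math

def idm_acceleration(v, v_lead, gap,
                     v0=33.3, T=1.5, a_max=1.0, b=2.0, s0=2.0, delta=4):
    """Intelligent Driver Model (IDM) acceleration for a following vehicle.

    v      : ego speed (m/s)
    v_lead : leading vehicle speed (m/s)
    gap    : bumper-to-bumper gap to the leader (m)
    v0     : desired free-flow speed; T: safe time headway;
    a_max  : maximum acceleration; b: comfortable deceleration;
    s0     : minimum standstill gap; delta: acceleration exponent.
    Defaults are illustrative, not the calibrated values from the paper.
    """
    dv = v - v_lead  # closing speed (positive when approaching the leader)
    # Desired dynamic gap: standstill gap + headway term + braking interaction term
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    # Free-road term minus interaction term
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)
```

In steady following at equal speeds with a large gap, the interaction term vanishes and the model accelerates toward the desired speed; with a small gap and a slower leader, the squared interaction term dominates and produces strong braking.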
Key words
Connected and automated vehicles, Multi-agent reinforcement learning, Eco-driving, Mixed traffic, Car-following, Intelligent driving, Model predictive control