Learning-Based Control With Decentralized Dynamic Event-Triggering for Vehicle Systems

IEEE Transactions on Industrial Informatics (2023)

Abstract
The optimal control of a multi-input system can be described as a multiplayer nonzero-sum differential game. This article presents an event-based adaptive learning scheme that theoretically approximates the Nash equilibrium and practically addresses the cruise control problem for Caltech vehicle systems. The design proceeds along two lines. On one hand, reinforcement learning is implemented through a critic neural network architecture and the replay of stored experience data. On the other hand, because each player's preferences differ, a decentralized triggering scheme is adopted to reduce communication. Based on the continuous state, a locally sampled state is defined for each player, and a static triggering mechanism is formulated first. Decentralized dynamic triggering is then obtained by designing an auxiliary variable whose dynamics are constructed from the static triggering information. Next, the proposed learning scheme is examined on a four-player numerical system. Finally, the learning-based controller is tested on a single-vehicle system under different tracking commands and is then extended to multivehicle systems to realize cooperative optimization by introducing a novel game-in-game structure.
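To make the triggering idea concrete, the following is a minimal sketch (not the paper's exact formulation) of a decentralized dynamic event-trigger for one player i: an auxiliary variable is driven by the margin of a static quadratic threshold condition, and the player samples its state only when the combined condition is violated. The gains sigma_i, lambda_i, theta_i and the quadratic threshold form are illustrative assumptions.

```python
import numpy as np

class DynamicTrigger:
    """Hypothetical per-player dynamic event-trigger (illustrative only)."""

    def __init__(self, sigma_i, lambda_i, theta_i, eta0=1.0):
        self.sigma = sigma_i   # static threshold gain (assumed)
        self.lam = lambda_i    # decay rate of the auxiliary variable (assumed)
        self.theta = theta_i   # weight on the static condition (assumed)
        self.eta = eta0        # auxiliary (dynamic) variable

    def static_margin(self, x, x_hat):
        # Static condition margin: positive while the local sampling error is small.
        e = x_hat - x          # player i's local sampling error
        return self.sigma * np.dot(x, x) - np.dot(e, e)

    def step(self, x, x_hat, dt):
        # Integrate the auxiliary dynamics driven by the static margin,
        # then evaluate the dynamic triggering condition.
        m = self.static_margin(x, x_hat)
        self.eta += dt * (-self.lam * self.eta + m)
        triggered = self.eta + self.theta * m < 0.0
        if triggered:
            x_hat = x.copy()   # player i samples its state and updates its controller
        return triggered, x_hat
```

In a decentralized setup, each player would hold its own instance with its own gains, so triggering instants differ across players and communication is reduced relative to a common (centralized) trigger.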
Key words
Cruise control, differential game, dynamic triggering, neural network (NN), reinforcement learning (RL), vehicle system