D-HAL: Distributed Hierarchical Adversarial Learning for Multi-Agent Interaction in Autonomous Intersection Management

03/05/2023
by Guanzhou Li, et al.

Autonomous Intersection Management (AIM) provides a signal-free intersection scheduling paradigm for Connected Autonomous Vehicles (CAVs). Distributed learning has emerged as an attractive branch of AIM research: compared with centralized AIM, distributed AIM can be deployed on CAVs at lower cost, and compared with rule-based and optimization-based methods, learning-based methods handle complicated real-time intersection scenarios more flexibly. Deep reinforcement learning (DRL) is the mainstream distributed learning approach to AIM. However, the large-scale simultaneous decisions of multiple interacting agents and the rapid environmental changes those interactions induce pose challenges for DRL: its reward curve oscillates and is hard to converge, ultimately compromising safety and computational efficiency. To address this, we propose a non-RL learning framework called Distributed Hierarchical Adversarial Learning (D-HAL). The framework includes an actor network that generates each CAV's action at each step. An immediate discriminator evaluates the actor network's interaction performance at the current step, while a final discriminator gives a final evaluation of the overall trajectory produced by the series of interactions. In this framework, the long-term outcome of behavior motivates the actor network not through discounted rewards but through a designed adversarial loss function with discriminative labels. The proposed model is evaluated at a four-way, six-lane intersection and outperforms several state-of-the-art methods in ensuring safety and reducing travel time.
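The abstract does not disclose implementation details, so the sketch below only illustrates the general pattern it describes: a per-CAV actor trained against an immediate (per-step) discriminator and a final (whole-trajectory) discriminator via a binary adversarial loss instead of discounted rewards. All network sizes, feature dimensions, and the trajectory encoding (`traj_feat`) are assumptions for illustration, not the authors' design.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps a CAV's local observation to a bounded continuous action (e.g., acceleration)."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

class Discriminator(nn.Module):
    """Scores its input (a step pair or a trajectory feature) as desirable vs. undesirable."""
    def __init__(self, in_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # raw logit

# Hypothetical dimensions chosen for illustration only.
obs_dim, act_dim, traj_dim = 16, 1, 64
actor = Actor(obs_dim, act_dim)
d_immediate = Discriminator(obs_dim + act_dim)  # evaluates the current step
d_final = Discriminator(traj_dim)               # evaluates the whole trajectory
bce = nn.BCEWithLogitsLoss()

def actor_adversarial_loss(obs: torch.Tensor, traj_feat: torch.Tensor) -> torch.Tensor:
    """Adversarial actor loss: push the actor toward outputs both discriminators
    would label as desirable (label = 1), replacing a discounted-reward signal."""
    act = actor(obs)
    step_logit = d_immediate(torch.cat([obs, act], dim=-1))
    traj_logit = d_final(traj_feat)
    return bce(step_logit, torch.ones_like(step_logit)) + \
           bce(traj_logit, torch.ones_like(traj_logit))

# Example usage with random placeholder data.
loss = actor_adversarial_loss(torch.randn(32, obs_dim), torch.randn(32, traj_dim))
loss.backward()
```

In this hedged reading, the discriminators themselves would be trained on labeled desirable/undesirable interactions (e.g., collision-free vs. conflicting maneuvers); how D-HAL constructs those labels and the trajectory representation is specified in the full paper, not here.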
