Deep Reinforcement Learning Based Mode Selection and Resource Allocation for Cellular V2X Communications

02/13/2020
by   Xinran Zhang, et al.

Cellular vehicle-to-everything (V2X) communication is crucial for supporting diverse future vehicular applications. However, for safety-critical applications, unstable vehicle-to-vehicle (V2V) links and the high signalling overhead of centralized resource allocation approaches become bottlenecks. In this paper, we investigate a joint optimization problem of transmission mode selection and resource allocation for cellular V2X communications. In particular, the problem is formulated as a Markov decision process, and a deep reinforcement learning (DRL) based decentralized algorithm is proposed to maximize the sum capacity of vehicle-to-infrastructure users while meeting the latency and reliability requirements of V2V pairs. Moreover, considering the training limitations of local DRL models, a two-timescale federated DRL algorithm is developed to help obtain a robust model: a graph-theory-based vehicle clustering algorithm is executed on a large timescale, and a federated learning algorithm is conducted on a small timescale. Simulation results show that the proposed DRL-based algorithm outperforms other decentralized baselines and validate the superiority of the two-timescale federated DRL algorithm for newly activated V2V pairs.
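To make the two-timescale structure concrete, the sketch below illustrates the general pattern the abstract describes, not the authors' actual implementation: each V2V agent keeps a local model and selects a (mode, resource block, power) action in a decentralized, epsilon-greedy manner; clusters are re-formed on a large timescale; and within each cluster the local model parameters are federated-averaged on a small timescale. The state and action dimensions, the linear "Q-network", the position-based clustering rule (standing in for the paper's graph-theoretic method), and the random-environment reward are all illustrative assumptions.

```python
# Minimal two-timescale sketch (assumed setup, not the paper's implementation).
import numpy as np

N_AGENTS = 8           # number of V2V pairs (assumed)
STATE_DIM = 10         # local CSI, interference, latency budget, etc. (assumed)
N_ACTIONS = 2 * 4 * 3  # mode {reuse, dedicated} x RB choice x power level (assumed)

rng = np.random.default_rng(0)

# Local "Q-networks": a single linear layer W with Q(s) = W @ s (illustrative
# stand-in for a deep Q-network).
weights = [rng.normal(scale=0.1, size=(N_ACTIONS, STATE_DIM)) for _ in range(N_AGENTS)]

def select_action(W, state, epsilon=0.1):
    """Decentralized epsilon-greedy choice of a joint (mode, RB, power) index."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(W @ state))

def local_update(W, state, action, reward, next_state, gamma=0.95, lr=1e-2):
    """One tabular-style Q-learning step on the local model (placeholder for DQN training)."""
    td_target = reward + gamma * np.max(W @ next_state)
    td_error = td_target - (W @ state)[action]
    W[action] += lr * td_error * state
    return W

def cluster_agents(positions, n_clusters=2):
    """Large-timescale clustering; the paper uses a graph-theory-based method,
    replaced here by a simple position split purely for illustration."""
    order = np.argsort(positions)
    return np.array_split(order, n_clusters)

def federated_average(weights, cluster):
    """Small-timescale federated averaging: cluster members share one averaged model."""
    avg = np.mean([weights[i] for i in cluster], axis=0)
    for i in cluster:
        weights[i] = avg.copy()

# Toy two-timescale loop with a random environment as a stand-in.
positions = rng.uniform(0, 1000, size=N_AGENTS)
for large_step in range(3):                     # large timescale: re-cluster vehicles
    clusters = cluster_agents(positions)
    for small_step in range(20):                # small timescale: local training + FedAvg
        for i in range(N_AGENTS):
            s = rng.normal(size=STATE_DIM)
            a = select_action(weights[i], s)
            r = rng.normal()                    # stand-in for the V2I sum-capacity reward
            s_next = rng.normal(size=STATE_DIM)
            weights[i] = local_update(weights[i], s, a, r, s_next)
        for cluster in clusters:
            federated_average(weights, cluster)
```

The averaged cluster model is what a newly activated V2V pair could download as a warm start, which is the scenario where the abstract reports the federated variant helps most.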
