Deep-Reinforcement-Learning-Based Scheduling with Contiguous Resource Allocation for Next-Generation Cellular Systems
Scheduling plays a pivotal role in multi-user wireless communications, since the quality of service of each user largely depends on the allocated radio resources. In this paper, we propose a novel scheduling algorithm with contiguous frequency-domain resource allocation (FDRA) based on deep reinforcement learning (DRL) that jointly selects users and allocates resource blocks (RBs). The scheduling problem is modeled as a Markov decision process, and a DRL agent determines which user to schedule and how many consecutive RBs to allocate to that user at each RB allocation step. The state space, action space, and reward function are carefully designed to train the DRL network. More specifically, the quasi-continuous action space inherent to contiguous FDRA is refined into a finite, discrete action space to strike a trade-off between inference latency and system performance. Simulation results show that the proposed DRL-based scheduling algorithm outperforms other representative baseline schemes while having lower online computational complexity.
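The action-space discretization described above can be illustrated with a minimal sketch. All names and sizes here (`NUM_USERS`, `NUM_RBS`, the set of run lengths) are hypothetical stand-ins, not values from the paper: each discrete action pairs a user with one of a small, fixed set of contiguous-run lengths, and the band is filled left to right one run at a time.

```python
from itertools import product

NUM_USERS = 4               # hypothetical number of schedulable users
NUM_RBS = 25                # hypothetical resource blocks in the band
RUN_LENGTHS = (1, 2, 4, 8)  # assumed discretized choices of consecutive RBs

# Each action pairs a user with a number of consecutive RBs, turning
# the quasi-continuous contiguous allocation into a finite set.
ACTIONS = list(product(range(NUM_USERS), RUN_LENGTHS))

def step(pointer, action):
    """Allocate a contiguous run of RBs starting at `pointer`.

    Returns (user, start, length, new_pointer); the run is clipped
    so it never extends past the band edge.
    """
    user, run = action
    length = min(run, NUM_RBS - pointer)
    return user, pointer, length, pointer + length

# Walk the band: at each step the trained agent would pick an action;
# here a fixed sequence stands in for the policy, for illustration only.
pointer = 0
schedule = []
for a in [(0, 8), (2, 8), (1, 8), (3, 4)]:
    user, start, length, pointer = step(pointer, a)
    schedule.append((user, start, length))
    if pointer >= NUM_RBS:
        break
```

With 4 users and 4 run lengths the agent chooses among only 16 actions per step, which is what keeps inference latency low compared with scoring every possible (start, length) pair.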