Effective Multi-User Delay-Constrained Scheduling with Deep Recurrent Reinforcement Learning

08/30/2022
by Pihe Hu, et al.

Multi-user delay-constrained scheduling is important in many real-world applications, including wireless communication, live streaming, and cloud computing. Yet it poses a critical challenge, since the scheduler needs to make real-time decisions that guarantee the delay and resource constraints simultaneously, without prior knowledge of the system dynamics, which can be time-varying and hard to estimate. Moreover, many practical scenarios suffer from partial observability issues, e.g., due to sensing noise or hidden correlation. To tackle these challenges, we propose a deep reinforcement learning (DRL) algorithm, named Recurrent Softmax Delayed Deep Double Deterministic Policy Gradient (RSD4), a data-driven method based on a Partially Observable Markov Decision Process (POMDP) formulation. RSD4 guarantees resource and delay constraints via a Lagrangian dual and delay-sensitive queues, respectively. It also efficiently tackles partial observability with a memory mechanism enabled by a recurrent neural network (RNN), and introduces user-level decomposition and node-level merging to ensure scalability. Extensive experiments on simulated and real-world datasets demonstrate that RSD4 is robust to system dynamics and partially observable environments, and achieves superior performance over existing DRL and non-DRL-based methods.
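The abstract names two mechanisms: a Lagrangian dual to enforce the resource constraint and an RNN-based memory to cope with partial observability. The snippet below is a minimal, hypothetical sketch of these two ideas, not the authors' RSD4 implementation; all names (RecurrentQNet, lagrangian_reward, dual_step) and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code) of two ideas from the abstract:
# (1) an LSTM-based network whose hidden state summarizes the observation history,
# (2) a Lagrangian dual update that penalizes resource-constraint violations.

import torch
import torch.nn as nn


class RecurrentQNet(nn.Module):
    """Recurrent Q-network: the LSTM hidden state acts as memory under partial observability."""

    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs_seq, hidden_state=None):
        # obs_seq: (batch, time, obs_dim); hidden_state carries memory across calls
        out, hidden_state = self.lstm(obs_seq, hidden_state)
        return self.head(out), hidden_state


def lagrangian_reward(reward, resource_usage, budget, lam):
    """Reward shaped by the dual variable lam to discourage exceeding the resource budget."""
    return reward - lam * (resource_usage - budget)


def dual_step(lam, resource_usage, budget, lr=1e-3):
    """Projected gradient ascent on the dual variable, kept non-negative."""
    return max(0.0, lam + lr * (resource_usage - budget))


if __name__ == "__main__":
    net = RecurrentQNet(obs_dim=8, act_dim=4)
    obs = torch.randn(2, 10, 8)      # batch of 2 observation sequences, length 10
    q_values, _ = net(obs)
    print(q_values.shape)            # torch.Size([2, 10, 4])
    print(dual_step(lam=0.5, resource_usage=1.2, budget=1.0))
```

In this sketch the dual variable is updated by ascent on the constraint violation, so sustained over-use of the resource raises the penalty in the shaped reward, which is the standard primal-dual treatment of an average resource constraint.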
