Reinforcement Learning on Job Shop Scheduling Problems Using Graph Networks

This paper presents a novel approach to job shop scheduling problems (JSSP) using deep reinforcement learning. To account for the complexity of production environments, we employ graph neural networks to model the various relations within them. Furthermore, we cast the JSSP as a distributed optimization problem in which learning agents are individually assigned to resources, which allows for greater flexibility with respect to changing production environments. The proposed distributed RL agents, each of which optimizes the production schedule for a single resource, run alongside a co-simulation framework of the production environment to obtain the required amount of training data. The approach is applied to a multi-robot environment and a complex production scheduling benchmark environment. Initial results underline the applicability and performance of the proposed method.
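As a rough illustration of the general idea (not the authors' implementation, whose details are not given in the abstract), the sketch below encodes a tiny JSSP instance as a disjunctive graph, runs one round of mean-aggregation message passing to produce per-operation embeddings, and lets a hypothetical per-resource agent score the operations queued at its machine. The instance, node features, aggregation scheme, and scoring head are all assumptions chosen for clarity.

```python
# Hypothetical sketch (not the paper's code): encode a tiny JSSP instance as a
# disjunctive graph and compute per-operation embeddings with one round of
# mean-aggregation message passing, as a GNN-based scheduling agent might.
import numpy as np

# Toy instance: 2 jobs, 2 machines. Each operation is (job, step, machine, duration).
operations = [
    (0, 0, 0, 3), (0, 1, 1, 2),   # job 0: machine 0 then machine 1
    (1, 0, 1, 4), (1, 1, 0, 1),   # job 1: machine 1 then machine 0
]
n = len(operations)

# Adjacency: conjunctive edges link consecutive operations of the same job,
# disjunctive edges link operations competing for the same machine (resource).
adj = np.zeros((n, n))
for i, (ji, si, mi, _) in enumerate(operations):
    for j, (jj, sj, mj, _) in enumerate(operations):
        if i == j:
            continue
        if ji == jj and abs(si - sj) == 1:   # same job, adjacent steps
            adj[i, j] = 1.0
        if mi == mj:                         # same machine
            adj[i, j] = 1.0

# Node features: [normalized duration, step index within the job].
x = np.array([[d / 4.0, s] for (_, s, _, d) in operations])

# One message-passing layer: mean over neighbours, then a linear map + ReLU.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(2 * x.shape[1], 8))   # would be learned in practice
deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
msg = adj @ x / deg                                    # aggregated neighbour features
h = np.maximum(np.concatenate([x, msg], axis=1) @ W, 0.0)

# A per-resource agent would score the operations currently queued at its machine.
machine_id = 0
queued = [i for i, (_, _, m, _) in enumerate(operations) if m == machine_id]
scores = h[queued].sum(axis=1)                         # stand-in for a policy head
print("next operation on machine 0:", queued[int(np.argmax(scores))])
```

In a trained system, the random weight matrix would be replaced by learned GNN and policy parameters, and each resource-assigned agent would be updated with a standard RL algorithm from interactions with the co-simulated production environment.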
