A centralized reinforcement learning method for multi-agent job scheduling in Grid

09/11/2016
by   Milad Moradi, et al.

One of the main challenges in Grid systems is designing an adaptive, scalable, and model-independent job scheduling method that achieves a desirable degree of load balancing and system efficiency. Centralized job scheduling methods have drawbacks such as a single point of failure and a lack of scalability, while decentralized methods require a coordination mechanism that keeps communication limited. In this paper, we propose a multi-agent approach to job scheduling in Grid systems, named Centralized Learning Distributed Scheduling (CLDS), built on the reinforcement learning framework. CLDS is a model-free approach that uses information about jobs and their completion times to estimate the efficiency of resources. In this method, a learner agent and several scheduler agents carry out learning and job scheduling, using a coordination strategy that keeps the communication cost at a limited level. We evaluated the efficiency of the CLDS method by designing and performing a set of experiments on a simulated Grid system under different system scales and loads. The results show that CLDS can effectively balance the system load even in large-scale and heavily loaded Grids, while maintaining its adaptive performance and scalability.
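To make the centralized-learning, distributed-scheduling idea concrete, the sketch below shows one plausible (not the authors' exact) realization: a central learner agent keeps a value estimate per resource, updated from reported job completion times with a simple incremental (Q-learning-style) rule, while scheduler agents pick resources epsilon-greedily and report outcomes back. The resource names, speeds, and the run_job_on simulator stub are hypothetical placeholders.

import random

# Hypothetical resource pool; the (hidden) speeds are used only by the toy simulator.
RESOURCE_SPEEDS = {"node-a": 2.0, "node-b": 1.0, "node-c": 0.5}


def run_job_on(job_size, resource):
    """Toy stand-in for executing a job on a Grid resource:
    completion time = job size / resource speed, plus noise."""
    return job_size / RESOURCE_SPEEDS[resource] + random.uniform(0.0, 0.5)


class LearnerAgent:
    """Central learner: keeps one value estimate per resource and updates it
    from the completion times reported by scheduler agents."""

    def __init__(self, resources, alpha=0.1):
        self.alpha = alpha                        # learning rate
        self.values = {r: 0.0 for r in resources}

    def update(self, resource, completion_time):
        # Reward is the negative completion time, so faster resources score
        # higher; a simple incremental update toward the observed reward.
        reward = -completion_time
        self.values[resource] += self.alpha * (reward - self.values[resource])


class SchedulerAgent:
    """Distributed scheduler: picks a resource epsilon-greedily from the
    learner's current estimates, then reports the observed completion time."""

    def __init__(self, learner, epsilon=0.1):
        self.learner = learner
        self.epsilon = epsilon

    def schedule(self, job_size):
        resources = list(self.learner.values)
        if random.random() < self.epsilon:        # explore a random resource
            resource = random.choice(resources)
        else:                                     # exploit the best-known resource
            resource = max(resources, key=self.learner.values.get)
        completion_time = run_job_on(job_size, resource)
        self.learner.update(resource, completion_time)
        return resource, completion_time


if __name__ == "__main__":
    learner = LearnerAgent(RESOURCE_SPEEDS)
    schedulers = [SchedulerAgent(learner) for _ in range(3)]
    for _ in range(300):
        random.choice(schedulers).schedule(job_size=random.uniform(1.0, 5.0))
    print(learner.values)   # the fastest resource should end up with the highest value

In this toy setup, learning stays centralized (only the learner updates estimates), while scheduling decisions remain distributed across agents, and the only communication is the per-job completion-time report, which keeps coordination overhead low.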
