A QoS-aware workload routing and server speed scaling policy for energy-efficient data centers: a robust queueing theoretic approach
Maintaining energy efficiency in large data centers depends on the ability to route workloads and control server speeds according to fluctuating demand. Dynamic algorithms often require management to install complicated software or expensive hardware to communicate with routers and servers. This paper proposes a static routing and server speed scaling policy that may achieve energy efficiency comparable to that of dynamic algorithms while eliminating the need for frequent communication among resources, without compromising quality of service (QoS). We use a robust queueing approach to handle response time constraints, e.g., service level agreements (SLAs). We model each server as a G/G/1 processor sharing (PS) queue and use uncertainty sets to define the domain of the random variables. A comparison with a dynamic algorithm shows that the proposed static policy provides competitive energy efficiency and satisfactory QoS.
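To make the idea of a static, QoS-constrained speed choice concrete, the sketch below picks the lowest server speed whose worst-case response time estimate still meets an SLA target. It is only a minimal illustration, not the paper's formulation: it uses a Kingman-style G/G/1 FIFO bound with uncertainty-set-like variability budgets (Gamma_a, Gamma_s) as a stand-in for the paper's robust G/G/1 PS analysis, and all parameter names and values are hypothetical.

```python
"""Illustrative sketch only: choose a static server speed that meets an SLA
under a worst-case response time estimate. The bound, the power/speed menu,
and all symbols (lam, Gamma_a, Gamma_s, speeds) are assumptions, not the
paper's robust G/G/1 PS model."""


def worst_case_response_time(lam, mean_job_size, speed, Gamma_a, Gamma_s):
    """Kingman-style upper estimate of response time for a single-server
    queue served at `speed`; Gamma_a and Gamma_s act as variability budgets
    for interarrival times and job sizes (assumed form)."""
    mu = speed / mean_job_size          # service rate at this speed
    rho = lam / mu                      # utilization
    if rho >= 1.0:
        return float("inf")             # unstable: no finite bound
    # Queueing delay grows with the variability budgets and blows up as
    # rho -> 1; the mean service time 1/mu is added on top.
    wait = (rho / (1.0 - rho)) * (Gamma_a**2 + Gamma_s**2) / (2.0 * mu)
    return wait + 1.0 / mu


def min_speed_meeting_sla(lam, mean_job_size, sla, speeds, Gamma_a, Gamma_s):
    """Static policy flavor: the lowest available speed whose worst-case
    response time satisfies the SLA (lower speed -> lower power draw)."""
    for s in sorted(speeds):
        if worst_case_response_time(lam, mean_job_size, s, Gamma_a, Gamma_s) <= sla:
            return s
    return None  # no available speed can guarantee the SLA


if __name__ == "__main__":
    # Hypothetical numbers: 8 jobs/s, mean job size 1 unit of work, 0.5 s SLA.
    speed = min_speed_meeting_sla(lam=8.0, mean_job_size=1.0, sla=0.5,
                                  speeds=[6, 8, 10, 12, 16],
                                  Gamma_a=1.2, Gamma_s=1.0)
    print("lowest SLA-feasible static speed:", speed)
```

Because the speed is fixed offline from the uncertainty-set parameters rather than adjusted online, no runtime communication with routers or servers is needed, which is the trade-off the abstract highlights.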