A Survey on Reinforcement Learning-Aided Caching in Mobile Edge Networks
Mobile networks are experiencing a tremendous increase in data volume and user density. An efficient technique to alleviate this issue is to bring data closer to the users by exploiting the caches of edge network nodes, such as fixed or mobile access points and even user devices. Meanwhile, the fusion of machine learning and wireless networks offers a viable path to network optimization, in contrast to traditional optimization approaches, which incur high complexity or fail to provide optimal solutions. Among the various machine learning categories, reinforcement learning operates in an online and autonomous manner without relying on large sets of historical data for training. In this survey, reinforcement learning-aided mobile edge caching is presented, aiming to highlight the network gains achieved over conventional caching approaches. Taking into account the heterogeneity of sixth generation (6G) networks, learning-aided edge caching is examined across various wireless settings, such as fixed, vehicular, and flying networks, departing from traditional architectures. Furthermore, a categorization according to the targeted performance metric is provided, covering spectral, energy, and caching efficiency, average delay, and backhaul and fronthaul offloading. Finally, several open issues are discussed, aiming to stimulate further interest in this important research field.
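To illustrate the online, data-free learning highlighted above, the sketch below trains a tabular Q-learning agent to make cache admission and eviction decisions at a single edge node purely from hit/miss rewards observed during operation. The catalogue size, Zipf request model, admit-or-bypass action space, and all parameter values are illustrative assumptions for this sketch and are not taken from the survey.

```python
import random
from collections import defaultdict

# Toy single-node edge cache driven by tabular Q-learning (illustrative only).
N_CONTENTS, CACHE_SIZE = 50, 5        # catalogue size and cache capacity (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration
ZIPF_WEIGHTS = [1.0 / rank for rank in range(1, N_CONTENTS + 1)]  # popularity skew

# State: id of the requested content. Action: 0 = bypass, 1 = admit to the cache,
# evicting the cached content with the lowest learned value if the cache is full.
Q = defaultdict(lambda: [0.0, 0.0])

def act(content):
    """Epsilon-greedy action selection for the requested content."""
    if random.random() < EPSILON:
        return random.randint(0, 1)
    return int(Q[content][1] >= Q[content][0])

def simulate(n_requests=20000):
    """Serve Zipf-distributed requests online and learn from hit/miss rewards."""
    cache, hits = set(), 0
    state = random.choices(range(N_CONTENTS), weights=ZIPF_WEIGHTS)[0]
    for _ in range(n_requests):
        action = act(state)
        hit = state in cache
        hits += hit
        reward = 1.0 if hit else 0.0
        if action == 1 and not hit:
            if len(cache) >= CACHE_SIZE:
                cache.remove(min(cache, key=lambda c: max(Q[c])))
            cache.add(state)
        # The next request becomes the next state for the Q-learning update.
        next_state = random.choices(range(N_CONTENTS), weights=ZIPF_WEIGHTS)[0]
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[next_state]) - Q[state][action]
        )
        state = next_state
    return hits / n_requests

if __name__ == "__main__":
    print(f"Cache hit ratio after online learning: {simulate():.2f}")
```

The agent needs no offline training set: it improves its admission policy as requests arrive, which is the property the abstract attributes to reinforcement learning relative to other machine learning categories.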