Network Resource Allocation Strategy Based on Deep Reinforcement Learning

by Shidong Zhang, et al.

The traditional Internet has hit a bottleneck in allocating network resources for emerging technologies. Network virtualization (NV) is a promising future network architecture, and the virtual network embedding (VNE) algorithms that support it show great potential for solving resource allocation problems. Combined with efficient machine learning (ML) algorithms, a neural network model that closely approximates the substrate network environment can be constructed to train a reinforcement learning agent. This paper proposes a two-stage VNE algorithm based on deep reinforcement learning (TS-DRL-VNE) to address the tendency of existing heuristic algorithms to converge to local optima. To address the fact that existing ML-based VNE algorithms often neglect the importance of the substrate network representation and the training mode, a DRL VNE algorithm based on a full attribute matrix (FAM-DRL-VNE) is proposed. To address the fact that existing VNE algorithms often ignore changes in underlying substrate resources between virtual network requests, a DRL VNE algorithm based on matrix perturbation theory (MPT-DRL-VNE) is proposed. Experimental results show that these algorithms outperform comparison algorithms.
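To make the "full attribute matrix" idea concrete, here is a minimal, illustrative sketch (not the paper's implementation) of building a per-node attribute matrix for a substrate network, the kind of state representation a DRL agent could consume when ranking substrate nodes for embedding. The graph encoding and attribute choices (free CPU, node degree, adjacent bandwidth) are assumptions for illustration.

```python
# Hypothetical substrate-network state extraction for a DRL-based
# VNE agent. Each substrate node is described by a feature row
# [free CPU, degree, sum of adjacent link bandwidth], and columns
# are max-normalized so the features share a [0, 1] scale.

def attribute_matrix(cpu, edges):
    """cpu: {node: free CPU}; edges: {(u, v): bandwidth}.
    Returns one normalized feature row per node, sorted by node id."""
    deg = {n: 0 for n in cpu}
    bw_sum = {n: 0.0 for n in cpu}
    for (u, v), bw in edges.items():
        deg[u] += 1
        deg[v] += 1
        bw_sum[u] += bw
        bw_sum[v] += bw
    matrix = [[cpu[n], deg[n], bw_sum[n]] for n in sorted(cpu)]
    # Column-wise max normalization (guard against all-zero columns).
    maxes = [max(col) or 1.0 for col in zip(*matrix)]
    return [[x / m for x, m in zip(row, maxes)] for row in matrix]

# Toy substrate network: three nodes, two links.
cpu = {0: 50, 1: 80, 2: 30}
edges = {(0, 1): 100, (1, 2): 60}
print(attribute_matrix(cpu, edges))
# Node 1 has the most CPU, the highest degree, and the most adjacent
# bandwidth, so its row normalizes to [1.0, 1.0, 1.0].
```

A policy network would take this matrix as input and output a probability distribution over substrate nodes for each virtual node to be placed.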

A Multi-Agent Deep Reinforcement Learning Approach for RAN Resource Allocation in O-RAN

Artificial intelligence (AI) and Machine Learning (ML) are considered as...

Dynamic Virtual Network Embedding Algorithm based on Graph Convolution Neural Network and Reinforcement Learning

Network virtualization (NV) is a technology with broad application prosp...

VNE Solution for Network Differentiated QoS and Security Requirements: From the Perspective of Deep Reinforcement Learning

The rapid development and deployment of network services has brought a s...

Deep Reinforcement Learning for System-on-Chip: Myths and Realities

Neural schedulers based on deep reinforcement learning (DRL) have shown ...

Space-Air-Ground Integrated Multi-domain Network Resource Orchestration based on Virtual Network Architecture: a DRL Method

Traditional ground wireless communication networks cannot provide high-q...

No Free Lunch: Balancing Learning and Exploitation at the Network Edge

Over the last few years, the DRL paradigm has been widely adopted for 5G...

Deep Reinforcement Learning Based Spectrum Allocation in Integrated Access and Backhaul Networks

We develop a framework based on deep reinforcement learning (DRL) to so...
