How Neural Architectures Affect Deep Learning for Communication Networks?

11/03/2021
by Yifei Shen, et al.

In recent years, there has been a surge in applying deep learning to various challenging design problems in communication networks. Early attempts adopted neural architectures inherited from applications such as computer vision, which suffer from poor generalization, limited scalability, and a lack of interpretability. To tackle these issues, domain knowledge has been integrated into the neural architecture design, which achieves near-optimal performance in large-scale networks and generalizes well across different system settings. This paper endeavors to theoretically validate the importance and effects of neural architectures when applying deep learning to the design of communication networks. We prove that by exploiting permutation invariance, a common property in communication networks, graph neural networks (GNNs) converge faster and generalize better than fully connected multi-layer perceptrons (MLPs), especially when the number of nodes (e.g., users, base stations, or antennas) is large. Specifically, we prove that under common assumptions, for a communication network with n nodes, GNNs converge O(n log n) times faster and their generalization error is O(n) times lower, compared with MLPs.
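To make the permutation-invariance argument concrete, the following is a minimal, illustrative sketch (not code from the paper): a single message-passing layer that applies the same weights to every node and sum-aggregates neighbor features is permutation equivariant, whereas an MLP acting on the flattened node features has a separate weight for every (node, feature) slot and is not. All variable names and dimensions here are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4                       # n nodes (e.g., users), d features per node
X = rng.normal(size=(n, d))       # node features, one row per node
A = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(A, 0)            # random adjacency matrix, no self-loops

W_self = rng.normal(size=(d, d))  # weights shared by all nodes
W_neigh = rng.normal(size=(d, d))

def gnn_layer(A, X):
    """One GNN layer: each node combines its own features with the sum of its
    neighbors' features, using identical weights for every node."""
    return np.maximum(X @ W_self + A @ X @ W_neigh, 0.0)

# Relabel the nodes with a random permutation.
perm = rng.permutation(n)
P = np.eye(n)[perm]

out = gnn_layer(A, X)
out_perm = gnn_layer(P @ A @ P.T, P @ X)

# Permutation equivariance: relabeling the inputs only relabels the outputs.
print(np.allclose(out_perm, P @ out))        # True

# Contrast: an MLP on the flattened features ties each weight to a specific
# node position, so relabeled inputs do not simply give relabeled outputs.
W_mlp = rng.normal(size=(n * d, n * d))
mlp = lambda X: np.maximum(X.reshape(-1) @ W_mlp, 0.0)

out_mlp = mlp(X).reshape(n, d)
out_mlp_perm = mlp(P @ X).reshape(n, d)
print(np.allclose(out_mlp_perm, P @ out_mlp))  # generally False
```

Because the GNN's weights are shared across nodes, its effective parameter count does not grow with n the way the MLP's does, which is the structural property behind the faster convergence and lower generalization error claimed in the abstract.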
