Low-complexity Recurrent Neural Network-based Polar Decoder with Weight Quantization Mechanism

10/29/2018
by Chieh-Fang Teng, et al.

Polar codes have drawn much attention and have been adopted in 5G New Radio (NR) due to their capacity-achieving performance. Recently, as the emerging deep learning (DL) technique has achieved breakthroughs in many fields, neural network decoders were proposed to obtain faster convergence and better performance than belief propagation (BP) decoding. However, neural networks are memory-intensive, which hinders the deployment of DL in communication systems. In this work, a low-complexity recurrent neural network (RNN) polar decoder with codebook-based weight quantization is proposed. Our test results show that we can effectively reduce the memory overhead by 98% and alleviate the computational complexity with only slight performance loss.
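The abstract does not spell out how the codebook-based weight quantization works. As a rough illustration only, the sketch below clusters a weight matrix into a small set of shared values with 1-D k-means and stores per-weight codeword indices instead of full-precision weights; the function names, 4-bit width, and quantile initialization are assumptions for this example, not the authors' implementation.

```python
import numpy as np

def build_codebook(weights, bits=4, iters=20):
    """Cluster a weight tensor into 2**bits shared values (a codebook)
    via simple 1-D k-means. Returns (codebook, index tensor)."""
    flat = weights.ravel()
    k = 2 ** bits
    # Initialize codewords at evenly spaced quantiles of the weight distribution.
    codebook = np.quantile(flat, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # Assign each weight to its nearest codeword.
        idx = np.abs(flat[:, None] - codebook[None, :]).argmin(axis=1)
        # Move each codeword to the mean of the weights assigned to it.
        for j in range(k):
            members = flat[idx == j]
            if members.size:
                codebook[j] = members.mean()
    return codebook, idx.reshape(weights.shape)

def dequantize(codebook, indices):
    """Reconstruct an approximate weight tensor from codebook indices."""
    return codebook[indices]

# Example: quantize a hypothetical RNN weight matrix to 4-bit indices.
W = np.random.randn(64, 128).astype(np.float32)
codebook, idx = build_codebook(W, bits=4)
W_hat = dequantize(codebook, idx)
print("max abs quantization error:", np.abs(W - W_hat).max())
```

With this toy configuration, storing 4-bit indices plus a 16-entry codebook in place of 32-bit floats cuts weight memory by roughly a factor of eight; the 98% reduction reported in the paper presumably relies on a more aggressive combination of weight sharing and quantization than this sketch shows.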

