Parameter Sharing Decoder Pair for Auto Composing

10/31/2019
by Xu Zhao, et al.

Auto composing has been an active and appealing research area over the past few years, and considerable effort has gone into building more robust models for this task. With the rapid evolution of deep learning techniques, deep neural network-based language models have become dominant. Notably, the transformer architecture has proven efficient and promising for modeling text. However, transformer-based language models usually contain a huge number of parameters, and the resulting models are often too large to put into production for storage-limited applications. In this paper, we propose a parameter sharing decoder pair (PSDP), which dramatically reduces the number of parameters while maintaining the capability of generating understandable and reasonable compositions. Works created by the proposed model are presented to demonstrate its effectiveness.
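The abstract does not spell out the internal structure of the PSDP, but the core idea it names, sharing decoder parameters to shrink the model, can be sketched generically. Below is a minimal, illustrative sketch rather than the authors' implementation: a decoder-only transformer stack in which each pair of consecutive layers reuses the same block instance, so a stack of 2N layers stores only N layers' worth of weights. All class names, hyperparameters, and the pairing scheme here are assumptions made for illustration.

# Illustrative sketch only: generic cross-layer parameter sharing in a
# decoder-only transformer LM, not the paper's exact PSDP architecture.
import torch
import torch.nn as nn


class DecoderBlock(nn.Module):
    """One causal self-attention + feed-forward block (hyperparameters assumed)."""

    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        seq_len = x.size(1)
        # Causal mask: True marks positions a token may not attend to.
        causal = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device), 1
        )
        attn_out, _ = self.attn(x, x, x, attn_mask=causal, need_weights=False)
        x = self.norm1(x + attn_out)
        return self.norm2(x + self.ff(x))


class SharedPairDecoder(nn.Module):
    """Decoder stack where each pair of consecutive layers reuses one block,
    roughly halving the layer parameters of an unshared stack."""

    def __init__(self, n_layers: int, d_model: int = 512, n_heads: int = 8, d_ff: int = 2048):
        super().__init__()
        assert n_layers % 2 == 0, "expects an even number of layers"
        # Only n_layers // 2 distinct blocks are created; each is applied twice.
        self.blocks = nn.ModuleList(
            [DecoderBlock(d_model, n_heads, d_ff) for _ in range(n_layers // 2)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = block(x)  # first pass through the block
            x = block(x)  # second pass with the same (shared) weights
        return x


if __name__ == "__main__":
    model = SharedPairDecoder(n_layers=6)
    hidden = torch.randn(2, 16, 512)  # (batch, seq_len, d_model)
    print(model(hidden).shape)        # torch.Size([2, 16, 512])
    print(sum(p.numel() for p in model.parameters()))  # ~half an unshared 6-layer stack

Because the shared block is registered once, its weights are counted and stored once while still being applied at two depths; how the actual PSDP pairs or ties its decoders is detailed in the full paper.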

