MoEfication: Conditional Computation of Transformer Models for Efficient Inference

10/05/2021
by Zhengyan Zhang, et al.

Transformer-based pre-trained language models achieve superior performance on most NLP tasks thanks to their large parameter capacity, but this capacity also leads to huge computation cost. Fortunately, we find through empirical study that most inputs only activate a tiny fraction of neurons during inference. Hence, we explore accelerating large-model inference with conditional computation based on this sparse activation phenomenon. We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication. MoEfication consists of two steps: (1) splitting the parameters of the feed-forward neural networks (FFNs) into multiple parts as experts, and (2) building expert routers to decide which experts will be used for each input. To further improve the performance of MoEfied models, we can also fine-tune them on downstream tasks, namely parameter calibration. Experimental results show that MoEfied models can significantly reduce computation cost, e.g., only activating 20% of FFN parameters without significant performance degradation on several downstream tasks including text classification and reading comprehension.
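
To make the two-step procedure concrete, here is a minimal sketch of the MoEfied FFN idea described in the abstract, assuming a standard ReLU FFN y = W2 relu(W1 x): the intermediate neurons are partitioned into expert groups, and a small router selects which groups to compute for each token. The class name, the linear router, the contiguous neuron split, and the hyperparameters (num_experts, top_k) are illustrative assumptions for this sketch, not the authors' exact construction.

```python
# Minimal sketch of a MoEfied FFN layer (illustrative, not the paper's code).
# Assumptions: neurons are split into contiguous expert groups, and a simple
# linear router scores experts per token; only the top-k experts are computed.
import torch
import torch.nn as nn


class MoEfiedFFN(nn.Module):
    def __init__(self, d_model=768, d_ff=3072, num_experts=16, top_k=4):
        super().__init__()
        assert d_ff % num_experts == 0
        self.num_experts, self.top_k = num_experts, top_k
        d_expert = d_ff // num_experts
        # Each expert owns one slice of the original FFN's intermediate neurons.
        self.w1 = nn.Parameter(torch.randn(num_experts, d_expert, d_model) * 0.02)
        self.w2 = nn.Parameter(torch.randn(num_experts, d_model, d_expert) * 0.02)
        # Expert router: scores each expert for a given input token.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x):                                   # x: (batch, d_model)
        scores = self.router(x)                             # (batch, num_experts)
        top_idx = scores.topk(self.top_k, dim=-1).indices   # (batch, top_k)
        out = torch.zeros_like(x)
        for b in range(x.size(0)):          # naive per-token loop, for clarity
            for e in top_idx[b]:
                h = torch.relu(self.w1[e] @ x[b])           # (d_expert,)
                out[b] = out[b] + self.w2[e] @ h            # (d_model,)
        return out


if __name__ == "__main__":
    ffn = MoEfiedFFN()
    y = ffn(torch.randn(2, 768))
    print(y.shape)  # torch.Size([2, 768])
```

Because the selected experts' outputs are simply summed (not gated), computing all experts would exactly reproduce the original dense FFN; activating only the top-k experts is what reduces computation.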

