Supplementary Material: Implementation and Experiments for GAU-based Model

05/12/2022
by Zhenjie Liu, et al.

In February 2022, Google proposed a new Transformer variant called FLASH, which is faster, uses less VRAM, and performs better. This is achieved by a redesigned layer named GAU (Gated Attention Unit), which merges the attention layer and the FFN. In this paper, we re-analyze some of its implementation details both theoretically and empirically. We then propose a novel GAU-based model and pre-train it on a Chinese corpus. Results on the CLUE benchmark show that our model achieves a dev average score of 75.02, 1% higher than RoFormerV1 while being 45% faster, which is also competitive with RoFormerV2.
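To make the GAU idea concrete, below is a minimal sketch of the layer in PyTorch. It follows the description in the FLASH paper: a single shared projection Z is turned into queries and keys by per-dimension scales and offsets, attention uses a squared-ReLU kernel instead of softmax, and the attention output gates an expanded branch U, so no separate FFN is needed. The class name, layer sizes, and activation choices here are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GAU(nn.Module):
    """Sketch of a Gated Attention Unit (hypothetical, not the paper's code)."""

    def __init__(self, dim: int, qk_dim: int = 128, expansion: int = 2):
        super().__init__()
        hidden = dim * expansion
        self.to_u = nn.Linear(dim, hidden)   # gate branch
        self.to_v = nn.Linear(dim, hidden)   # value branch
        self.to_z = nn.Linear(dim, qk_dim)   # shared base for Q and K
        # per-dimension scale/offset that turns the shared Z into Q and K
        self.gamma = nn.Parameter(torch.ones(2, qk_dim))
        self.beta = nn.Parameter(torch.zeros(2, qk_dim))
        self.to_out = nn.Linear(hidden, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        seq_len = x.shape[1]
        shortcut = x
        x = self.norm(x)
        u = F.silu(self.to_u(x))             # (b, n, hidden)
        v = F.silu(self.to_v(x))             # (b, n, hidden)
        z = F.silu(self.to_z(x))             # (b, n, qk_dim)
        # cheap Q/K: one shared projection plus learned scale and offset
        q = z * self.gamma[0] + self.beta[0]
        k = z * self.gamma[1] + self.beta[1]
        # squared-ReLU attention instead of softmax
        scores = torch.einsum('bnd,bmd->bnm', q, k) / seq_len
        attn = F.relu(scores) ** 2
        out = torch.einsum('bnm,bme->bne', attn, v)
        # the attention output gates the U branch, replacing a separate FFN
        return shortcut + self.to_out(u * out)


if __name__ == "__main__":
    layer = GAU(dim=512)
    x = torch.randn(2, 64, 512)
    print(layer(x).shape)  # torch.Size([2, 64, 512])
```

Because a single GAU block covers the roles of both attention and FFN, a GAU-based model stacks more of these thinner blocks in place of the usual attention+FFN pairs, which is where the speed and memory savings come from.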

