Depth-Wise Attention (DWAtt): A Layer Fusion Method for Data-Efficient Classification

09/30/2022
by Muhammad ElNokrashy, et al.

Language models pretrained on large textual corpora have been shown to encode different types of knowledge simultaneously. Traditionally, only the features from the last layer are used when adapting to new tasks or data. We put forward that, when using or finetuning deep pretrained models, intermediate-layer features that may be relevant to the downstream task are buried too deep to be used efficiently in terms of needed samples or steps. To test this, we propose a new layer fusion method, Depth-Wise Attention (DWAtt), to help re-surface signals from non-final layers. We compare DWAtt to a basic concatenation-based layer fusion method (Concat), and compare both to a deeper model baseline, all kept within a similar parameter budget. Our findings show that DWAtt and Concat are more step- and sample-efficient than the baseline, especially in the few-shot setting. DWAtt outperforms Concat on larger data sizes. On CoNLL-03 NER, layer fusion shows a 3.68-9.73% F1 gain at different few-shot sizes. The layer fusion models presented significantly outperform the baseline in various training scenarios with different data sizes, architectures, and training constraints.
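
As a rough illustration of what a depth-wise layer fusion module might look like, the sketch below attends over the per-layer hidden states of a pretrained encoder at each token position and combines the result with the final-layer representation. It is a minimal sketch under assumptions: the module name, projection sizes, and the residual combination are illustrative and do not reproduce the paper's exact implementation.

```python
# Minimal sketch of depth-wise attention over layer outputs (illustrative only;
# names and architectural details are assumptions, not the paper's method).
import torch
import torch.nn as nn


class DepthWiseAttention(nn.Module):
    """Fuses hidden states from all encoder layers by attending over depth.

    For each token, the last layer's hidden state forms a query; every layer's
    hidden state at that position provides a key and a value. The attention-
    weighted sum over layers is combined with the final-layer representation.
    """

    def __init__(self, hidden_size: int, attn_size: int = 64):
        super().__init__()
        self.query = nn.Linear(hidden_size, attn_size)
        self.key = nn.Linear(hidden_size, attn_size)
        self.value = nn.Linear(hidden_size, hidden_size)
        self.scale = attn_size ** -0.5

    def forward(self, layer_states: torch.Tensor) -> torch.Tensor:
        # layer_states: (num_layers, batch, seq_len, hidden_size)
        last = layer_states[-1]                           # (batch, seq, hidden)
        q = self.query(last).unsqueeze(2)                 # (batch, seq, 1, attn)
        k = self.key(layer_states).permute(1, 2, 0, 3)    # (batch, seq, layers, attn)
        v = self.value(layer_states).permute(1, 2, 0, 3)  # (batch, seq, layers, hidden)

        # Attention weights over the depth (layer) axis for each token.
        scores = (q * k).sum(-1) * self.scale             # (batch, seq, layers)
        weights = scores.softmax(dim=-1).unsqueeze(-1)    # (batch, seq, layers, 1)
        fused = (weights * v).sum(dim=2)                  # (batch, seq, hidden)

        # Residual combination with the final layer's output (an assumption).
        return last + fused


# Hypothetical usage with an encoder that exposes all hidden states, e.g. a
# transformers model called with output_hidden_states=True:
#   states = torch.stack(outputs.hidden_states)   # (layers, batch, seq, hidden)
#   token_features = DepthWiseAttention(hidden_size=768)(states)
```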

Related research

12/01/2022 · Adapted Multimodal BERT with Layer-wise Fusion for Sentiment Analysis
Multimodal learning pipelines have benefited from the success of pretrai...

12/10/2021 · Pruning Pretrained Encoders with a Multitask Objective
The sizes of pretrained language models make them challenging and expens...

07/29/2020 · Compressing Deep Neural Networks via Layer Fusion
This paper proposes layer fusion - a model compression technique that di...

03/31/2022 · Misogynistic Meme Detection using Early Fusion Model with Graph Network
In recent years, there has been an upsurge in a new form of entertainme...

04/18/2021 · Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
When primed with only a handful of training samples, very large pretrain...

10/09/2022 · ASDOT: Any-Shot Data-to-Text Generation with Pretrained Language Models
Data-to-text generation is challenging due to the great variety of the i...

05/03/2022 · Adaptable Adapters
State-of-the-art pretrained NLP models contain a hundred million to tril...
