Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey

11/01/2021
by Bonan Min, et al.

Large, pre-trained transformer-based language models such as BERT have drastically changed the Natural Language Processing (NLP) field. We present a survey of recent work that uses these large language models to solve NLP tasks via pre-training then fine-tuning, prompting, or text generation approaches. We also present approaches that use pre-trained language models to generate data for training augmentation or other purposes. We conclude with discussions on limitations and suggested directions for future research.
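As a concrete illustration of two of the paradigms the abstract names, below is a minimal Python sketch using the Hugging Face transformers library. It is not code from the survey: the checkpoint (bert-base-uncased), the sentiment task, and the verbalizer words "good"/"bad" are illustrative assumptions.

# A minimal sketch (not code from the survey) contrasting two paradigms
# named in the abstract, using the Hugging Face transformers library.
# The checkpoint, the sentiment task, and the verbalizer words
# "good"/"bad" are illustrative assumptions.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    pipeline,
)

# Prompting: recast sentiment classification as cloze-style mask filling
# with an off-the-shelf masked language model; no parameters are updated.
fill = pipeline("fill-mask", model="bert-base-uncased")
prompt = "The movie was absolutely wonderful. Overall, it was [MASK]."
print(fill(prompt, targets=["good", "bad"]))  # score per verbalizer word

# Pre-train then fine-tune: the same checkpoint gets a classification
# head whose weights (and the encoder's) are updated on labeled data;
# the training loop itself is omitted here.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
clf = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
logits = clf(**tok("A great film!", return_tensors="pt")).logits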


Related research

06/22/2021 · A Comprehensive Exploration of Pre-training Language Models
  Recently, the development of pre-trained language models has brought nat...

02/17/2022 · A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models
  With the increase in model capacity brought by pre-trained language mo...

06/07/2020 · Pre-training Polish Transformer-based Language Models at Scale
  Transformer-based language models are now widely used in Natural Languag...

05/23/2022 · Prompt Tuning for Discriminative Pre-trained Language Models
  Recent works have shown promising results of prompt tuning in stimulatin...

10/14/2019 · Q8BERT: Quantized 8Bit BERT
  Recently, pre-trained Transformer-based language models such as BERT and...

04/21/2022 · Towards an Enhanced Understanding of Bias in Pre-trained Neural Language Models: A Survey with Special Emphasis on Affective Bias
  The remarkable progress in Natural Language Processing (NLP) brought abo...

03/30/2022 · TextPruner: A Model Pruning Toolkit for Pre-Trained Language Models
  Pre-trained language models have prevailed in natural language proc...
