research ∙ 03/19/2022
Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model
Recently, the problem of robustness of pre-trained language models (PrLM...
research ∙ 08/29/2021
Span Fine-tuning for Pre-trained Language Models
Pre-trained language models (PrLMs) have to carefully manage input units ...
research ∙ 05/30/2021
Defending Pre-trained Language Models from Adversarial Word Substitutions Without Performance Sacrifice
Pre-trained contextualized language models (PrLMs) have led to strong pe...
research ∙ 12/30/2020