A Survey on Out-of-Distribution Evaluation of Neural NLP Models

06/27/2023
by Xinzhe Li, et al.

Adversarial robustness, domain generalization, and dataset biases are three active lines of research contributing to out-of-distribution (OOD) evaluation of neural NLP models. However, the literature still lacks a comprehensive, integrated discussion of these three research lines. In this survey, we 1) compare the three lines of research under a unifying definition; 2) summarize the data-generating processes and evaluation protocols for each line of research; and 3) highlight the challenges and opportunities for future work.
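
As a rough illustration of the kind of OOD evaluation protocol the survey discusses, the sketch below trains a simple text classifier on one domain and reports accuracy on both an in-distribution test split and a held-out OOD domain. The datasets, labels, and model here are hypothetical placeholders, not the survey's own experimental setup.

```python
# Minimal sketch of an OOD evaluation protocol (hypothetical data and model,
# not the survey's own setup).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# In-distribution (ID) training/test data, e.g. movie reviews.
train_texts = ["a gripping, well-acted film", "dull plot and flat characters"]
train_labels = [1, 0]
id_test_texts = ["beautifully shot and moving", "a tedious, forgettable movie"]
id_test_labels = [1, 0]

# Out-of-distribution (OOD) test data from a shifted domain, e.g. product reviews.
ood_test_texts = ["the blender broke after two uses", "sturdy build and great battery life"]
ood_test_labels = [0, 1]

# Train a simple bag-of-words classifier on the ID training set.
vectorizer = TfidfVectorizer()
clf = LogisticRegression()
clf.fit(vectorizer.fit_transform(train_texts), train_labels)

# Evaluate on the ID and OOD test sets; the gap between the two scores is the
# quantity that adversarial-robustness, domain-generalization, and
# dataset-bias studies probe from different angles.
id_acc = accuracy_score(id_test_labels, clf.predict(vectorizer.transform(id_test_texts)))
ood_acc = accuracy_score(ood_test_labels, clf.predict(vectorizer.transform(ood_test_texts)))
print(f"ID accuracy:  {id_acc:.2f}")
print(f"OOD accuracy: {ood_acc:.2f}")
```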
