Exploring Euphemism Detection in Few-Shot and Zero-Shot Settings

10/24/2022
by Sedrick Scott Keh, et al.

This work builds on the Euphemism Detection Shared Task proposed in the EMNLP 2022 FigLang Workshop and extends it to few-shot and zero-shot settings. We formulate few-shot and zero-shot versions of the task using the shared-task dataset and conduct experiments in these settings with RoBERTa and GPT-3. Our results show that language models can classify euphemistic terms relatively well, even terms unseen during training, indicating that they capture higher-level concepts related to euphemisms.
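As a rough illustration of what a zero-shot formulation of euphemism detection can look like in practice, the sketch below uses the Hugging Face zero-shot-classification pipeline with an NLI-fine-tuned RoBERTa checkpoint to decide whether a sentence contains a euphemism. The checkpoint name, candidate labels, and example sentences are illustrative assumptions and do not reproduce the authors' actual prompts or experimental setup.

```python
# Minimal sketch: zero-shot euphemism classification with an NLI-based
# RoBERTa model via the Hugging Face zero-shot pipeline.
# The labels and sentences below are hypothetical examples, not the
# shared-task data or the paper's prompt wording.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="roberta-large-mnli")

examples = [
    "He passed away peacefully last night.",      # euphemistic ("passed away")
    "The company is downsizing its workforce.",   # euphemistic ("downsizing")
    "She walked to the store to buy groceries.",  # literal
]

candidate_labels = ["contains a euphemism", "literal language"]

for sentence in examples:
    result = classifier(sentence, candidate_labels)
    top_label, top_score = result["labels"][0], result["scores"][0]
    print(f"{sentence!r} -> {top_label} ({top_score:.2f})")
```

Framing the task as natural language inference against candidate labels is a common way to get zero-shot behavior from an encoder like RoBERTa without any task-specific fine-tuning; a GPT-3-style setup would instead express the same decision as a text prompt with a few in-context examples.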
