On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex

01/30/2023
by Terry Yue Zhuo, et al.

Semantic parsing constructs a structured representation of the meaning of a natural-language question. Recent few-shot language models trained on code have been shown to generate these representations more accurately than traditional unimodal language models fine-tuned on downstream tasks. Despite this progress, existing fine-tuned neural semantic parsers remain susceptible to adversarial attacks on their natural-language inputs. While the robustness of smaller semantic parsers can be improved through adversarial training, this approach is impractical for large language models in real-world scenarios: it requires both substantial computational resources and expensive human annotation of in-domain semantic parsing data. This paper presents the first empirical study of the adversarial robustness of a large prompt-based language model of code, Codex. Our results demonstrate that state-of-the-art (SOTA) code-language models are vulnerable to carefully crafted adversarial examples. To address this challenge, we propose methods for improving robustness without requiring large amounts of labeled data or heavy computational resources.
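To make the attack setting concrete, the sketch below is a minimal, hypothetical illustration rather than the paper's exact method: it builds a few-shot text-to-SQL prompt of the kind sent to a code model such as Codex, then applies a simple character-swap perturbation to the input question so that the parses of the clean and perturbed inputs can be compared. The prompt template, the example questions, and the swap_adjacent_chars helper are all assumptions made for illustration.

```python
import random

# Minimal sketch (hypothetical, not the paper's exact setup) of probing a
# prompt-based semantic parser with a character-level adversarial perturbation.

# Hypothetical few-shot text-to-SQL prompt template.
FEW_SHOT_PROMPT = """\
-- Translate natural-language questions into SQL.
-- Q: How many singers are there?
-- A: SELECT COUNT(*) FROM singer;
-- Q: {question}
-- A:"""


def swap_adjacent_chars(question: str, rng: random.Random) -> str:
    """Perturb one word by swapping two adjacent characters, a common
    character-level attack that keeps the question human-readable."""
    words = question.split()
    candidates = [i for i, w in enumerate(words) if len(w) > 3]
    if not candidates:
        return question
    i = rng.choice(candidates)
    chars = list(words[i])
    j = rng.randrange(len(chars) - 1)
    chars[j], chars[j + 1] = chars[j + 1], chars[j]
    words[i] = "".join(chars)
    return " ".join(words)


def build_prompt(question: str) -> str:
    """Insert the (possibly perturbed) question into the few-shot prompt."""
    return FEW_SHOT_PROMPT.format(question=question)


if __name__ == "__main__":
    rng = random.Random(0)
    clean = "How many concerts were held in 2014?"
    attacked = swap_adjacent_chars(clean, rng)
    # Both prompts would be sent to the code LLM; comparing the returned SQL
    # for the clean vs. perturbed question measures robustness to the attack.
    print(build_prompt(clean))
    print(build_prompt(attacked))
```

Under this setup, a robust parser should produce the same SQL for both prompts; a drop in accuracy on the perturbed questions is the vulnerability the study measures.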


