Transparency Helps Reveal When Language Models Learn Meaning

10/14/2022
by Zhaofeng Wu, et al.

Many current NLP systems are built from language models trained to optimize unsupervised objectives on large amounts of raw text. Under what conditions might such a procedure acquire meaning? Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations (i.e., languages with strong transparency), both autoregressive and masked language models successfully learn to emulate semantic relations between expressions. However, when denotations are changed to be context-dependent with the language otherwise unmodified, this ability degrades. Turning to natural language, our experiments with a specific phenomenon – referential opacity – add to the growing body of evidence that current language models do not represent natural language semantics well. We show that this failure relates to the context-dependent nature of natural language form-meaning mappings.
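To make the notion of strong transparency concrete, here is a minimal illustrative sketch, not the paper's actual synthetic language or experimental setup. It contrasts a toy language whose expressions have context-independent denotations with a variant whose denotations depend on surrounding bindings; the function names and the arithmetic language are hypothetical choices for illustration only.

```python
# Illustrative sketch only: a toy "strongly transparent" language, where every
# expression denotes the same value regardless of context, contrasted with a
# context-dependent variant. This is NOT the paper's synthetic language.

def denote_transparent(expr: str) -> int:
    """Context-independent denotation: an arithmetic expression without free
    variables means the same thing wherever it appears."""
    return eval(expr, {"__builtins__": {}})  # e.g. "2+3" -> 5 in any context

def denote_contextual(expr: str, context: dict) -> int:
    """Context-dependent denotation: the same surface form (e.g. "x+3") can
    mean different things depending on the surrounding variable bindings."""
    return eval(expr, {"__builtins__": {}}, dict(context))

# Two distinct surface forms that are semantically equivalent under the
# transparent semantics -- the kind of relation between expressions that the
# experiments probe whether a language model can emulate from raw text alone.
assert denote_transparent("2+3") == denote_transparent("1+4")

# Under the contextual semantics, whether "x+3" and "1+4" are equivalent
# depends on the context, so surface form alone no longer fixes meaning.
assert denote_contextual("x+3", {"x": 1}) == denote_contextual("1+4", {})
assert denote_contextual("x+3", {"x": 7}) != denote_contextual("1+4", {})
```

The point of the contrast is that in the transparent case a model could, in principle, learn semantic equivalence purely from distributional evidence over forms, whereas in the context-dependent case the mapping from form to meaning is underdetermined without the context.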
