A blind spot for large language models: Supradiegetic linguistic information

by Julia Witte Zimmerman et al.

Large Language Models (LLMs) like ChatGPT reflect profound changes in the field of Artificial Intelligence, achieving a linguistic fluency that is impressively, even shockingly, human-like. The extent of their current and potential capabilities is an active area of investigation by no means limited to scientific researchers. It is common for people to frame the training data for LLMs as "text" or even "language". We examine the details of this framing using ideas from several areas, including linguistics, embodied cognition, cognitive science, mathematics, and history. We propose that considering what it is like to be an LLM like ChatGPT, as Nagel might have put it, can help us gain insight into its capabilities in general, and in particular, that its exposure to linguistic training data can be productively reframed as exposure to the diegetic information encoded in language, and its deficits can be reframed as ignorance of extradiegetic information, including supradiegetic linguistic information. Supradiegetic linguistic information consists of those arbitrary aspects of the physical form of language that are not derivable from the one-dimensional relations of context – frequency, adjacency, proximity, co-occurrence – that LLMs like ChatGPT have access to. Roughly speaking, the diegetic portion of a word can be thought of as its function, its meaning, as the information in a theoretical vector in a word embedding, while the supradiegetic portion of the word can be thought of as its form, like the shapes of its letters or the sounds of its syllables. We use these concepts to investigate why LLMs like ChatGPT have trouble handling palindromes, the visual characteristics of symbols, translating Sumerian cuneiform, and continuing integer sequences.
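The form/function split described above can be made concrete with a minimal sketch. Assuming a toy greedy-longest-match subword tokenizer with a hypothetical vocabulary (not the tokenizer of any actual model), a character-level property like palindromicity is trivially computable from a word's form, yet that form is collapsed into opaque integer IDs before a model ever sees it:

```python
# Minimal sketch: a palindrome check reads a word's form (its letters),
# which is supradiegetic information. After subword tokenization, the
# model sees only opaque token IDs, from which letter order is not
# directly recoverable.

def is_palindrome(word: str) -> bool:
    """Character-level (form-based) check: reads the letters directly."""
    w = word.lower()
    return w == w[::-1]

# Toy subword vocabulary standing in for an LLM tokenizer
# (hypothetical; chosen only for this illustration).
VOCAB = {"rac": 101, "ecar": 102, "ban": 103, "ana": 104}

def tokenize(word: str) -> list[int]:
    """Greedy longest-match tokenization into opaque integer IDs."""
    ids, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                ids.append(VOCAB[word[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token for {word[i:]!r}")
    return ids

print(is_palindrome("racecar"))  # True: the letters are visible
print(tokenize("racecar"))       # [101, 102]: letter order is gone
```

The point of the sketch is only that the input to `is_palindrome` (the character string) and the input an LLM receives (the ID sequence) carry different information; statistics over ID co-occurrence do not by themselves expose the letter shapes or orderings the first function relies on.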


