Meaning without reference in large language models

by Steven T. Piantadosi et al.
UC Berkeley

The widespread success of large language models (LLMs) has been met with skepticism that they possess anything like human concepts or meanings. Contrary to claims that LLMs possess no meaning whatsoever, we argue that they likely capture important aspects of meaning, and moreover work in a way that approximates a compelling account of human cognition in which meaning arises from conceptual role. Because conceptual role is defined by the relationships between internal representational states, meaning cannot be determined from a model's architecture, training data, or objective function, but only by examination of how its internal states relate to each other. This approach may clarify why and how LLMs are so successful and suggest how they can be made more human-like.
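The abstract's central claim is that meaning lives in the *relationships* between internal states, not in any single representation. One common way to operationalize this (a minimal sketch, not the authors' method) is a second-order comparison: compute pairwise similarities among a model's embeddings, do the same for another system, and correlate the two similarity structures. The function names and the toy data below are illustrative assumptions.

```python
import numpy as np

def similarity_matrix(embeddings):
    """Pairwise cosine similarities among a set of representation vectors."""
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return unit @ unit.T

def conceptual_role_agreement(emb_a, emb_b):
    """Correlate the off-diagonal similarity structure of two systems.

    High agreement means the two systems place concepts in the same
    relational configuration, even if the coordinates themselves differ.
    """
    sim_a = similarity_matrix(emb_a)
    sim_b = similarity_matrix(emb_b)
    mask = ~np.eye(len(sim_a), dtype=bool)  # ignore self-similarities
    return np.corrcoef(sim_a[mask], sim_b[mask])[0, 1]

# Toy illustration: a rotated copy of an embedding space has entirely
# different coordinates but identical relational (conceptual-role) structure.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(5, 8))
rotation, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal matrix
emb_b = emb_a @ rotation
print(conceptual_role_agreement(emb_a, emb_b))
```

Because an orthogonal rotation preserves all inner products, the agreement score is (numerically) 1.0 here: this is the sense in which conceptual role can match across systems whose individual states look nothing alike.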




Large Language Models Converge on Brain-Like Word Representations

One of the greatest puzzles of all time is how understanding arises from...

Conceptual Organization is Revealed by Consumer Activity Patterns

Meaning may arise from an element's role or interactions within a larger...

On the Computation of Meaning, Language Models and Incomprehensible Horrors

We integrate foundational theories of meaning with a mathematical formal...

The Singleton Fallacy: Why Current Critiques of Language Models Miss the Point

This paper discusses the current critique against neural network-based N...

Testing AI performance on less frequent aspects of language reveals insensitivity to underlying meaning

Advances in computational methods and big data availability have recentl...

Language Models as Agent Models

Language models (LMs) are trained on collections of documents, written b...

Human-machine cooperation for semantic feature listing

Semantic feature norms, lists of features that concepts do and do not po...
