Why Linguistics Will Thrive in the 21st Century: A Reply to Piantadosi (2023)

by Jordan Kodner et al.

We present a critical assessment of Piantadosi's (2023) claim that "Modern language models refute Chomsky's approach to language," focusing on four main points. First, despite the impressive performance and utility of large language models (LLMs), humans achieve their capacity for language after exposure to several orders of magnitude less data. The fact that young children become competent, fluent speakers of their native languages with relatively little exposure to them is the central mystery of language learning to which Chomsky initially drew attention, and LLMs currently show little promise of solving this mystery. Second, what can the artificial reveal about the natural? Put simply, the implications of LLMs for our understanding of the cognitive structures and mechanisms underlying language and its acquisition are like the implications of airplanes for understanding how birds fly. Third, LLMs cannot constitute scientific theories of language for several reasons, not least of which is that scientific theories must provide interpretable explanations, not just predictions. This leads to our final point: to even determine whether the linguistic and cognitive capabilities of LLMs rival those of humans requires explicating what humans' capacities actually are. In other words, it requires a separate theory of language and cognition; generative linguistics provides precisely such a theory. As such, we conclude that generative linguistics as a scientific discipline will remain indispensable throughout the 21st century and beyond.


