An Enactivist account of Mind Reading in Natural Language Understanding

by Peter Wallis, et al.

In this paper we apply our understanding of the radical enactivist agenda to a classic AI-hard problem. Natural Language Understanding is a sub-field of AI research that looked easy to the pioneers. Thus the Turing Test, in its original form, assumed that the computer could use language and that the challenge was to fake human intelligence. It turned out that playing chess and doing formal logic were easy compared with the necessary language skills. The techniques of good old-fashioned AI (GOFAI) assume that symbolic representation is the core of reasoning and that human communication consists of transferring representations from one mind to another. On this model, however, one finds representations appearing in another's mind without appearing in the intermediary language: people communicate, it seems, by mind reading. Systems with speech interfaces such as Alexa and Siri are of course common, but they are limited. Rather than adding mind-reading skills, we introduced a "cheat" that enabled our systems to fake it. The cheat is simple, only slightly interesting to computer scientists, and not at all interesting to philosophers. However, on reading about the enactivist idea that we "directly perceive" the intentions of others, our cheat took on a new light, and in this paper we look again at how natural language understanding might actually work between humans.




