Developmental Network Two, Its Optimality, and Emergent Turing Machines

08/04/2022
by   Juyang Weng, et al.

Strong AI requires the learning engine to be task non-specific and to automatically construct a dynamic hierarchy of internal features. By hierarchy, we mean, e.g., that short road edges and short bush edges combine into intermediate features of landmarks, whereas intermediate features from tree shadows are distractors that the high-level landmark concept must disregard. By dynamic, we mean that the automatic selection of features while disregarding distractors is not static but driven by dynamic statistics (e.g., because shadows are unstable in the context of a landmark). By internal features, we mean features that are not only sensory but also motor, so that motor (state) context integrates with sensory inputs to form a context-based logic machine. We argue why strong AI is necessary for any practical AI system that works reliably in the real world. We then present a new generation of Developmental Networks, DN-2. Among the many novelties of DN-2 beyond DN-1, the most important is that the inhibition area of each internal neuron is neuron-specific and dynamic. This enables DN-2 to automatically construct a fluid internal hierarchy, whose number of areas is not static as in DN-1. To make optimal use of the limited resources available, we establish that DN-2 is optimal in the sense of maximum likelihood, under the conditions of limited learning experience and limited resources. We also show how DN-2 can learn an emergent Universal Turing Machine (UTM); together with the optimality result, this yields an optimal UTM. We tested DN-2 in experiments on real-world vision-based navigation, maze planning, and audition, which showed that DN-2 serves general purposes using both natural and synthetic inputs. The automatically constructed internal representations focus on important features while remaining invariant to distractors and other irrelevant context concepts.
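To make the central novelty concrete, below is a minimal, hypothetical sketch (not the paper's actual algorithm) of the kind of competition a neuron-specific, dynamic inhibition area implies: each neuron competes only against the neurons inside its own inhibition area, which can change from step to step, and it fires only if it ranks in the top-k of that area. The function name, list-based representation, and k parameter are illustrative assumptions.

```python
def dynamic_topk_competition(pre_responses, inhibition_areas, k=1):
    """Illustrative sketch of neuron-specific, dynamic inhibition.

    pre_responses:    list of pre-competition response values, one per neuron.
    inhibition_areas: inhibition_areas[i] lists the neuron indices (including
                      i itself) inside neuron i's own inhibition area; because
                      these lists can differ per neuron and change over time,
                      the competition is neuron-specific and dynamic.
    Returns the post-competition responses: a neuron keeps its response only
    if it is among the top-k inside its own area; losers are suppressed to 0.
    """
    fired = [0.0] * len(pre_responses)
    for i, area in enumerate(inhibition_areas):
        rivals = sorted((pre_responses[j] for j in area), reverse=True)
        threshold = rivals[min(k, len(rivals)) - 1]  # k-th largest in the area
        if pre_responses[i] >= threshold:
            fired[i] = pre_responses[i]
    return fired

# Two disjoint inhibition areas: {0, 1} and {2, 3}.
# Neuron 0 beats neuron 1 in its area; neuron 2 beats neuron 3 in its area.
pre = [0.9, 0.5, 0.8, 0.2]
areas = [[0, 1], [0, 1], [2, 3], [2, 3]]
print(dynamic_topk_competition(pre, areas))  # [0.9, 0.0, 0.8, 0.0]
```

Because the areas are per-neuron data rather than a fixed global partition, winners at one moment can fall inside a different competitive neighborhood at the next, which is what lets the hierarchy of areas remain fluid rather than statically layered.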

