Condition Integration Memory Network: An Interpretation of the Meaning of the Neuronal Design

05/21/2021
by Cheng Qian, et al.

This document introduces a hypothesized framework for the functional nature of primitive neural networks. It discusses the idea that the activity of neurons and synapses can symbolically reenact the dynamic changes of the world and thereby enable an adaptive system of behavior, and that the network achieves this without relying on an algorithmic structure. When a neuron's activation represents some symbolic element in the environment, each of its synapses can indicate a potential change to that element and its future state. The efficacy of a synaptic connection further specifies the element's probability of, or contribution to, such a change. A neuron's activation is transmitted to its postsynaptic targets when it fires, producing a chronological shift of the represented elements. Because a neuron's inherent summation integrates the various presynaptic contributions, the neural network as a whole mimics the collective causal relationships among events in the observed environment.
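The mechanism sketched in the abstract can be made concrete with a toy simulation. The following is a minimal, hypothetical Python sketch, not code from the paper: the class names (SymbolicNeuron, Synapse), the synchronous step update, and the weather example are all illustrative assumptions. Each neuron stands for one symbolic element, each synapse's efficacy encodes the probability of a transition into its target's state, and one tick lets firing transmit activation forward while built-in summation integrates the presynaptic contributions.

```python
from dataclasses import dataclass, field


@dataclass
class Synapse:
    """One potential change to the represented element.

    `efficacy` plays the role the abstract describes: the element's
    probability of, or contribution to, the transition into the
    target neuron's state.
    """
    target: "SymbolicNeuron"
    efficacy: float  # assumed to lie in [0, 1]


@dataclass
class SymbolicNeuron:
    """A neuron whose activation stands for one symbolic element of the world."""
    label: str
    activation: float = 0.0
    synapses: list[Synapse] = field(default_factory=list)
    incoming: float = 0.0  # accumulator for presynaptic contributions

    def connect(self, target: "SymbolicNeuron", efficacy: float) -> None:
        self.synapses.append(Synapse(target, efficacy))

    def fire(self) -> None:
        # Transmitting activation to postsynaptic targets enacts the
        # chronological shift of the represented elements.
        for s in self.synapses:
            s.target.incoming += self.activation * s.efficacy


def step(neurons: list[SymbolicNeuron]) -> None:
    """One synchronous tick: every neuron fires, then each neuron's
    built-in summation integrates the contributions it received."""
    for n in neurons:
        n.fire()
    for n in neurons:
        n.activation = n.incoming
        n.incoming = 0.0


# Hypothetical usage: a three-element causal chain "cloud -> rain -> wet ground".
cloud = SymbolicNeuron("cloud")
rain = SymbolicNeuron("rain")
wet = SymbolicNeuron("wet ground")
cloud.connect(rain, efficacy=0.7)  # clouds often lead to rain
rain.connect(wet, efficacy=0.9)    # rain almost always wets the ground

cloud.activation = 1.0             # the observed element: a cloud appears
step([cloud, rain, wet])           # tick 1: "rain" becomes active
step([cloud, rain, wet])           # tick 2: "wet ground" becomes active
print(f"{wet.activation:.2f}")     # ~0.63, the chained causal contribution
```

The two-phase update deliberately keeps firing and summation separate, mirroring the abstract's distinction between transmitting activation to postsynaptic targets and integrating the various presynaptic contributions within a neuron.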
