HTMRL: Biologically Plausible Reinforcement Learning with Hierarchical Temporal Memory

by Jakob Struye, et al.

Building Reinforcement Learning (RL) algorithms that can adapt to continuously evolving tasks is an open research challenge. One technology known to inherently handle such non-stationary input patterns well is Hierarchical Temporal Memory (HTM), a general and biologically plausible computational model of the human neocortex. As the RL paradigm is inspired by human learning, HTM is a natural framework for an RL algorithm supporting non-stationary environments. In this paper, we present HTMRL, the first strictly HTM-based RL algorithm. We show empirically and statistically that HTMRL scales to many states and actions, and demonstrate that HTM's ability to adapt to changing patterns extends to RL. Specifically, HTMRL performs well on a 10-armed bandit after 750 steps, but needs only a third of that to adapt when the bandit suddenly shuffles its arms. HTMRL is the first iteration of a novel RL approach, with the potential to extend into a capable algorithm for Meta-RL.
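The shuffled-bandit experiment described above can be illustrated with a minimal sketch. This is not the authors' implementation: the reward distributions, noise level, and shuffle schedule below are assumptions chosen only to show the non-stationarity the abstract refers to (a fixed arm-to-reward mapping that is permuted mid-run).

```python
import random

class ShufflingBandit:
    """A k-armed bandit whose arm-to-reward mapping can be permuted mid-run.

    Illustrative sketch only: the paper's exact reward distributions and
    shuffle schedule are not specified here; the uniform means and the
    Gaussian noise scale are assumptions.
    """

    def __init__(self, k=10, seed=0):
        self.rng = random.Random(seed)
        # Each arm gets a fixed mean reward; pulls add Gaussian noise.
        self.means = [self.rng.random() for _ in range(k)]

    def pull(self, arm):
        # Stochastic reward centered on the arm's (current) mean.
        return self.means[arm] + self.rng.gauss(0.0, 0.1)

    def shuffle(self):
        # The non-stationary event: permute which arm yields which mean,
        # invalidating whatever the agent has learned so far.
        self.rng.shuffle(self.means)


bandit = ShufflingBandit(k=10, seed=42)
best_before = max(range(10), key=lambda a: bandit.means[a])
bandit.shuffle()
best_after = max(range(10), key=lambda a: bandit.means[a])
```

An agent facing this environment must relearn the best arm after `shuffle()`; the abstract's claim is that HTMRL does so in roughly a third of the steps it needed to learn the original configuration.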

