Generalized Reinforcement Learning: Experience Particles, Action Operator, Reinforcement Field, Memory Association, and Decision Concepts
Learning a control policy under time-varying and evolving system dynamics poses a great challenge to mainstream reinforcement learning algorithms. In most standard methods, actions are assumed to be a rigid, fixed set of choices that are sequentially applied to the state space in a predefined manner. Consequently, the learned policy cannot adapt to variations in the action set or in the actions' "behavioral" outcomes without substantial re-learning. Moreover, the standard action representation and the action-induced state-transition mechanism inherently limit how reinforcement learning can be applied to complex, real-world problems, primarily because of the intractability of the resulting large state space and the lack of a facility for generalizing the learned policy to unknown parts of that space. This paper proposes a Bayesian-flavored generalized reinforcement learning framework. It first establishes the notion of a parametric action model to better cope with uncertainty and fluid action behaviors, and then introduces the reinforcement field, a physics-inspired construct built from "polarized experience particles" maintained in the learning agent's working memory. These particles encode the agent's evolving learning experience in a self-organizing way. On top of the reinforcement field, we further generalize the policy-learning process to incorporate high-level decision concepts by treating past memory as an implicit graph whose nodes, the memory instances (or particles), are interconnected through a similarity measure defined over decisions; the "associative memory" principle can then be applied to augment the learning agent's world model.
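To make the reinforcement-field idea more concrete, the sketch below illustrates one plausible reading of it: stored experience particles (state, parametric action, reward) induce a kernel-weighted scalar field over the state space that can be queried when selecting actions. The class names, fields, and Gaussian kernel are illustrative assumptions for exposition, not the paper's actual formulation.

```python
# A minimal sketch of a reinforcement field built from experience particles,
# assuming a Gaussian kernel over particle states. All names and the kernel
# choice are hypothetical; the paper's construction may differ.
from dataclasses import dataclass
import numpy as np


@dataclass
class ExperienceParticle:
    state: np.ndarray   # state in which the experience occurred
    action: np.ndarray  # parametric (continuous) action taken
    reward: float       # observed reinforcement ("polarity" and magnitude)


class ReinforcementField:
    """Kernel-weighted field induced by particles held in working memory."""

    def __init__(self, bandwidth: float = 1.0):
        self.particles: list[ExperienceParticle] = []
        self.bandwidth = bandwidth

    def add(self, particle: ExperienceParticle) -> None:
        """Store a new particle; memory grows and reshapes the field online."""
        self.particles.append(particle)

    def value(self, query_state: np.ndarray) -> float:
        """Estimate the field strength at a query state from nearby particles."""
        if not self.particles:
            return 0.0
        weights, rewards = [], []
        for p in self.particles:
            d2 = float(np.sum((p.state - query_state) ** 2))
            weights.append(np.exp(-d2 / (2.0 * self.bandwidth ** 2)))
            rewards.append(p.reward)
        weights = np.asarray(weights)
        return float(np.dot(weights, rewards) / (weights.sum() + 1e-12))


# Usage: accumulate particles online, then query the field to bias action selection.
field = ReinforcementField(bandwidth=0.5)
field.add(ExperienceParticle(np.array([0.0, 0.0]), np.array([1.0]), reward=+1.0))
field.add(ExperienceParticle(np.array([2.0, 0.0]), np.array([0.5]), reward=-1.0))
print(field.value(np.array([0.5, 0.0])))  # positive: closer to the +1 particle
```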