Deep Metric Tensor Regularized Policy Gradient

by Gang Chen, et al.

Policy gradient algorithms are an important family of deep reinforcement learning techniques. Much past research has focused on using first-order policy gradient information to train policy networks. In contrast, the work in this paper is driven by the belief that properly utilizing and controlling the Hessian information associated with the policy gradient can noticeably improve the performance of policy gradient algorithms. One key piece of Hessian information that attracted our attention is the Hessian trace, which gives the divergence of the policy gradient vector field in the Euclidean policy parameter space. We set out to generalize this Euclidean parameter space into a general Riemannian manifold by introducing a metric tensor field g_ab in the parameter space. This is achieved through newly developed mathematical tools, deep learning algorithms, and metric tensor deep neural networks (DNNs). Armed with these developments, we propose a new policy gradient algorithm that learns to minimize the absolute divergence on the Riemannian manifold as an important regularization mechanism, allowing the manifold to smooth its policy gradient vector field. The new algorithm is experimentally studied on several benchmark reinforcement learning problems. Our experiments clearly show that it can significantly outperform its counterpart without metric tensor regularization. Additional experimental analysis further suggests that the trained metric tensor DNN and the corresponding metric tensor g_ab can effectively drive the absolute divergence toward zero on the Riemannian manifold.
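The abstract's key observation is that, in Euclidean parameter space, the divergence of the policy gradient vector field equals the Hessian trace of the loss. A minimal sketch of how that quantity could be estimated in practice is shown below, using Hutchinson's stochastic trace estimator in PyTorch. The toy network, the stand-in surrogate loss, and the estimator itself are illustrative assumptions on our part, not the authors' actual implementation or their metric tensor DNN.

```python
# Sketch: estimate tr(H), i.e. the divergence of the gradient vector
# field in Euclidean parameter space, via Hutchinson's estimator
# E[v^T H v] with Rademacher probe vectors v. The policy network and
# the squared-output "loss" are hypothetical stand-ins for a real
# policy gradient surrogate loss.
import torch

torch.manual_seed(0)

policy = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.Tanh(), torch.nn.Linear(16, 2)
)
params = [p for p in policy.parameters() if p.requires_grad]

states = torch.randn(32, 4)
loss = policy(states).pow(2).mean()  # stand-in for a PG surrogate loss

# First-order gradients with a graph, so second derivatives are available.
grads = torch.autograd.grad(loss, params, create_graph=True)

def hutchinson_trace(grads, params, n_samples=10):
    """Estimate tr(H) as an average of v^T H v over Rademacher vectors v."""
    est = 0.0
    for _ in range(n_samples):
        # Rademacher probes: entries uniformly in {-1, +1}.
        vs = [torch.randint_like(p, 2) * 2.0 - 1.0 for p in params]
        gv = sum((g * v).sum() for g, v in zip(grads, vs))
        # Hessian-vector product H v via a second backward pass.
        hvs = torch.autograd.grad(gv, params, retain_graph=True)
        est += sum((hv * v).sum() for hv, v in zip(hvs, vs)).item()
    return est / n_samples

div = hutchinson_trace(grads, params)
# |div| could then be added to the training objective as a regularizer,
# analogous in spirit to the absolute-divergence penalty described above.
```

Note that this only covers the Euclidean case; the paper's contribution is to replace the flat metric with a learned g_ab, under which the divergence acquires metric-dependent correction terms.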




