Joint Optimization of Multi-Objective Reinforcement Learning with Policy Gradient Based Algorithm
Many engineering problems involve multiple objectives, where the overall aim is to optimize a non-linear function of these objectives. In this paper, we formulate the problem of maximizing a non-linear concave function of multiple long-term objectives, and propose a policy-gradient-based model-free algorithm for it. Because the objective is a non-linear function of expected returns, an unbiased sample-based gradient is not readily available, so a biased gradient estimator is proposed. The proposed algorithm is shown to converge to within ϵ of the global optimum after sampling 𝒪(M^4 σ^2 / ((1-γ)^8 ϵ^4)) trajectories, where γ is the discount factor and M is the number of agents, thus matching the dependence on ϵ of the policy-gradient algorithm for standard reinforcement learning.
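To make the setup concrete, the following is a minimal sketch of the idea the abstract describes: estimate per-objective discounted returns J_1, ..., J_M from sampled trajectories, scalarize them with a concave f, and update the policy with the chain-rule gradient ∇f(J) = Σ_m (∂f/∂J_m) ∇J_m. The environment interface, the softmax tabular policy, the choice f(J) = Σ_m log J_m, and all hyperparameters are illustrative assumptions, not the paper's construction; note that plugging the empirical J into ∂f/∂J_m is exactly what makes the estimator biased.

```python
# A sketch of multi-objective policy gradient with a concave scalarization f.
# Assumes a hypothetical env with env.reset()/env.step(a) returning an
# M-dimensional reward vector and an env.num_objectives attribute.
import numpy as np

def sample_trajectory(env, theta, gamma, horizon=200):
    """Roll out one episode; return states, actions, and discounted return per objective."""
    s = env.reset()
    states, actions, rewards = [], [], []
    for _ in range(horizon):
        logits = theta[s]
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        a = np.random.choice(len(probs), p=probs)
        s_next, r_vec, done = env.step(a)  # r_vec: one reward per objective
        states.append(s); actions.append(a); rewards.append(np.asarray(r_vec))
        s = s_next
        if done:
            break
    # Discounted return vector G, with one component per objective.
    G = sum((gamma ** t) * r for t, r in enumerate(rewards))
    return states, actions, G

def policy_gradient_step(env, theta, gamma, lr=0.01, batch=32):
    """One ascent step on f(J_1, ..., J_M); here f(J) = sum_m log J_m (concave, illustrative)."""
    grad = np.zeros_like(theta)
    J = np.zeros(env.num_objectives)  # empirical per-objective returns
    trajs = []
    for _ in range(batch):
        states, actions, G = sample_trajectory(env, theta, gamma)
        trajs.append((states, actions, G))
        J += G / batch
    # Chain rule: grad f(J) = sum_m (df/dJ_m) * grad J_m. Using the empirical J
    # inside df/dJ makes this a biased estimator, since f is non-linear.
    df_dJ = 1.0 / (J + 1e-8)  # gradient of f(J) = sum_m log J_m (assumes J > 0)
    for states, actions, G in trajs:
        score = np.zeros_like(theta)  # sum over t of grad log pi(a_t | s_t)
        for s, a in zip(states, actions):
            logits = theta[s]
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()
            score[s] -= probs
            score[s, a] += 1.0
        # REINFORCE-style term, weighted by the scalar df/dJ . G
        grad += score * float(df_dJ @ G) / batch
    return theta + lr * grad
```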