Double Deep Q-Learning in Opponent Modeling

11/24/2022
by Yangtianze Tao, et al.

Opponent modeling is needed in multi-agent systems where secondary agents with conflicting goals also adapt their strategies. In this study, we model the policies of both the main agent and the secondary agents using Double Deep Q-Networks (DDQN) with a prioritized experience replay mechanism. Under the opponent-modeling setup, a Mixture-of-Experts architecture is then used to recognize distinct opponent strategy patterns. Finally, we evaluate our models in two multi-agent environments. The results show that the Mixture-of-Experts model based on opponent modeling outperforms plain DDQN.
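As a minimal sketch of the Double DQN idea the abstract builds on: the online network selects the next action while the target network evaluates it, which reduces the overestimation bias of standard Q-learning. The function name, array shapes, and sample values below are illustrative, not taken from the paper.

```python
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Compute Double DQN regression targets for a batch of transitions.

    rewards:       shape (B,)   immediate rewards
    next_q_online: shape (B, A) Q-values of the online network at s'
    next_q_target: shape (B, A) Q-values of the target network at s'
    dones:         shape (B,)   1.0 if the episode ended at s', else 0.0
    """
    # Online network picks the greedy action in s'...
    best_actions = np.argmax(next_q_online, axis=1)
    # ...but the target network provides its value estimate.
    next_values = next_q_target[np.arange(len(rewards)), best_actions]
    # Terminal transitions bootstrap to zero.
    return rewards + gamma * (1.0 - dones) * next_values
```

In a full training loop these targets would be regressed against the online network's Q-values for the stored actions, with transitions drawn via prioritized experience replay (sampling probability proportional to the TD error).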
