Abstract
Q-learning can provide a robust and natural means for agents to learn how to coordinate their action choices in multi-agent systems. We examine some of the factors that can influence the dynamics of the learning process in such a setting. We first distinguish reinforcement learners that are unaware of (or ignore) the presence of other agents from those that explicitly attempt to learn the value of joint actions and the strategies of their counterparts. We study Q-learning in cooperative multi-agent systems under these two perspectives, focusing on the convergence to Nash equilibrium. We propose a novel exploration strategy to increase the likelihood of convergence to an optimal equilibrium.
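To make the distinction concrete, the following is a minimal illustrative sketch (not the paper's experimental setup) of independent Q-learners in a single-state cooperative game: each agent keeps Q-values over its own actions only and ignores its counterpart, updating toward the shared reward. The payoff matrix, learning rate, and epsilon-greedy exploration schedule here are hypothetical; a joint-action learner would instead index its Q-table by the full joint action (a1, a2) and model the other agent's strategy.

```python
import random

# Hypothetical 2x2 cooperative payoff matrix: both agents receive the
# same reward, determined by the joint action (row = agent 1, col = agent 2).
PAYOFF = [[10, 0],
          [0, 5]]

ALPHA, EPSILON, EPISODES = 0.1, 0.1, 5000

def select(q):
    """Epsilon-greedy action selection over a list of Q-values."""
    if random.random() < EPSILON:
        return random.randrange(len(q))
    return max(range(len(q)), key=lambda a: q[a])

# Independent learners: each agent's Q-table ranges over its OWN actions
# only, so the other agent's behavior is folded into the reward signal.
q1, q2 = [0.0, 0.0], [0.0, 0.0]
for _ in range(EPISODES):
    a1, a2 = select(q1), select(q2)
    r = PAYOFF[a1][a2]          # shared (team) reward
    # Single-state Q-update: move each agent's estimate toward r.
    q1[a1] += ALPHA * (r - q1[a1])
    q2[a2] += ALPHA * (r - q2[a2])
```

Note that with independent learners nothing guarantees convergence to the optimal joint action; the equilibrium reached depends on the exploration strategy, which is the sensitivity the abstract's proposed exploration scheme targets.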
- Publication date: 2000
- Affiliation: Shanghai Jiao Tong University