Emergence of distributed coordination in multi-agent games through reinforcement learning

Event Details
<strong>School of Computational &amp; Integrative Sciences</strong>
<strong>"Emergence of distributed coordination in multi-agent games through reinforcement learning"</strong>
<strong>Dr. Anindya S. Chakrabarti</strong>
Assistant Professor, Economics Area, Indian Institute of Management, Vastrapur, Ahmedabad-380015
Date: <strong>8/8/2016</strong>
<strong>Abstract:</strong> I will discuss two multi-agent coordination games. The first is a generalization of the standard minority game; the second is a game of equilibrium selection. Both games are played by a large number of agents with finite information sets. Reinforcement learning provides a simple class of strategies that solves both games and, in fact, outperforms previously proposed solutions in the literature. The second game characterizes the emergence of a single dominant attribute out of a large number of competitors. Formally, N agents repeatedly play a coordination game that has exactly N Nash equilibria, all of which are equally preferred by the agents. The problem is to select one equilibrium out of the N possible equilibria in the least number of attempts. We propose a number of heuristic rules based on reinforcement learning to solve the coordination problem. We find that the agents self-organize into clusters of varying intensities depending on the heuristic rule applied, although in most cases all clusters but one are transitory. Finally, we characterize a trade-off between the time required to achieve a degree of stability in strategies and the efficiency of the resulting solution.
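The minority game mentioned in the abstract can be illustrated with a minimal sketch. This is not the speaker's model; it is a generic illustration assuming a simple propensity-based reinforcement rule (Roth–Erev style), in which each agent chooses between two actions in proportion to accumulated propensities and only agents on the minority side are reinforced. The parameter names and update rule here are illustrative assumptions.

```python
import random

def minority_game(n_agents=101, n_rounds=2000, lr=0.1, seed=0):
    """Minimal minority game with propensity-based reinforcement learning.

    n_agents is odd so every round has a strict minority. Each agent
    keeps a propensity for each of two actions and picks an action with
    probability proportional to its propensity; agents who end up on
    the minority side have that action reinforced by lr.
    Returns the variance of attendance (number of agents choosing
    action 1), a standard measure of coordination in minority games.
    """
    rng = random.Random(seed)
    # props[i] = [propensity for action 0, propensity for action 1]
    props = [[1.0, 1.0] for _ in range(n_agents)]
    attendances = []
    for _ in range(n_rounds):
        actions = []
        for p in props:
            total = p[0] + p[1]
            actions.append(0 if rng.random() < p[0] / total else 1)
        count1 = sum(actions)
        # the minority action is the one chosen by fewer agents
        minority = 1 if count1 < n_agents - count1 else 0
        attendances.append(count1)
        for p, a in zip(props, actions):
            if a == minority:
                p[a] += lr  # reinforce the winning (minority) action
    mean = sum(attendances) / len(attendances)
    return sum((x - mean) ** 2 for x in attendances) / len(attendances)
```

As coordination emerges, agents sort themselves into persistent roles and the attendance variance falls relative to agents choosing by independent coin flips, which is one way the self-organization described in the abstract can be observed numerically.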