Learning in mean field games.

Authors
Publication date
2018
Publication type
Thesis
Summary Mean-field games (MFG) are a class of differential games in which each agent is infinitesimal and interacts with a huge population of other agents. In this thesis, we raise the question of how the MFG equilibrium actually forms. Since the game is very complex, it is unrealistic to assume that agents can actually compute the equilibrium configuration; this suggests that if the equilibrium configuration arises, it is because the agents have learned to play the game. The main question is therefore to find learning procedures in mean-field games and to analyze their convergence to an equilibrium. Inspired by learning schemes in static games, we apply them to our dynamic MFG model. We focus in particular on applications of fictitious play and online mirror descent to different classes of mean field games: potential, monotone, or discrete.
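As an illustration only (not taken from the thesis), the static-game version of fictitious play mentioned above can be sketched in a few lines: each round, both players best-respond to the opponent's empirical action frequencies. The sketch below uses zero-sum rock-paper-scissors, where the empirical frequencies are known (Robinson's theorem for zero-sum games) to converge to the unique mixed equilibrium (1/3, 1/3, 1/3); the game, step count, and tie-breaking rule are choices made for this example.

```python
# Hypothetical sketch of fictitious play on zero-sum rock-paper-scissors.
# Payoff matrix for player 1: rows = P1's action, columns = P2's action,
# actions ordered rock, paper, scissors.
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

def best_response(payoffs):
    """Index of the maximal entry (first one on ties)."""
    return max(range(len(payoffs)), key=lambda i: payoffs[i])

def fictitious_play(A, n_steps=20000):
    """Each round, both players best-respond to the opponent's past
    empirical play; return both empirical action frequencies."""
    n, m = len(A), len(A[0])
    counts1, counts2 = [1] * n, [1] * m  # one pseudo-count to initialize
    for _ in range(n_steps):
        # Expected payoff of each P1 action against P2's empirical play
        # (unnormalized counts suffice, since argmax is scale-invariant).
        u1 = [sum(A[i][j] * counts2[j] for j in range(m)) for i in range(n)]
        # P1's expected payoff for each P2 action; P2 minimizes it (zero-sum).
        u2 = [sum(A[i][j] * counts1[i] for i in range(n)) for j in range(m)]
        counts1[best_response(u1)] += 1
        counts2[best_response([-u for u in u2])] += 1
    t1, t2 = sum(counts1), sum(counts2)
    return [c / t1 for c in counts1], [c / t2 for c in counts2]

p1, p2 = fictitious_play(A)
# Both empirical mixtures approach (1/3, 1/3, 1/3).
```

Note that actual play cycles through the pure actions in ever-longer runs; it is only the time-averaged (empirical) play that converges, which is exactly the notion of convergence studied for fictitious play in the thesis's MFG setting.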