Science Alert
Journal of Applied Sciences
  Year: 2009 | Volume: 9 | Issue: 11 | Page No.: 2056-2066
DOI: 10.3923/jas.2009.2056.2066

Multiagent Reinforcement Learning in Extensive Form Games with Perfect Information

A. Akramizadeh, A. Afshar and Mohammad-B Menhaj

In this study, Q-learning is extended to multiagent systems in which a ranking in action selection is imposed among several self-interested agents. The learning process is regarded as a sequence of situations modeled as extensive form games with perfect information. Each agent decides on its actions, within the subgames the higher-level agents have already decided on, based on preferences that are in turn affected by the lower-level agents' preferences. These modified Q-values, called associative Q-values, estimate the utilities attainable over a subgame with respect to the lower-level agents' game preferences. A kind of social convention can thus be established in extensive form games, providing a better way to deal with multiplicity of equilibrium points and reducing computational complexity relative to normal form games. The resulting process, called an extensive Markov game, is proved to be a kind of generalized Markov decision process. A comprehensive review of the related concepts and definitions previously developed for normal form games is also provided, along with analytical discussions of convergence and the computation space. A numerical example affords further elaboration of the proposed method.
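The abstract's core idea can be illustrated with a minimal sketch (not the authors' code): two agents with a fixed ranking, where the higher-level agent moves first and the lower-level agent best-responds in the resulting subgame. The higher agent's "associative Q-value" for an action is the payoff it expects given the lower agent's currently preferred reply. All payoffs, action sets, and parameter values below are hypothetical, chosen only to show the update scheme.

```python
import random

ACTIONS = [0, 1]
ALPHA = 0.1  # learning rate (illustrative value)

# Q1[a0]: higher agent's associative value of leading with a0.
# Q2[(a0, a1)]: lower agent's value of replying a1 in the subgame rooted at a0.
Q1 = {a: 0.0 for a in ACTIONS}
Q2 = {(a0, a1): 0.0 for a0 in ACTIONS for a1 in ACTIONS}

# Hypothetical stage-game payoffs (u_high, u_low), indexed by (a0, a1).
PAYOFF = {
    (0, 0): (3.0, 2.0), (0, 1): (0.0, 0.0),
    (1, 0): (0.0, 0.0), (1, 1): (2.0, 3.0),
}

def best_reply(a0):
    # Lower agent's preferred reply in the subgame the higher agent selected.
    return max(ACTIONS, key=lambda a1: Q2[(a0, a1)])

def step(epsilon=0.1):
    # Higher agent: epsilon-greedy on its associative Q-values.
    a0 = random.choice(ACTIONS) if random.random() < epsilon \
        else max(ACTIONS, key=lambda a: Q1[a])
    # Lower agent: epsilon-greedy within the chosen subgame.
    a1 = random.choice(ACTIONS) if random.random() < epsilon else best_reply(a0)
    r_high, r_low = PAYOFF[(a0, a1)]
    # Ordinary Q-update for the lower agent in its subgame.
    Q2[(a0, a1)] += ALPHA * (r_low - Q2[(a0, a1)])
    # Associative update: value of a0 given the lower agent's preference.
    r_assoc, _ = PAYOFF[(a0, best_reply(a0))]
    Q1[a0] += ALPHA * (r_assoc - Q1[a0])

random.seed(0)
for _ in range(5000):
    step()
```

Under these payoffs the lower agent learns to answer a0=0 with a1=0, so the higher agent's associative value of leading with action 0 approaches 3 and it settles on that lead, which is the subgame-perfect outcome of this toy game.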
How to cite this article:

A. Akramizadeh, A. Afshar and Mohammad-B Menhaj, 2009. Multiagent Reinforcement Learning in Extensive Form Games with Perfect Information. Journal of Applied Sciences, 9: 2056-2066.

DOI: 10.3923/jas.2009.2056.2066
