Communications in Information and Systems

Volume 7 (2007), Number 1

A survey of some simulation-based algorithms for Markov decision processes

Pages: 59–92

DOI: http://dx.doi.org/10.4310/CIS.2007.v7.n1.a4

Authors

Hyeong Soo Chang

Michael C. Fu

Jiaqiao Hu

Steven I. Marcus

Abstract

Many problems modeled by Markov decision processes (MDPs) have very large state and/or action spaces, leading to the well-known curse of dimensionality, which renders the resulting models computationally intractable. In other cases, the system of interest is complex enough that it is not feasible to explicitly specify some of the MDP model parameters, but simulated sample paths can be readily generated (e.g., for random state transitions and rewards), albeit at a non-trivial computational cost. For these settings, we have developed various sampling- and population-based numerical algorithms to overcome the computational difficulties of computing an optimal solution in terms of a policy and/or value function. Specific approaches presented in this survey include multi-stage adaptive sampling, evolutionary policy iteration, and evolutionary random policy search.
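To make the setting concrete, the sketch below illustrates the simulation-based regime the abstract describes: the MDP model is available only through a simulator that returns sampled (next state, reward) pairs, values are estimated from simulated sample paths, and an upper-confidence-bound rule adaptively allocates a fixed simulation budget across actions. This is a minimal illustration of the general idea, not the authors' algorithms; the toy three-state MDP, the reward rule, the rollout policy, and the UCB allocation are all assumptions made for the example.

```python
import math
import random

def simulate(state, action, rng):
    """Black-box simulator for a hypothetical toy MDP (states 0..2,
    actions 0..1): returns one sampled (next_state, reward) pair.
    The transition and reward rules here are illustrative only."""
    next_state = rng.randrange(3)
    reward = 1.0 if action == state % 2 else 0.0
    return next_state, reward

def rollout_value(state, policy, horizon, rng):
    """Monte Carlo estimate of a policy's discounted value from one
    simulated sample path of the given horizon."""
    total = 0.0
    for t in range(horizon):
        action = policy(state)
        state, reward = simulate(state, action, rng)
        total += 0.95 ** t * reward  # discount factor 0.95 (assumed)
    return total

def ucb_sampled_action(state, horizon, budget, rng):
    """Adaptively allocate a simulation budget across the two actions,
    sampling the action with the largest upper confidence bound, then
    return the action with the best empirical Q-value estimate."""
    counts = [0, 0]
    sums = [0.0, 0.0]
    for n in range(budget):
        if 0 in counts:
            a = counts.index(0)  # sample each action at least once
        else:
            # empirical mean plus an exploration bonus (UCB-style)
            a = max((0, 1), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2.0 * math.log(n) / counts[i]))
        next_state, r = simulate(state, a, rng)
        # one-sample Q estimate: immediate reward plus a discounted rollout
        q = r + 0.95 * rollout_value(next_state, lambda s: s % 2,
                                     horizon - 1, rng)
        counts[a] += 1
        sums[a] += q
    return max((0, 1), key=lambda i: sums[i] / counts[i])

rng = random.Random(0)
best_action = ucb_sampled_action(0, horizon=10, budget=50, rng=rng)
```

Under the assumed reward rule, action 0 is optimal in state 0, and the adaptive allocation concentrates most of the 50 simulations on it; the same budget spread uniformly would waste half its samples on the inferior action.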

Keywords

(adaptive) sampling; Markov decision process; population-based algorithms
