Communications in Mathematical Sciences

Volume 21 (2023)

Number 8

Operator shifting for model-based policy evaluation

Pages: 2169 – 2193

DOI: https://dx.doi.org/10.4310/CMS.2023.v21.n8.a5

Authors

Xun Tang (Institute for Computational and Mathematical Engineering, Stanford University, Stanford, California, U.S.A.)

Lexing Ying (Department of Mathematics and Institute for Computational and Mathematical Engineering, Stanford University, Stanford, California, U.S.A.)

Yuhua Zhu (Department of Mathematics and Halicioğlu Data Science Institute, University of California at San Diego, La Jolla, California, U.S.A.)

Abstract

In model-based reinforcement learning, the transition matrix and reward vector are often estimated from random samples subject to noise. Even if the estimated model is an unbiased estimate of the true underlying model, the value function computed from the estimated model is biased. We introduce an operator shifting method for reducing the error introduced by the estimated model. When the error is measured in the residual norm, we prove that the shifting factor is always positive and upper bounded by $1+O(1/n)$, where $n$ is the number of samples used in learning each row of the transition matrix. We also propose a practical numerical algorithm for implementing the operator shifting.
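To make the setting concrete, the following is a minimal Python sketch of the plug-in bias and of one simple shifting scheme: the plug-in value function $\hat{v} = (I - \gamma \hat{P})^{-1} r$ is scaled by a single factor $\lambda$ fitted via a parametric bootstrap. The toy MDP, the values of $\gamma$ and $n$, and the bootstrap fit of $\lambda$ are all illustrative assumptions and do not reproduce the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy policy-evaluation problem (illustrative assumptions throughout).
S, gamma, n = 5, 0.9, 50
P = rng.dirichlet(np.ones(S), size=S)      # true transition matrix (row-stochastic)
r = rng.uniform(size=S)                    # reward vector (treated as known here)
A = np.eye(S) - gamma * P
v_true = np.linalg.solve(A, r)

# Each row of P is estimated from n sampled transitions: unbiased for P ...
counts = np.array([rng.multinomial(n, P[s]) for s in range(S)])
P_hat = counts / n
A_hat = np.eye(S) - gamma * P_hat

# ... yet the plug-in value function is biased, since P -> (I - gamma P)^{-1} r
# is nonlinear in P.
v_hat = np.linalg.solve(A_hat, r)

# Shifting sketch: scale the plug-in solution by a scalar lam. For a known
# true operator A, the residual-norm-optimal factor for lam * v_hat would be
# lam* = <A v_hat, r> / ||A v_hat||^2; since A is unknown, lam is fitted by a
# parametric bootstrap in which (P_hat, r) plays the role of the true model.
B = 200
lams = []
for _ in range(B):
    counts_b = np.array([rng.multinomial(n, P_hat[s]) for s in range(S)])
    A_b = np.eye(S) - gamma * (counts_b / n)
    v_b = np.linalg.solve(A_b, r)          # bootstrap plug-in solution
    Av = A_hat @ v_b                       # A_hat stands in for the true operator
    lams.append((Av @ r) / (Av @ Av))
lam = float(np.mean(lams))                 # per the abstract, the optimal factor
                                           # is positive and at most 1 + O(1/n)

v_shift = lam * v_hat
print("error (plug-in):", np.linalg.norm(v_hat - v_true))
print("error (shifted):", np.linalg.norm(v_shift - v_true))
print("fitted shifting factor:", lam)
```

The sketch treats the reward vector as known for simplicity, whereas the abstract allows it to be estimated as well; the bootstrap fit of $\lambda$ is one plausible instantiation of the residual-norm criterion, not the paper's method.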

Keywords

operator shifting, model-based reinforcement learning, policy evaluation, noisy matrices

2010 Mathematics Subject Classification

15B51, 90C40

Received 4 October 2022

Received revised 7 February 2023

Accepted 2 March 2023

Published 15 November 2023