A Theory of Markovian Time-inconsistent Stochastic Control in Discrete Time

Tomas Björk, Agatha Murgoci

Publication: Contribution to journal › Journal article › Research › peer review

Abstract

We develop a theory for a general class of discrete-time stochastic control problems that, in various ways, are time-inconsistent in the sense that they do not admit a Bellman optimality principle. We attack these problems by viewing them within a game theoretic framework, and we look for subgame perfect Nash equilibrium points. For a general controlled Markov process and a fairly general objective functional, we derive an extension of the standard Bellman equation, in the form of a system of nonlinear equations, for the determination of the equilibrium strategy as well as the equilibrium value function. Most known examples of time-inconsistent stochastic control problems in the literature are easily seen to be special cases of the present theory. We also prove that for every time-inconsistent problem, there exists an associated time-consistent problem such that the optimal control and the optimal value function for the consistent problem coincide with the equilibrium control and value function, respectively, for the time-inconsistent problem. To exemplify the theory, we study some concrete examples, such as hyperbolic discounting and mean–variance control.
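To make the game-theoretic recursion concrete, the sketch below works out the hyperbolic-discounting special case mentioned in the abstract: a finite-horizon, finite-state Markov decision problem with quasi-hyperbolic (beta-delta) preferences, solved by backward induction for the subgame perfect (sophisticated) equilibrium. This is an illustrative reading of the approach under simplifying assumptions, not the paper's general objective functional; the arrays P and u and the parameters beta, delta, T are hypothetical placeholders, not the authors' notation.

```python
import numpy as np

def sophisticated_equilibrium(P, u, beta, delta, T):
    """
    Backward-induction sketch of a subgame perfect (sophisticated) equilibrium
    for a finite-state, finite-action MDP with quasi-hyperbolic (beta-delta)
    discounting -- an illustrative special case, not the paper's general setup.

    P : array of shape (A, S, S), P[a, s, s'] = transition probability
    u : array of shape (S, A), per-period reward
    beta, delta : quasi-hyperbolic discount parameters
    T : horizon (periods 0, ..., T-1)

    Returns the equilibrium policy (T, S) and the auxiliary continuation
    value W (exponentially discounted value along the equilibrium play).
    """
    A, S, _ = P.shape
    W = np.zeros(S)                    # continuation value after the last period
    policy = np.zeros((T, S), dtype=int)

    for n in reversed(range(T)):
        # The time-n self discounts the whole future by the extra factor beta
        # and best-responds to the (fixed) strategies of its future selves.
        Q_dev = u.copy()               # shape (S, A)
        for a in range(A):
            Q_dev[:, a] += beta * delta * P[a] @ W
        policy[n] = Q_dev.argmax(axis=1)

        # Auxiliary value W_n: continuation utility of the equilibrium play,
        # discounted exponentially (without beta), used by earlier selves.
        W_new = np.empty(S)
        for s in range(S):
            a = policy[n, s]
            W_new[s] = u[s, a] + delta * P[a, s] @ W
        W = W_new

    return policy, W
```

The point this sketch is meant to illustrate is that two value objects are carried backwards: the deviation value that the current self maximizes (with the extra factor beta) and an auxiliary continuation value evaluated without beta along the equilibrium play. In the paper's general framework this separation is what turns the single Bellman equation into a system of nonlinear equations.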
Original language: English
Journal: Finance and Stochastics
Volume: 18
Issue number: 3
Pages (from-to): 545-592
ISSN: 0949-2984
DOI
Status: Published - 2014

Keywords

  • Time consistency
  • Time inconsistency
  • Time-inconsistent control
  • Dynamic programming
  • Stochastic control
  • Bellman equation
  • Hyperbolic discounting
  • Mean–variance
