Abstract
We develop a theory for a general class of discrete-time stochastic control problems that, in various ways, are time-inconsistent in the sense that they do not admit a Bellman optimality principle. We attack these problems by viewing them within a game-theoretic framework, and we look for subgame perfect Nash equilibrium points. For a general controlled Markov process and a fairly general objective functional, we derive an extension of the standard Bellman equation, in the form of a system of nonlinear equations, for the determination of the equilibrium strategy as well as the equilibrium value function. Most known examples of time-inconsistent stochastic control problems in the literature are easily seen to be special cases of the present theory. We also prove that for every time-inconsistent problem there exists an associated time-consistent problem such that the optimal control and the optimal value function for the consistent problem coincide with the equilibrium control and value function, respectively, for the time-inconsistent problem. To exemplify the theory, we study some concrete examples, such as hyperbolic discounting and mean–variance control.
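In the hyperbolic discounting case mentioned in the abstract, the game-theoretic viewpoint reduces to a backward recursion in which the time-n player best-responds to the fixed strategies of all later players. Below is a minimal sketch of that recursion for a toy quasi-hyperbolic (β–δ) discounting problem on a finite Markov chain; all model data (states, transitions, rewards, β, δ) are illustrative assumptions, and the code is a simplified instance of the idea, not the paper's exact equation system.

```python
# Sketch: subgame perfect equilibrium by backward induction for a toy
# quasi-hyperbolic (beta-delta) discounting problem. All model data are
# illustrative assumptions, not taken from the paper.
import numpy as np

N = 5                      # horizon: players act at times 0, ..., N-1
S, A = 2, 2                # two states, two actions
beta, delta = 0.6, 0.95    # discount: phi(0) = 1, phi(k) = beta * delta**k

rng = np.random.default_rng(0)
# P[a][x, y] = transition probability from state x to y under action a
P = rng.dirichlet(np.ones(S), size=(A, S))
# r[x, a] = immediate reward in state x under action a
r = rng.uniform(0.0, 1.0, size=(S, A))

def phi(k):
    """Quasi-hyperbolic discount weight for a lag of k periods."""
    return 1.0 if k == 0 else beta * delta**k

# profiles[m][x] = expected stage reward m + 1 steps ahead, under the
# equilibrium continuation strategies, given state x at the next time.
profiles = []          # empty at the terminal time
policy = [None] * N    # equilibrium strategy, filled in backwards

for n in reversed(range(N)):
    pi_n = np.zeros(S, dtype=int)
    for x in range(S):
        best_a, best_v = 0, -np.inf
        for a in range(A):
            # player n's objective: immediate reward plus the discounted
            # stream generated by the *future* players' fixed strategies
            v = r[x, a] + sum(
                phi(m + 1) * P[a][x] @ profiles[m]
                for m in range(len(profiles))
            )
            if v > best_v:
                best_a, best_v = a, v
        pi_n[x] = best_a
    policy[n] = pi_n
    # push the continuation profiles one step back, conditioning on the
    # time-n state and the equilibrium action chosen there
    Pn = np.array([P[pi_n[x]][x] for x in range(S)])
    profiles = [r[np.arange(S), pi_n]] + [Pn @ h for h in profiles]

# equilibrium value function at time 0: V_0(x) = sum_m phi(m) * R_m(x)
V0 = np.array([sum(phi(m) * profiles[m][x] for m in range(N))
               for x in range(S)])
print("equilibrium strategy per time step:", policy)
print("equilibrium value function V_0:", V0)
```

Because φ(k) is not exponential, the strategy computed this way generally differs from the naive dynamic-programming solution; each time-n player takes the later players' strategies as given rather than re-optimizing the whole tail, which is exactly the equilibrium notion the abstract describes.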
Original language | English |
---|---|
Journal | Finance and Stochastics |
Volume | 18 |
Issue number | 3 |
Pages (from-to) | 545-592 |
ISSN | 0949-2984 |
DOI | |
Status | Published - 2014 |
Keywords
- Time consistency
- Time inconsistency
- Time-inconsistent control
- Dynamic programming
- Stochastic control
- Bellman equation
- Hyperbolic discounting
- Mean–variance