A Theory of Markovian Time-inconsistent Stochastic Control in Discrete Time

Tomas Björk, Agatha Murgoci

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

We develop a theory for a general class of discrete-time stochastic control problems that, in various ways, are time-inconsistent in the sense that they do not admit a Bellman optimality principle. We attack these problems by viewing them within a game theoretic framework, and we look for subgame perfect Nash equilibrium points. For a general controlled Markov process and a fairly general objective functional, we derive an extension of the standard Bellman equation, in the form of a system of nonlinear equations, for the determination of the equilibrium strategy as well as the equilibrium value function. Most known examples of time-inconsistent stochastic control problems in the literature are easily seen to be special cases of the present theory. We also prove that for every time-inconsistent problem, there exists an associated time-consistent problem such that the optimal control and the optimal value function for the consistent problem coincide with the equilibrium control and value function, respectively, for the time-inconsistent problem. To exemplify the theory, we study some concrete examples, such as hyperbolic discounting and mean–variance control.
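To make the equilibrium concept concrete, the following is a minimal sketch of the kind of backward recursion the abstract alludes to, specialized to the quasi-hyperbolic discounting example with discount function φ(0) = 1 and φ(t) = βδ^t for t ≥ 1. The notation is illustrative and not taken from the paper: C is an assumed running reward, F a terminal reward, X the controlled Markov process, and player n controls only the action u_n, taking the strategies of players n+1, ..., N-1 as given.

% Sketch under the assumptions stated above; not the paper's notation.
% W_n is the ordinary delta-discounted value of the equilibrium strategy
% \hat{u}; W_n, V_n and \hat{u}_n are coupled, so this is a system of
% equations rather than a single Bellman recursion.
\[
  \hat{u}_n(x) \in \arg\max_{u}\,
    \Bigl\{ C(x,u) + \beta\delta\,\mathbb{E}^{u}_{n,x}\bigl[ W_{n+1}(X_{n+1}) \bigr] \Bigr\},
\]
\[
  W_n(x) = C\bigl(x,\hat{u}_n(x)\bigr)
         + \delta\,\mathbb{E}^{\hat{u}_n}_{n,x}\bigl[ W_{n+1}(X_{n+1}) \bigr],
  \qquad W_N(x) = F(x),
\]
\[
  V_n(x) = C\bigl(x,\hat{u}_n(x)\bigr)
         + \beta\delta\,\mathbb{E}^{\hat{u}_n}_{n,x}\bigl[ W_{n+1}(X_{n+1}) \bigr],
  \qquad V_N(x) = F(x).
\]

Because β < 1 scales all future periods uniformly relative to the present, maximizing the right-hand side of the first equation is not the same as maximizing W_n itself, which is exactly why the ordinary Bellman principle fails. The mean–variance example is time-inconsistent for a different structural reason: in an objective of the form E_{n,x}[X_N^u] - (γ/2) Var_{n,x}(X_N^u), the variance term is a nonlinear function of a conditional expectation and does not satisfy the tower property.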
Language: English
Journal: Finance and Stochastics
Volume: 18
Issue number: 3
Pages: 545-592
ISSN: 0949-2984
DOI: 10.1007/s00780-014-0234-y
Publisher: Springer
State: Published - 2014

Keywords

Time consistency, Time inconsistency, Time-inconsistent control, Dynamic programming, Stochastic control, Bellman equation, Hyperbolic discounting, Mean–variance
    Cite this

Björk, Tomas; Murgoci, Agatha. "A Theory of Markovian Time-inconsistent Stochastic Control in Discrete Time." In: Finance and Stochastics, 2014, Vol. 18, No. 3, pp. 545-592. doi:10.1007/s00780-014-0234-y