When Inaccuracies in Value Functions do not Propagate on Optima and Equilibria

Agnieszka Wiszniewska-Matyszkiel, Rajani Singh

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

We study general classes of discrete-time dynamic optimization problems and dynamic games with feedback controls. In such problems, the solution is usually found via the Bellman or Hamilton-Jacobi-Bellman equation for the value function in the case of dynamic optimization, or a system of such coupled equations in the case of dynamic games, and these equations cannot always be solved accurately. We derive general rules stating which kinds of errors in the calculation or computation of the value function do not lead to errors in the resulting optimal control or Nash equilibrium along the corresponding trajectory. This general result covers not only errors caused by numerical methods but also errors caused by preliminary assumptions in which the actual value function is replaced by a priori assumed constraints on certain subsets of the state space. We illustrate the results with a motivating example of the Fish Wars, which has singularities in the payoffs.
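
The abstract refers to computing value functions from the Bellman equation V(x) = max_c { u(x, c) + beta * V(f(x, c)) }. The sketch below is a minimal, illustrative value iteration on a discretized state space for a Fish Wars-style model with logarithmic utility and Cobb-Douglas regrowth; the parameters and discretization are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

# Discounted dynamic optimization: V(x) = max_c { log(c) + beta * V((x - c)**alpha) }.
# Classic Fish Wars-style specification; parameters are illustrative only.
beta, alpha = 0.95, 0.5                 # discount factor, regrowth exponent (assumed)
grid = np.linspace(1e-3, 1.0, 200)      # discretized resource stock levels
V = np.zeros_like(grid)                 # initial guess for the value function

for _ in range(500):                    # Bellman (value) iteration
    V_new = np.empty_like(V)
    for i, x in enumerate(grid):
        c = np.linspace(1e-4, x - 1e-4, 100)                  # feasible catches (controls)
        values = np.log(c) + beta * np.interp((x - c) ** alpha, grid, V)
        V_new[i] = values.max()                               # Bellman operator at state x
    if np.max(np.abs(V_new - V)) < 1e-8:                      # approximate fixed point reached
        break
    V = V_new
```

Because the iteration stops at an approximate fixed point on a grid, the computed value function carries errors of exactly the kind the paper analyzes: the question is which such inaccuracies leave the recovered optimal control (or equilibrium) unchanged along the resulting trajectory.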
Original language: English
Article number: 1109
Journal: Mathematics
Volume: 8
Issue number: 7
Number of pages: 24
ISSN: 2227-7390
DOIs
Publication status: Published - Jul 2020

Keywords

  • Optimal control
  • Dynamic programming
  • Bellman equation
  • Dynamic games
  • Nash equilibria
  • Pareto optimality
  • Value function
  • Approximate solution
  • Singularity
  • Fish Wars
