|
5 | 5 | "id": "96dfe427-942a-47e9-8f1f-91854989b8c8", |
6 | 6 | "metadata": {}, |
7 | 7 | "source": [ |
8 | | - "# 3) Agent and standard notions of extensive form games\n", |
| 8 | + "# Agent and standard notions of extensive form games\n", |
9 | 9 | "\n", |
10 | 10 | "The purpose of this tutorial is to explain the notions of `MixedBehaviorProfile.agent_max_regret` and `MixedBehaviorProfile.agent_liap_value`, and the corresponding solvers `Gambit.nash.enumpure_agent_solve` and `Gambit.nash.liap_agent_solve`. These notions are only relevant for *extensive-form games*, and so `agent_max_regret` and \n", |
11 | 11 | "`agent_liap_value` are only available for `MixedBehaviorProfile`s and not for `MixedStrategyProfile`s." |
|
23 | 23 | "A player's regret is 0 if they are playing a mixed (including pure) best response; otherwise it is positive and \n", |
24 | 24 | "is the difference between the best-response payoff (achievable via a pure strategy) against the other players' strategies and the payoff the player actually gets in this profile.\n",
25 | 25 | "\n", |
26 | | - "Let's see an example." |
| 26 | + "Let's see an example taken from [Myerson (1991)](#references)." |
27 | 27 | ] |
28 | 28 | }, |
29 | 29 | { |
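The notion of regret described above can be sketched in a few lines of plain Python for a two-player strategic-form game. This is an illustrative sketch only: the helper names `expected_payoff` and `max_regret` are made up here and are not the Gambit API; the example game is Matching Pennies.

```python
# Illustrative sketch of "regret" (not the Gambit API): a player's regret
# is the best pure-response payoff against the opponent's mixed strategy,
# minus the payoff the player actually gets in the profile.

def expected_payoff(payoffs, own, opp):
    """Expected payoff of `own` mixed strategy against `opp` mixed strategy."""
    return sum(own[i] * opp[j] * payoffs[i][j]
               for i in range(len(own)) for j in range(len(opp)))

def max_regret(payoffs, own, opp):
    """Best pure-response payoff against `opp`, minus the actual payoff."""
    best = max(sum(opp[j] * payoffs[i][j] for j in range(len(opp)))
               for i in range(len(payoffs)))
    return best - expected_payoff(payoffs, own, opp)

# Row player's payoffs in Matching Pennies (row wants to match).
mp = [[1, -1], [-1, 1]]
print(max_regret(mp, [0.5, 0.5], [0.5, 0.5]))  # 0.0: uniform is a best response
print(max_regret(mp, [1.0, 0.0], [0.0, 1.0]))  # 2.0: exploitable pure profile
```

At the mixed equilibrium the regret is 0, as the text says; in the second call the row player earns -1 but could earn 1 by switching rows, giving regret 2.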
|
375 | 375 | "id": "c88d08e2-33bf-48ad-b71f-4a0c19929fdc", |
376 | 376 | "metadata": {}, |
377 | 377 | "source": [ |
378 | | - "The method `Gambit.nash.liap_solve` essentially looks for *local* minima of the function from profiles to the Liapunov value. The set of Nash equilibria are exactly the *global* minima of this function, which is why `liap_solve` may not return a Nash equilibrium." |
| 378 | + "As we have seen, both the maximum regret and Liapunov value of a profile are non-negative and zero if and only if the profile is a Nash equilibrium. When positive, one can think of both notions as describing how close one is to an equilibrium.\n", |
| 379 | + "\n", |
| 380 | + "Based on this idea, the method `Gambit.nash.liap_solve` looks for *local* minima of the function from profiles to the Liapunov value. The set of Nash equilibria are exactly the *global* minima of this function, where the value is 0, but `liap_solve` may terminate at a non-global, local minimum, which is not a Nash equilibrium." |
379 | 381 | ] |
380 | 382 | }, |
381 | 383 | { |
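The Liapunov value being minimised can likewise be sketched in plain Python for a two-player strategic-form game. This is an assumption-laden illustration, mirroring McKelvey's Liapunov function (sum of squared positive payoff improvements from pure deviations) rather than calling the Gambit API; the function name `liap_value` is ours.

```python
# Illustrative sketch (not the Gambit API): for each player and each pure
# strategy, take the amount by which deviating to that strategy would
# improve on the current expected payoff, clip at zero, square, and sum.
# The resulting value is non-negative, and 0 exactly at a Nash equilibrium.

def liap_value(pay_row, pay_col, p, q):
    total = 0.0
    # Row player: expected payoff, then per-row deviation payoffs.
    u = sum(p[i] * q[j] * pay_row[i][j] for i in range(2) for j in range(2))
    for i in range(2):
        dev = sum(q[j] * pay_row[i][j] for j in range(2))
        total += max(0.0, dev - u) ** 2
    # Column player, symmetrically.
    v = sum(p[i] * q[j] * pay_col[i][j] for i in range(2) for j in range(2))
    for j in range(2):
        dev = sum(p[i] * pay_col[i][j] for i in range(2))
        total += max(0.0, dev - v) ** 2
    return total

# Matching Pennies: row wants to match, column wants to mismatch.
row = [[1, -1], [-1, 1]]
col = [[-1, 1], [1, -1]]
print(liap_value(row, col, [0.5, 0.5], [0.5, 0.5]))  # 0.0 at the equilibrium
print(liap_value(row, col, [1.0, 0.0], [1.0, 0.0]))  # 4.0 off equilibrium
```

A gradient-based minimiser started from the second profile can follow this function downhill, but, as the text notes, it is only guaranteed to reach a local minimum, which need not be a global minimum with value 0.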
|
667 | 669 | "id": "c4eeb65f", |
668 | 670 | "metadata": {}, |
669 | 671 | "source": [ |
670 | | - "To conclude, we note that, for most use cases, the standard non-agent versions are probably what a user wants. The agent versions have applications in the area of \"equilibrium refinements\"; for more details see [Myerson (1991)](#references)." |
| 672 | + "To conclude, we note that, for most use cases, the standard non-agent versions are probably what a user wants. The agent versions have applications in the area of \"equilibrium refinements\", in particular for \"sequential equilibria\"; for more details see Chapter 4, \"Sequential Equilibria of Extensive-Form Games\", in [Myerson (1991)](#references)." |
671 | 673 | ] |
672 | 674 | }, |
673 | 675 | { |
|