|
413 | 413 | "id": "e67d9926-d19d-4745-a406-a3c1198a8484", |
414 | 414 | "metadata": {}, |
415 | 415 | "source": [ |
416 | | - "Since the maximum regret and therefore Liapunov value are both positive, the starting profile is not a Nash equilibrium\n", |
417 | | - "and we expect `liap_solve` to return a different profile, which will hopefully, but not necessarily by a Nash equilibrium, depending on whether the solver finding a global minimum, or a non-global local minimum." |
|  | 416 | + "It is a useful exercise to check that you can compute these values of the maximum regret and Liapunov value yourself; a good starting point is to compute the reduced strategic form."
| 417 | + ] |
| 418 | + }, |
| 419 | + { |
| 420 | + "cell_type": "markdown", |
| 421 | + "id": "e799eded-c6e1-4a3e-80cb-953c52627762", |
| 422 | + "metadata": {}, |
| 423 | + "source": [ |
|  | 424 | + "Returning to `liap_solve`: since the maximum regret and therefore the Liapunov value are both positive, the starting profile is not a Nash equilibrium, and we expect `liap_solve` to return a different profile, which will hopefully, but not necessarily, be a Nash equilibrium, depending on whether the solver finds a global minimum or a non-global local minimum."
418 | 425 | ] |
419 | 426 | }, |
420 | 427 | { |
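As background for what `liap_solve` minimizes, the two quantities above can be sketched in a few self-contained lines. The sketch below takes the Liapunov value to be the sum of squared positive parts of the per-strategy regrets; the 2x2 bimatrix game and the profiles are illustrative assumptions, not the game from this notebook.

```python
import numpy as np

# Illustrative 2x2 bimatrix game (a Battle-of-the-Sexes variant);
# these payoffs are assumptions for this sketch, not the notebook's game.
A = np.array([[3.0, 0.0], [0.0, 2.0]])  # row player's payoffs
B = np.array([[2.0, 0.0], [0.0, 3.0]])  # column player's payoffs

def max_regret_and_liap(x, y):
    """Maximum regret and Liapunov value of the mixed profile (x, y)."""
    u1, u2 = x @ A @ y, x @ B @ y        # expected payoffs at the profile
    r1 = np.maximum(A @ y - u1, 0.0)     # row player's strategy regrets
    r2 = np.maximum(x @ B - u2, 0.0)     # column player's strategy regrets
    return max(r1.max(), r2.max()), float((r1 ** 2).sum() + (r2 ** 2).sum())

# Uniform mixing is not an equilibrium: both values are positive.
mr, lv = max_regret_and_liap(np.array([0.5, 0.5]), np.array([0.5, 0.5]))

# The mixed equilibrium (3/5, 2/5) vs (2/5, 3/5) drives both values to zero.
mr_eq, lv_eq = max_regret_and_liap(np.array([0.6, 0.4]), np.array([0.4, 0.6]))
```

A solver such as `liap_solve` can be thought of as numerically minimizing this Liapunov function over profiles; a profile is a Nash equilibrium exactly when the minimum value 0 is attained.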
|
544 | 551 | "source": [ |
545 | 552 | "# Agent maximum regret versus standard maximum regret\n", |
546 | 553 | "\n", |
547 | | - "Now we can introduce the \"agent\" versions of these two notions, maximum regret and the Liapunov value.\n", |
548 | | - "\n", |
549 | | - "Both notions relate to what Myerson (1991) called the \"multi-agent representation\" of an extensive form game, in which each information set is treated as an individual \"agent\". The \"agent maximum regret\" is then either 0 or the largest over the \"action regrets\" if that is positive.\n", |
|  | 554 | + "Now we can introduce the \"agent\" versions of both notions, maximum regret and the Liapunov value. The \"agent\" versions relate to what [Myerson (1991)](#references) called the \"multi-agent representation\" of an extensive form game, in which each information set is treated as an individual \"agent\". The \"agent maximum regret\" is then either 0 (if every information set has regret 0, i.e. `infoset_regret` is 0), or it is the largest of the information set regrets, which is then necessarily positive.\n",
550 | 555 | "\n", |
551 | 556 | "The maximum regret of a profile is at least as large as the agent maximum regret. \n",
552 | | - "The reason it can be larger is because under the standard notion a player may control multiple information sets and can deviate by changing actions at more than one of these information sets at once, whereas for agent maximum regret, only an \"agent\" deviation at a single information set is allowed.\n", |
|  | 557 | + "In short, the reason it cannot be smaller is that the standard notion considers all possible deviations of a given player,\n",
|  | 558 | + "including those that require changing behavior at multiple information sets at once.\n",
|  | 559 | + "The agent maximum regret, by contrast, only considers deviations at a single information set at a time, by treating each such information set as an \"agent\".\n",
| 560 | + "\n", |
553 | 561 | "Thus, **if the maximum regret is 0, then we have a Nash equilibrium, and the agent maximum regret will be 0 too**.\n", |
554 | 562 | "However, **there are examples where a profile has agent maximum regret of 0 but positive maximum regret**, so the profile is \n", |
555 | 563 | "not a Nash equilibrium.\n", |
|
572 | 580 | "output_type": "stream", |
573 | 581 | "text": [ |
574 | 582 | "2\n", |
575 | | - "[[[[Rational(1, 1), Rational(0, 1)], [Rational(0, 1), Rational(1, 1)]], [[Rational(0, 1), Rational(1, 1)]]], [[[Rational(0, 1), Rational(1, 1)], [Rational(0, 1), Rational(1, 1)]], [[Rational(1, 1), Rational(0, 1)]]]]\n" |
| 583 | + "[[[Rational(1, 1), Rational(0, 1)], [Rational(0, 1), Rational(1, 1)]], [[Rational(0, 1), Rational(1, 1)]]]\n", |
| 584 | + "[[[Rational(0, 1), Rational(1, 1)], [Rational(0, 1), Rational(1, 1)]], [[Rational(1, 1), Rational(0, 1)]]]\n" |
576 | 585 | ] |
577 | 586 | } |
578 | 587 | ], |
579 | 588 | "source": [ |
580 | 589 | "pure_agent_equilibria = gbt.nash.enumpure_agent_solve(g).equilibria\n", |
581 | 590 | "print(len(pure_agent_equilibria))\n", |
582 | | - "print(pure_agent_equilibria)" |
| 591 | + "for agent_eq in pure_agent_equilibria:\n", |
| 592 | + " print(agent_eq)" |
583 | 593 | ] |
584 | 594 | }, |
585 | 595 | { |
|
621 | 631 | }, |
622 | 632 | { |
623 | 633 | "cell_type": "code", |
624 | | - "execution_count": 13, |
| 634 | + "execution_count": 15, |
625 | 635 | "id": "85760cec-5760-4f9d-8ca2-99fba79c7c3c", |
626 | 636 | "metadata": {}, |
627 | 637 | "outputs": [ |
628 | 638 | { |
629 | 639 | "name": "stdout", |
630 | 640 | "output_type": "stream", |
631 | 641 | "text": [ |
632 | | - "1 1\n", |
633 | | - "0 0\n" |
| 642 | + "Max regret: 1\n", |
| 643 | + "Liapunov value: 1\n", |
|  | 644 | + "Agent max regret: 0\n",
| 645 | + "Agent Liapunov value: 0\n" |
634 | 646 | ] |
635 | 647 | } |
636 | 648 | ], |
637 | 649 | "source": [ |
638 | 650 | "aeq = pure_agent_equilibria[1]\n", |
639 | | - "print(aeq.max_regret(), aeq.liap_value())\n", |
640 | | - "print(aeq.agent_max_regret(), aeq.agent_liap_value())" |
| 651 | + "print(\"Max regret:\", aeq.max_regret())\n", |
| 652 | + "print(\"Liapunov value:\", aeq.liap_value())\n", |
|  | 653 | + "print(\"Agent max regret:\", aeq.agent_max_regret())\n",
| 654 | + "print(\"Agent Liapunov value:\", aeq.agent_liap_value())" |
| 655 | + ] |
| 656 | + }, |
| 657 | + { |
| 658 | + "cell_type": "markdown", |
| 659 | + "id": "a42f18d7-5fb4-4a45-9afd-76a63477ef1d", |
| 660 | + "metadata": {}, |
| 661 | + "source": [ |
|  | 662 | + "It is a useful exercise to confirm that the pure profile `pure_agent_equilibria[1]` indeed has these values of the agent and standard maximum regret and Liapunov value."
641 | 663 | ] |
642 | 664 | }, |
643 | 665 | { |
644 | 666 | "cell_type": "markdown", |
645 | 667 | "id": "c4eeb65f", |
646 | 668 | "metadata": {}, |
647 | 669 | "source": [ |
648 | | - "For most use cases, the non-agent versions are probably what a user wants." |
|  | 670 | + "To conclude, we note that for most use cases the standard non-agent versions are probably what a user wants. The agent versions have applications in the area of \"equilibrium refinements\"; for more details, see [Myerson (1991)](#references)."
649 | 671 | ] |
650 | 672 | }, |
651 | 673 | { |
|