Conversation
I'll take a look at this later today or early tomorrow.
Made various changes to Adrian's notebook to make the method work as intended. Most of it had to do with selecting the grids properly, with some work on function representation. Should run without issue now.
@AMonninger I just pushed a commit that revises the notebook. The method works as intended now, and displays comparisons at the bottom. Important changes that I put in (there might be more):
- You either need to implement an artificial borrowing constraint OR include a tiny probability of zero income. I put in UnempPrb=1e-10. Without this, the minimum allowable bNrm will vary between the two methods.
- The mNrmGrid and bNrmGrid had incorrect offsets. This is critical.
- Small, but helped with debugging: use different grid sizes for b and m so you can verify dimensions.
- Did some variable relabeling to make it more accurate.
- You were post-multiplying, not pre-multiplying.
- Need to use the "pseudo-inverse trick" when representing the interim marginal value function.
- The function also needs a point added at 0 (or fix this to generalize it).

The consumption functions constructed by the two integration methods now match as expected. The one using the new integration method is always *a little lower* than the old one, especially in the highly concave region of the cFunc. This is because the new method incorporates more uncertainty than the old one: as long as the theoretical probability of getting to some m from some b is positive, it is included in the expectation. That's not the case with a fixed set of integration nodes.

Take a look at the notebook and let me know if you want to meet to discuss. I didn't fix all typos or style issues.

Cool! Looking forward to hearing about this. Friday?
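The effect described above (a fixed set of integration nodes omits low-probability tail events that a full-density integration keeps) can be illustrated with a small, self-contained sketch. All parameter values here are hypothetical, and this is not the notebook's actual code:

```python
import numpy as np
from scipy import stats

s = 0.4  # hypothetical std of the log transitory shock
theta_dist = stats.lognorm(s, scale=np.exp(-0.5 * s**2))  # mean-one shock

def marg_util(c, rho=2.0):
    """CRRA marginal utility, a convex function of consumption."""
    return c ** -rho

# Old method: expectation over a fixed set of equiprobable quantile nodes
n_nodes = 7
quantiles = np.linspace(0.0, 1.0, n_nodes + 2)[1:-1]  # interior quantiles
nodes = theta_dist.ppf(quantiles)
E_fixed = marg_util(1.0 + nodes).mean()

# New method: integrate marginal utility against the full density on a dense grid
grid = np.linspace(1e-4, 6.0, 20_000)
dm = grid[1] - grid[0]
E_dense = (marg_util(1.0 + grid) * theta_dist.pdf(grid)).sum() * dm

# The dense integration includes tail events the fixed nodes omit, so expected
# marginal utility comes out higher, implying slightly lower optimal consumption.
assert E_dense > E_fixed
```

The same logic explains why the transition-matrix consumption function sits a little below the fixed-node one, most visibly where marginal utility is most curved.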
@mnwhite Thank you so much. Those changes are brilliant! But if I understood you correctly, the method only works without an unemployment probability (e.g., no point mass of the pdf at 0/'IncUnemp'). Is there a way to circumvent that limitation?

In place of the point mass for unemployment:
1. Impose a strict liquidity constraint.
2. Use a "mixture of normals" in which income in the unemployment state is distributed normally at, say, half the level of permanent income and with a density of 0.05 or something like that.

- Chris Carroll
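A minimal sketch of option 2, with purely hypothetical parameter values: replace the point mass at 'IncUnemp' with a low-probability continuous component, so the income distribution has no atoms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for illustration only
UnempPrb = 0.05      # weight on the "unemployment" mixture component
TranShkStd = 0.2     # std of log income in the employed state
IncUnempMean = 0.5   # unemployment income: half of (normalized) permanent income
IncUnempStd = 0.05   # nontrivial dispersion, so there is no point mass

def draw_transitory_income(n):
    """Draw income from a two-component mixture with no atom at zero."""
    unemployed = rng.random(n) < UnempPrb
    employed = rng.lognormal(-0.5 * TranShkStd**2, TranShkStd, size=n)  # mean one
    jobless = rng.normal(IncUnempMean, IncUnempStd, size=n)
    return np.where(unemployed, jobless, employed)

draws = draw_transitory_income(100_000)
# Mixture mean is roughly 0.95 * 1.0 + 0.05 * 0.5 = 0.975
assert 0.96 < draws.mean() < 0.99
assert draws.min() > 0.0  # no mass at (or below) zero for these parameter values
```

In HARK this would be wired in through an alternate IncShkDstn constructor, as Matt describes below; the sketch only shows the distributional idea.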
Yes, what Chris suggests will absolutely work: just make "unemployment income" have a continuous distribution with non-trivial stdev. In the current (unreleased, but will be released soon) version of HARK, you just need to make an alternate version of the IncShkDstn constructor that takes additional parameters (which also need to be used to construct the big transition matrix).

It might still be best to specify UnempPrb = 1e-12, even if you put in a hard borrowing constraint. When transitory shocks are integrated as if they could go all the way down to zero, the natural borrowing constraint will be zero anyway. The "tiny zero income event" just reduces some potential headaches from adapting the solver for the new method.
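The point about the natural borrowing constraint can be made concrete with a tiny sketch (hypothetical function name; HARK's actual solver code differs). With end-of-period assets a, next period's market resources are m' = a*Rfree/PermGroFac + theta, so requiring m' >= 0 in the worst case pins down the lowest allowable a:

```python
def natural_borrowing_constraint(TranShkMin, Rfree=1.03, PermGroFac=1.01):
    """Lowest end-of-period assets consistent with worst-case income:
    a * Rfree / PermGroFac + TranShkMin >= 0 implies
    a >= -TranShkMin * PermGroFac / Rfree."""
    return -TranShkMin * PermGroFac / Rfree

# With strictly positive worst-case transitory income, some borrowing is feasible:
assert natural_borrowing_constraint(0.3) < 0.0

# With a (tiny-probability) zero-income event, the natural constraint is exactly 0,
# coinciding with an artificial constraint at zero -- hence the UnempPrb = 1e-12 trick:
assert natural_borrowing_constraint(0.0) == 0.0
```

This is why the tiny zero-income event and a hard constraint at zero produce the same minimum allowable bNrm.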
I am going to work on this. We liked it, and then it fell off the map.
Pull request overview

This PR adds a new example notebook demonstrating how to construct the end-of-period marginal value function for IndShockConsumerType using a transition-matrix-based integration method, and compares it to the standard expectation-based method.

Changes:
- Introduces IndShockConsumerType_IntegrationMethod_Example.ipynb, setting up a simple consumption-saving model with only transitory shocks.
- Implements both the traditional expectation-based end-of-period marginal value function and an alternative integration/transition-matrix approach.
- Compares the implied consumption functions from both methods via plots to highlight differences.
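As a sanity check on the transition-matrix integration step described in the overview, here is a minimal, hypothetical sketch (illustrative shapes and names, not the notebook's code). With rows indexed by b and columns by m, the expectation over m is a matrix-vector product in which the matrix acts on the m-indexed vector; this is also what the "post-multiplying, not pre-multiplying" fix and the different-grid-sizes debugging tip refer to:

```python
import numpy as np

rng = np.random.default_rng(1)

# Deliberately different grid sizes for b and m, so shape errors surface early
bCount, mCount = 5, 8

# Rows indexed by b, columns by m; each row is a conditional pmf P(m | b)
tranMatrix = rng.random((bCount, mCount))
tranMatrix /= tranMatrix.sum(axis=1, keepdims=True)

# Next-period marginal value evaluated on the m grid
vPnext = rng.random(mCount)

# Expectation over m for each b: the (b x m) matrix times the m-indexed vector
EndOfPrdvP = tranMatrix @ vPnext
assert EndOfPrdvP.shape == (bCount,)

# Each entry is a convex combination of vPnext, so it lies within its range
assert vPnext.min() <= EndOfPrdvP.min() and EndOfPrdvP.max() <= vPnext.max()
```

With equal grid sizes for b and m, transposition errors would multiply without raising an exception, which is why unequal sizes help with debugging.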
| "\n", | ||
| "def updateBM_TranMatrix(TranShkStd, bGrid, mGrid):\n", | ||
| " \"\"\"\n", | ||
| " Calculates the Probabillity of a transioty shock for the b times m matrix\n", |
In updateBM_TranMatrix, the docstring has spelling mistakes ("Probabillity" and "transioty"), which can be confusing in user-facing example documentation. Please correct these to "probability" and "transitory".
| " Calculates the Probabillity of a transioty shock for the b times m matrix\n", | |
| " Calculates the probability of a transitory shock for the b times m matrix\n", |
| " # ### getting the probability for each transitory shock with the size b - m (remember m = b * transitory shock)\n", | ||
| "\n", | ||
| " ### Integration 1: No unemployment probability\n", | ||
| " s = TranShkStd\n", | ||
| " mu = -0.5 * s**2\n", | ||
| " lognorm_dist = sp.stats.lognorm(s, scale=np.exp(mu))\n", | ||
| "\n", | ||
| " ### Create matrix\n", | ||
| " # Construct meshgrid of bNrmGrid_income and mNrmGrid_income\n", | ||
| " b, m = np.meshgrid(bGrid, mGrid, indexing=\"ij\")\n", | ||
| "\n", | ||
| " # Calculate differences between corresponding elements\n", | ||
| " probGrid = lognorm_dist.pdf(m - b)\n", |
Within updateBM_TranMatrix, the comment says "remember m = b * transitory shock", but the code computes probGrid = lognorm_dist.pdf(m - b), which corresponds to an additive rather than multiplicative transitory shock and is inconsistent with the stated relationship. This mismatch is likely a conceptual error in constructing the transition matrix and should be resolved by aligning the PDF argument with the intended mapping between m and b.
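One way to make the code consistent with the multiplicative relationship m = b * theta (a hypothetical fix, not something already in the notebook) is a change of variables: if theta has density f, then m = b * theta has conditional density f(m / b) / b, so a Jacobian factor 1/b must appear alongside the pdf:

```python
import numpy as np
from scipy import stats

TranShkStd = 0.2  # hypothetical value for illustration
s = TranShkStd
mu = -0.5 * s**2
lognorm_dist = stats.lognorm(s, scale=np.exp(mu))  # mean-one shock theta

bGrid = np.linspace(0.5, 2.0, 4)
mGrid = np.linspace(1e-3, 6.0, 4000)
b, m = np.meshgrid(bGrid, mGrid, indexing="ij")

# Multiplicative shock: m = b * theta, so p(m | b) = f_theta(m / b) / b
probGrid = lognorm_dist.pdf(m / b) / b

# Each row is now a proper conditional density in m: it integrates to ~1
dm = mGrid[1] - mGrid[0]
row_masses = probGrid.sum(axis=1) * dm
assert np.allclose(row_masses, 1.0, atol=1e-2)
```

The additive version `pdf(m - b)` would instead describe m = b + theta; whichever mapping is intended, the pdf argument and the comment should agree.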
| "\n", | ||
| " Parameters\n", | ||
| " ----------\n", | ||
| " shocks: [float]\n", | ||
| " Permanent and transitory income shock levels.\n", | ||
| " a_nrm: float\n", | ||
| " Normalized market assets this period\n", | ||
| "\n", | ||
| " Returns\n", | ||
| " -------\n", | ||
| " float\n", | ||
| " normalized market resources in the next period\n", |
The b_nrm_next docstring states that it returns "normalized market resources in the next period", but the implementation only computes next-period normalized bank balances (before adding the transitory shock), which is inconsistent with the description. Please update the docstring to describe bank balances (or "b") rather than full market resources, or adjust the implementation to match the documented behavior.
| "\n", | |
| " Parameters\n", | |
| " ----------\n", | |
| " shocks: [float]\n", | |
| " Permanent and transitory income shock levels.\n", | |
| " a_nrm: float\n", | |
| " Normalized market assets this period\n", | |
| "\n", | |
| " Returns\n", | |
| " -------\n", | |
| " float\n", | |
| " normalized market resources in the next period\n", | |
| "\n", | |
| " Parameters\n", | |
| " ----------\n", | |
| " shocks: [float]\n", | |
| " Permanent and transitory income shock levels.\n", | |
| " a_nrm: float\n", | |
| " Normalized market assets this period\n", | |
| "\n", | |
| " Returns\n", | |
| " -------\n", | |
| " float\n", | |
| " normalized bank balances in the next period\n", |
| "### Let ist start from minimum of aNrm\n", | ||
| "# NO, WRONG MINIMUM!\n", |
The comment # NO, WRONG MINIMUM! indicates that shifting mGrid and bGrid by mNrmMinNext and aNrmNow[0] is incorrect, but the code still applies these shifts and uses the resulting grids. Either fix the lower bounds used for mGrid/bGrid or remove/clarify the comment so that the implementation and commentary are consistent.
| "### Let ist start from minimum of aNrm\n", | |
| "# NO, WRONG MINIMUM!\n", | |
| "### Shift grids so that mGrid starts at mNrmMinNext and bGrid at aNrmNow[0]\n", |
Example notebook: constructing the end-of-period (marginal) value function using transition matrices instead of expectations.

This example notebook tests the idea of @mnwhite. So far, the two methods do NOT construct the same function. Could @mnwhite check for conceptual/coding errors?