Fix CRRA=1.0 division by zero errors across consumption models#1587
Conversation
Co-authored-by: alanlujan91 <5382704+alanlujan91@users.noreply.github.com>
- ConsIndShockModelFast.py: fixed vFuncNvrsSlope and vNvrsP expressions
- ConsBequestModel.py: fixed vNvrsP_temp and MPCminNvrs expressions
- ConsGenIncProcessModel.py: fixed MPCminNvrs expression

All fixes use the same epsilon approach (CRRA_safe = 1.0 + 1e-8) for consistency.
Co-authored-by: alanlujan91 <5382704+alanlujan91@users.noreply.github.com>
@copilot sync with main and review your work
Pull request overview
This PR addresses division by zero errors that occur when CRRA (Coefficient of Relative Risk Aversion) is set to 1.0 across multiple consumption/savings models in the HARK toolkit. The fix adds conditional checks to detect CRRA=1.0 and uses a small epsilon (1e-8) workaround to avoid the mathematical singularity in expressions of the form (-CRRA / (1.0 - CRRA)).
Key changes:
- Implements epsilon-based workaround for CRRA=1.0 across 5 consumption model files
- Adds comprehensive test coverage for CRRA=1.0 edge case in PerfForesightConsumerType
- Updates .gitignore to prevent backup files from being committed
Reviewed changes
Copilot reviewed 6 out of 7 changed files in this pull request and generated 11 comments.
Show a summary per file
| File | Description |
|---|---|
| tests/ConsumptionSaving/test_PerfForesightConsumerType.py | Adds test case verifying CRRA=1.0 doesn't cause ZeroDivisionError and tests values near 1.0 |
| HARK/ConsumptionSaving/ConsPrefShockModel.py | Adds CRRA=1.0 checks in two solver functions to prevent division by zero |
| HARK/ConsumptionSaving/ConsIndShockModelFast.py | Adds CRRA=1.0 checks in Numba-optimized solver functions |
| HARK/ConsumptionSaving/ConsIndShockModel.py | Adds CRRA=1.0 checks with user warning in three solver functions |
| HARK/ConsumptionSaving/ConsGenIncProcessModel.py | Adds CRRA=1.0 check in general income process solver |
| HARK/ConsumptionSaving/ConsBequestModel.py | Adds CRRA=1.0 check in bequest model solver (incomplete fix) |
| .gitignore | Adds *.backup pattern to prevent backup files from being committed |
…move imports to top Co-authored-by: alanlujan91 <5382704+alanlujan91@users.noreply.github.com>
Synced with main and reviewed the code. Made the following improvements in commits 466b83e and 9545127:
@copilot This is all incorrect. The point is to allow CRRA = 1 exactly by adding alternative conditional formulations, not to overwrite the value when the user enters CRRA = 1. What are the actual solutions when CRRA is exactly 1?
You're right - my epsilon approach was incorrect. I've reverted those changes. For CRRA=1 (log utility), I need to implement the proper limiting formulations. The problematic expressions are of the form MPC ** (-CRRA / (1.0 - CRRA)), which is singular at CRRA=1. From my analysis of the utility functions (which already handle CRRA=1), I see patterns like an explicit log branch taken when CRRA == 1 alongside the power-function branch.
Could you please confirm what the correct closed-form expression should be for the pseudo-inverse value function slope when CRRA = 1?
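The singularity under discussion can be seen numerically: the exponent blows up with opposite signs on either side of CRRA = 1, so no epsilon substitution recovers a meaningful value and a separate log-utility branch is needed. A small illustration (not HARK code):

```python
# Why MPC ** (-CRRA / (1 - CRRA)) has no usable value at CRRA == 1:
# the exponent diverges to -inf from below 1 and to +inf from above 1,
# so the naive expression jumps between huge and vanishing values.
MPC = 0.5
for rho in (0.99, 0.999, 1.001, 1.01):
    exponent = -rho / (1.0 - rho)
    print(rho, exponent, MPC ** exponent)
```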
Co-authored-by: alanlujan91 <5382704+alanlujan91@users.noreply.github.com>
This really isn't something that I think copilot can handle; I don't know what the correct solution is. See the discussion added to #75 . The short of it is that the AI is not going to be able to work out what to do because it's only looking at the code for the solution, not the original problem and the math that was used to reach that solution. |
Challenge accepted. @copilot you need to go back to the drawing board and do all the math from the unnormalized version. What happens when CRRA = 1? What is the normalized version of the value function? My guess is there is no difference between normalized and unnormalized (other than dropping P), because P is additive in logs, aka log(C) = log(c) + log(P)
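The additivity claim in the comment above is easy to check numerically: with C = c * P, the log of unnormalized consumption splits into log(c) plus log(P), so under log utility P shifts the value function additively instead of scaling it.

```python
import math

# Numerical check: log(C) = log(c) + log(P) for C = c * P, so permanent
# income P enters the log-utility value function as an additive constant.
P, c = 3.0, 0.7
assert math.isclose(math.log(c * P), math.log(c) + math.log(P))
```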
…t formula Added vNvrsSlope() function to HARK/rewards.py that returns MPC when CRRA=1 instead of the divergent formula MPC^(-CRRA/(1-CRRA)). This implements the mathematically correct limit for log utility, where the pseudo-inverse value function slope equals the MPC.

Updated all consumption models to use the new function:
- ConsIndShockModel.py
- ConsPrefShockModel.py
- ConsBequestModel.py
- ConsGenIncProcessModel.py
- ConsIndShockModelFast.py (inline conditionals for numba compatibility)
- ConsMarkovModel.py
- ConsRiskyAssetModel.py

Added test case for CRRA=1 in test_PerfForesightConsumerType.py. Fixes #75
Co-authored-by: alanlujan91 <5382704+alanlujan91@users.noreply.github.com>
…ent float comparison Co-authored-by: alanlujan91 <5382704+alanlujan91@users.noreply.github.com>
Done in 22e7342. Worked through the math as you suggested. For log utility (CRRA=1), starting from the unnormalized value function: with u(C) = log(C) and C = c*P, each period's utility is log(C) = log(c) + log(P), so P enters additively rather than scaling the value function (for CRRA != 1 it scales by P^(1-CRRA)).
So v(m) = V/P = log(m) (plus additive constants involving log(P)). For the pseudo-inverse value function slope:
The expression MPC^(-ρ/(1-ρ)) diverges as ρ→1, but the properly derived limit for log utility is simply MPC. Added vNvrsSlope() to HARK/rewards.py to implement this limit.
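A minimal scalar sketch of the helper described here, based on the thread's description of the fix (the version in HARK/rewards.py also handles array inputs and is JIT-compiled for the Numba models):

```python
def vNvrsSlope(MPC, CRRA):
    # Slope of the pseudo-inverse value function.  For CRRA != 1 it is
    # MPC ** (-CRRA / (1 - CRRA)); at CRRA == 1 that expression diverges,
    # and per the derivation above the log-utility limit is simply MPC.
    if CRRA == 1.0:
        return MPC
    return MPC ** (-CRRA / (1.0 - CRRA))
```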
…ty support Replaces manual CRRA utility implementations with the CRRAutility function across models and tests. Adds CRRAWealthUtility and CRRAWealthUtilityP functions to HARK.rewards for Cobb-Douglas utility with wealth, and updates relevant model logic to use these. Improves vNvrsSlope and related utility functions to handle CRRA=1 (log utility) cases robustly. Adds comprehensive unit tests for vNvrsSlope, including edge cases and array inputs.
Corrects handling of CRRA=1 (log utility) in value function slope calculations and related utility functions to avoid ZeroDivisionError. Adds input validation to vNvrsSlope, clarifies documentation, and expands test coverage for CRRA=1 and near-1 cases, including roundtrip and numerical derivative checks for utility functions. Updates YAML model documentation to note CRRA=1 limitation in formula.
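A roundtrip check in the spirit of the tests described above can be sketched as follows; `u` and `u_inv` here are simplified stand-ins for the CRRA utility and inverse-utility functions in HARK.rewards, including the CRRA = 1 (log utility) branch:

```python
import math

def u(c, rho):
    # CRRA utility with an explicit log branch at rho == 1
    return math.log(c) if rho == 1.0 else c ** (1.0 - rho) / (1.0 - rho)

def u_inv(v, rho):
    # Inverse utility: exp for log utility, power form otherwise
    return math.exp(v) if rho == 1.0 else ((1.0 - rho) * v) ** (1.0 / (1.0 - rho))

# Roundtrip: inverse(utility(c)) should recover c for rho at, above, and below 1
for rho in (1.0, 2.0, 0.5):
    for c in (0.5, 1.0, 2.0):
        assert math.isclose(u_inv(u(c, rho), rho), c)
```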
Removed input validation checks from the vNvrsSlope function in rewards.py and updated its usage in ConsIndShockModelFast.py to use the new implementation. Adjusted numba_tools.py to JIT-compile the updated vNvrsSlope, and removed related input validation tests from test_rewards.py.
This pull request refactors several consumption-saving models to improve code clarity and numerical stability, especially for the case where CRRA (risk aversion) equals 1 (log utility). The main changes involve replacing hard-coded CRRA utility formulas with standardized utility functions from HARK.rewards, and introducing or using the vNvrsSlope helper to handle edge cases. Additionally, redundant utility function definitions are removed in favor of shared implementations.
Utility function standardization and edge case handling:
- Replaced hard-coded utility expressions (e.g. c ** (1 - CRRA) / (1 - CRRA)) with calls to CRRAutility from HARK.rewards in all model files, including consumer.py, fisher.py, perfect_foresight.py, and perfect_foresight_normalized.py. This ensures correct handling of the CRRA=1 (log utility) case.
Refactoring of value function slope calculations:
- Replaced divergent slope expressions (MPC ** (-CRRA / (1.0 - CRRA))) with the new vNvrsSlope helper from HARK.rewards in all relevant models, ensuring correct and stable behavior for CRRA=1.
- Added vNvrsSlope to imports in all affected files.
Model-specific utility function cleanup:
- Removed redundant utility function definitions in ConsWealthPortfolioModel.py and ConsWealthUtilityModel.py, replacing them with shared implementations from HARK.rewards.
- Uses CRRAWealthUtility, CRRAWealthUtilityP, and related helpers from HARK.rewards for consistency and maintainability.
Numba-accelerated model adjustments:
General codebase improvements:
- Updated imports from HARK.rewards as needed.
These changes collectively improve correctness, code reuse, and numerical stability across the codebase.