# [WIP]: normalize_shocks and normalize_levels allow user to impose true propositions on sims #1094
Base: `main`

Changes from all commits: 963f0c0, 362d3de, 387e57b, 6f35645, c2fe146, 0b11590, b4c668a, 41f944d, 8f0057e
```diff
@@ -1794,6 +1794,20 @@ def transition(self):
         # Calculate new states: normalized market resources and permanent income level
         pLvlNow = pLvlPrev*self.shocks['PermShk']  # Updated permanent income level
+        # Asymptotically it can't hurt to impose true restrictions
+        # (at least if the GICRaw holds)
+        pLvlNowMean = 1.0
+        if not hasattr(self, "normalize_shocks"):
+            self.normalize_shocks = False
+
+        if not hasattr(self, "normalize_levels"):
+            self.normalize_levels = False
```
**Comment on lines +1802 to +1805:**

```python
if not hasattr(self, "normalize_levels"):
    self.normalize_levels = False
```
**Copilot AI** · Jan 28, 2026

Using `== True` is redundant in Python. The idiomatic way to check a boolean is `if self.normalize_levels:` rather than `if self.normalize_levels == True:`.

```diff
-if self.normalize_levels == True:
+if self.normalize_levels:
```
**Copilot AI** · Jan 28, 2026

Using `hasattr` checks in the `transition` method is problematic: if the attributes don't exist initially, they are created dynamically with a value of `False` during simulation. This is fragile and unconventional. The parameters should instead be initialized in the agent's constructor or retrieved from the parameter dictionary; alternatively, use `getattr` with a default value, such as `self.normalize_shocks = getattr(self, 'normalize_shocks', False)`.

```diff
-if not hasattr(self, "normalize_shocks"):
-    self.normalize_shocks = False
-if not hasattr(self, "normalize_levels"):
-    self.normalize_levels = False
-if self.normalize_levels == True:
+# Use getattr with a default instead of dynamically creating attributes here
+if getattr(self, "normalize_levels", False):
```
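The difference the reviewer describes can be seen in a runnable toy (the `Agent` class here is a minimal stand-in, not the actual HARK consumer type): `getattr` reads the flag with a default instead of mutating the instance as a side effect of checking it.

```python
class Agent:
    """Minimal stand-in for the consumer type; not the real HARK class."""

    def transition(self):
        # getattr supplies a default without creating the attribute,
        # so merely running transition() never mutates the agent
        return getattr(self, "normalize_levels", False)


agent = Agent()
print(agent.transition())                   # False: attribute absent, default used
print(hasattr(agent, "normalize_levels"))   # False: no attribute was created

agent.normalize_levels = True
print(agent.transition())                   # True: an explicit setting is respected
```

By contrast, the `hasattr`-then-set pattern in the diff would leave a `normalize_levels = False` attribute behind after the first call.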
**Copilot AI** · Jan 28, 2026

Potential division by zero: if all agents have `pLvlNow` equal to zero (or very close to zero), `np.mean(pLvlNow)` could be zero or near-zero, leading to division by zero or numerical instability. This should be guarded against, for example by only normalizing when the mean is above a threshold.

```diff
 pLvlNowMean = np.mean(pLvlNow)
+# Guard against zero, near-zero, or non-finite means to avoid division by zero
+if (not np.isfinite(pLvlNowMean)) or (abs(pLvlNowMean) < 1e-8):
+    pLvlNowMean = 1.0
```
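A minimal sketch of the guard the reviewer describes (the `safe_normalize` helper and the `1e-8` tolerance are illustrative, not part of the PR):

```python
import numpy as np


def safe_normalize(pLvlNow, tol=1e-8):
    """Divide by the cross-sectional mean only when it is finite and
    bounded away from zero; otherwise return the levels unchanged."""
    mean = np.mean(pLvlNow)
    if not np.isfinite(mean) or abs(mean) < tol:
        return pLvlNow
    return pLvlNow / mean


levels = np.array([0.5, 1.0, 1.5])
print(safe_normalize(levels).mean())   # 1.0: normalization imposes a unit mean

tiny = np.array([1e-12, -1e-12])       # mean is ~0; dividing would blow up
print(np.array_equal(safe_normalize(tiny), tiny))  # True: left untouched
```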
**Copilot AI** · Jan 28, 2026

The `normalize_levels` parameter is used in `PerfForesightConsumerType.transition()` but is only added to `init_idiosyncratic_shocks`, not to `init_perfect_foresight`. This means `PerfForesightConsumerType` agents won't have the parameter by default, and the `hasattr` check will always set it to `False`. If `normalize_levels` is intended to work for `PerfForesightConsumerType`, it should also be added to `init_perfect_foresight` with a default value of `False`.
**Copilot AI** · Jan 28, 2026

The new `normalize_shocks` and `normalize_levels` parameters lack test coverage. Given that this repository has comprehensive test suites for `ConsIndShockModel` (see `test_IndShockConsumerType.py`), these features should be tested to verify: (1) that normalization works correctly when enabled, (2) that simulations produce expected results with and without normalization, and (3) that edge cases (like near-zero means) are handled properly.
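As a sketch of point (1), a self-contained check along these lines would pin down the intended behavior. The `normalize_shocks` helper below is a hypothetical stand-in for what the PR does to sampled shocks, not HARK code:

```python
import numpy as np


def normalize_shocks(draws):
    """Hypothetical stand-in: rescale sampled shocks so their realized
    cross-sectional mean is exactly 1, as normalize_shocks is meant to."""
    return draws / np.mean(draws)


def test_normalization_imposes_unit_mean():
    rng = np.random.default_rng(0)
    # Lognormal draws have mean 1 only in expectation; any finite sample drifts
    draws = rng.lognormal(mean=0.0, sigma=0.1, size=10_000)
    normalized = normalize_shocks(draws)
    # After normalization the realized mean is 1 up to floating-point error
    assert abs(np.mean(normalized) - 1.0) < 1e-10


test_normalization_imposes_unit_mean()
print("ok")
```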
**Copilot AI** · Jan 28, 2026

The new parameters `normalize_shocks` and `normalize_levels` are added to the `init_idiosyncratic_shocks` dictionary but are not documented in the class docstring. The `IndShockConsumerType` class docstring (lines 2061-2072) should be updated to document these parameters, explaining what they do and when they should be used.
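To make the fix concrete, a docstring entry along these lines could close the gap. The wording is a hypothetical sketch, not the maintainers' text, and the class body here is a placeholder for the real `IndShockConsumerType`:

```python
class IndShockConsumerType:
    """A consumer with idiosyncratic shocks to permanent and transitory
    income.  (Placeholder: the real class lives in
    HARK.ConsumptionSaving.ConsIndShockModel.)

    Parameters
    ----------
    normalize_shocks : bool
        If True, rescale each period's sampled shocks so that their
        realized cross-sectional means match the theoretical means
        exactly.  Hypothetical wording; defaults to False.
    normalize_levels : bool
        If True, divide simulated permanent income levels by their
        cross-sectional mean each period.  Hypothetical wording;
        defaults to False.
    """
```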
**Copilot AI** · Jan 28, 2026

Using `== True` is redundant in Python. The idiomatic way to check a boolean is `if self.normalize_shocks:` rather than `if self.normalize_shocks == True:`.

```diff
-if self.normalize_shocks == True:
+if self.normalize_shocks:
```
**Copilot AI** · Jan 28, 2026

The normalization logic may be mathematically incorrect. When `normalize_shocks` is `True`, the permanent shock is computed as `IncShks[0, :] * PermGroFacNow / PermShkMeanNow`. However, `PermShkMeanNow` is calculated from the raw shock values `IncShks[0]`, not from the values after multiplying by `PermGroFacNow`. This means the normalization forces the mean of the *raw* shocks to be 1, so the actual permanent shocks (which include `PermGroFacNow`) will have a mean of `PermGroFacNow`, not 1. If the intent is for the final permanent shocks to have unit mean, the calculation should be `PermShkMeanNow = np.mean(IncShks[0] * PermGroFacNow)`.
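The arithmetic behind this comment is easy to check numerically. Variable names follow the comment; the lognormal draws are illustrative, and because `PermGroFacNow` is a scalar, dividing by the mean of the raw draws leaves the final shocks with mean `PermGroFacNow`, while dividing by the mean of the scaled draws gives unit mean:

```python
import numpy as np

rng = np.random.default_rng(1)
IncShks = rng.lognormal(mean=-0.005, sigma=0.1, size=100_000)  # raw draws
PermGroFacNow = 1.02

# As in the PR: mean taken over the raw draws only
PermShkMeanNow = np.mean(IncShks)
PermShk = IncShks * PermGroFacNow / PermShkMeanNow
print(np.mean(PermShk))       # ~1.02 (= PermGroFacNow), not 1.0

# Dividing by the mean of the *scaled* draws yields unit-mean shocks
PermShk_alt = IncShks * PermGroFacNow / np.mean(IncShks * PermGroFacNow)
print(np.mean(PermShk_alt))   # ~1.0
```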
**Copilot AI** · Jan 28, 2026

Potential division by zero: if the sampled shocks have a mean of exactly zero (or very close to zero), dividing by `PermShkMeanNow` or `TranShkMeanNow` will cause division by zero or numerical instability. This should be guarded against, for example by only normalizing when the mean is above a threshold.
**Copilot AI** · Jan 28, 2026

Using `== True` is redundant in Python. The idiomatic way to check a boolean is `if self.normalize_shocks:` rather than `if self.normalize_shocks == True:`.
**Copilot AI** · Jan 28, 2026

The normalization logic may be mathematically incorrect. When `normalize_shocks` is `True`, the permanent shock is computed as `IncShkDstnNow.X[0][EventDraws] * PermGroFacNow / PermShkMeanNow`. However, `PermShkMeanNow` is calculated from the raw draws `IncShkDstnNow.X[0][EventDraws]`, not from the values after multiplying by `PermGroFacNow`. This means the normalization forces the mean of the raw draws to be 1, so the actual permanent shocks (which include `PermGroFacNow`) will have a mean of `PermGroFacNow`, not 1. If the intent is for the final permanent shocks to have unit mean, the calculation should be `PermShkMeanNow = np.mean(IncShkDstnNow.X[0][EventDraws] * PermGroFacNow)`.
**Copilot AI** · Jan 28, 2026

Potential division by zero: if the sampled shocks have a mean of exactly zero (or very close to zero), dividing by `PermShkMeanNow` or `TranShkMeanNow` will cause division by zero or numerical instability. Guard with a threshold check before normalizing.
The `normalize_shocks` attribute is checked and set in the `transition()` method (lines 1800-1801) but is never used anywhere else in that method, so this appears to be dead code. The `normalize_shocks` parameter is only relevant in the `get_shocks()` method of `IndShockConsumerType`, not in the `transition()` method of `PerfForesightConsumerType`. Consider removing these lines.