Conversation
needs updating; also it's very slow
#1242 is about the EconForge interpolator as an additional backend to multivariate interpolation?
llorracc
left a comment
Small comment: My pref is to be more systematic about naming, and have the general followed by the specific. Like, it should be interp_bilinear, interp_decay, interp_linear. Also, is LinearFast an interpolator? If so it should be interp_linear_fast
Codecov Report

Patch coverage:

Additional details and impacted files

@@ Coverage Diff @@
## master #1203 +/- ##
==========================================
- Coverage 72.55% 72.01% -0.54%
==========================================
Files 78 85 +7
Lines 13009 13391 +382
==========================================
+ Hits 9439 9644 +205
- Misses 3570 3747 +177
☔ View full report in Codecov by Sentry.
Pull request overview
This pull request performs a major refactoring of the interpolation functionality in HARK, reorganizing code from a single module (HARK.econforgeinterp) into a structured package (HARK.interpolation) with multiple submodules. The refactoring introduces new interpolation backends using scipy and scikit-learn, alongside the existing econforge and HARK-native implementations.
Changes:
- Reorganizes interpolation code into a package structure with submodules: _econforge, _hark, _multi, _scipy, and _sklearn
- Adds new dependencies: scikit-learn, recommonmark>=0.7, and a version constraint quantecon>=0.6
- Updates import paths in examples and documentation from HARK.econforgeinterp to HARK.interpolation
- Creates a comprehensive test suite for each interpolation backend
Reviewed changes
Copilot reviewed 15 out of 17 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| requirements/base.txt | Adds scikit-learn, recommonmark, and version constraint for quantecon |
| environment.yml | New conda environment file with package dependencies |
| examples/Interpolation/README.md | New README linking to external examples |
| examples/Interpolation/DecayInterp.py | Updates import from econforgeinterp to interpolation |
| examples/Interpolation/DecayInterp.ipynb | Updates import path in notebook |
| HARK/interpolation/__init__.py | New package init with wildcard imports from submodules |
| HARK/interpolation/_econforge.py | Refactored econforge interpolation classes |
| HARK/interpolation/_hark.py | Refactored HARK-native interpolation classes (5192 lines) |
| HARK/interpolation/_multi.py | New multivariate interpolation implementations |
| HARK/interpolation/_scipy.py | New scipy-based interpolation wrappers |
| HARK/interpolation/_sklearn.py | New scikit-learn based interpolation classes |
| HARK/interpolation/tests/test_*.py | Comprehensive test suite for all backends |
| Documentation/reference/tools/econforgeinterp.rst | Updates documentation module path |
  pandas>=1.5
- quantecon
+ quantecon>=0.6
+ recommonmark>=0.7
The dependency recommonmark>=0.7 appears to be a documentation-related package. It should likely be in a separate documentation requirements file rather than in base.txt, as it's not needed for runtime functionality.
Suggested change:
- recommonmark>=0.7
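A sketch of the split this comment suggests, assuming the project keeps a separate documentation requirements file (the filename and the exact pin set are illustrative, not taken from this PR):

```
# requirements/base.txt: runtime dependencies only
numpy>=1.23
pandas>=1.5
quantecon>=0.6
scikit-learn

# requirements/doc.txt: documentation-build dependencies
recommonmark>=0.7
```

Sphinx-based doc builds would then install both files, while library users only need base.txt.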
- quantecon
+ quantecon>=0.6
+ recommonmark>=0.7
+ scikit-learn
The scikit-learn dependency is added without a version constraint. This could lead to compatibility issues. Consider specifying a minimum version (e.g., scikit-learn>=1.0).
Suggested change:
- scikit-learn
+ scikit-learn>=1.0
  - numba>=0.56
  - numpy>=1.23
  - pandas>=1.5
  - quantecon
In environment.yml, quantecon does not have a version constraint while base.txt specifies quantecon>=0.6. For consistency and reproducibility, consider adding the same version constraint here.
Suggested change:
-   - quantecon
+   - quantecon>=0.6
  - pandas>=1.5
  - quantecon
  - scipy>=1.10
  - scikit-learn
In environment.yml, scikit-learn is listed without a version constraint, but it's a critical dependency for the sklearn interpolation module. Consider adding a minimum version constraint for consistency with other dependencies.
Suggested change:
-   - scikit-learn
+   - scikit-learn>=1.2
  ---------------

- .. automodule:: HARK.econforgeinterp
+ .. automodule:: HARK.interpolation.econforgeinterp
The documentation reference path is updated from HARK.econforgeinterp to HARK.interpolation.econforgeinterp, but based on the new module structure, it should be HARK.interpolation._econforge since the submodules use underscore prefixes. Verify this path is correct or if the automodule directive should reference the public API through HARK.interpolation.
Suggested change:
- .. automodule:: HARK.interpolation.econforgeinterp
+ .. automodule:: HARK.interpolation._econforge
+ from HARK.interpolation._econforge import *
+ from HARK.interpolation._hark import *
+ from HARK.interpolation._multi import *
+ from HARK.interpolation._scipy import *
+ from HARK.interpolation._sklearn import *
Using wildcard imports (from module import *) in __init__.py can lead to namespace pollution and make it unclear which symbols are being exported. Consider either: 1) explicitly listing the imports, or 2) defining __all__ in each submodule to control what gets exported. This is especially important when combining multiple modules that might have overlapping names.
Suggested change (replace the wildcard imports with):

import importlib

_submodules = [
    "HARK.interpolation._econforge",
    "HARK.interpolation._hark",
    "HARK.interpolation._multi",
    "HARK.interpolation._scipy",
    "HARK.interpolation._sklearn",
]

# Dynamically re-export public names from the submodules.
# This mimics "from <module> import *" behavior:
# - If a submodule defines __all__, use that.
# - Otherwise, export all attributes that do not start with "_".
for _mod_name in _submodules:
    _mod = importlib.import_module(_mod_name)
    _names = getattr(_mod, "__all__", None)
    if _names is None:
        _names = [n for n in dir(_mod) if not n.startswith("_")]
    for _name in _names:
        globals()[_name] = getattr(_mod, _name)

# Optionally, define __all__ for this package to list exported names.
__all__ = [n for n in globals() if not n.startswith("_")]
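As a minimal illustration of how `__all__` interacts with this kind of re-export loop, here is a toy sketch; the module and class names are invented stand-ins, not HARK's real submodules:

```python
# Toy sketch (invented names, not HARK's): how __all__ in a submodule
# controls what a dynamic "import *"-style loop re-exports.
import types

backend = types.ModuleType("toy_backend")
backend.LinearInterp = type("LinearInterp", (), {})  # public class
backend._helper = lambda: None                       # private helper
backend.__all__ = ["LinearInterp"]                   # explicit export list

# Same logic as the suggested __init__.py loop, applied to one module:
names = getattr(backend, "__all__", None)
if names is None:
    names = [n for n in dir(backend) if not n.startswith("_")]
exported = {name: getattr(backend, name) for name in names}

print(sorted(exported))  # ['LinearInterp'] -- _helper is not exported
```

Without `__all__`, the fallback branch would export every name not starting with an underscore, which is why defining `__all__` per submodule gives the tightest control.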
  else:
      raise AttributeError(f"Feature {feature} not recognized.")
The logic at lines 108-120 has a problematic flow. When feature is None or not a string (line 108), the code jumps to the else block at line 119 which raises an AttributeError. However, when feature is None, this error is misleading since None is a valid case according to the docstring. The else block at line 119 should be removed, and only the inner else at line 117 should raise the error for unrecognized string features.
Suggested change:
- else:
-     raise AttributeError(f"Feature {feature} not recognized.")
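A hedged sketch of the control flow this comment describes; the function name and the recognized feature are illustrative, not HARK's actual code. The point is that `feature=None` passes through untouched and only an unrecognized string raises:

```python
# Illustrative sketch, not HARK's actual code: feature=None is a valid,
# documented case, so only an unrecognized *string* should raise.
def apply_feature(value, feature=None):
    if feature is None:
        return value                      # valid case: no transformation
    if isinstance(feature, str):
        if feature == "square":           # hypothetical recognized feature
            return value ** 2
        # inner else: only unrecognized strings reach this error
        raise AttributeError(f"Feature {feature} not recognized.")
    raise TypeError("feature must be None or a string")

print(apply_feature(3))             # 3
print(apply_feature(3, "square"))   # 9
```

With this structure, the misleading outer `else` is gone: `None` never reaches the error path, and a non-string argument gets a `TypeError` that names the real problem.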
  numpy>=1.23
  pandas>=1.5
- quantecon
+ quantecon>=0.6
The version constraint quantecon>=0.6 may be too loose. Consider specifying a more precise version range (e.g., quantecon>=0.6,<1.0) to avoid potential breaking changes in future major versions.
Suggested change:
- quantecon>=0.6
+ quantecon>=0.6,<1.0
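To illustrate how such a bounded specifier behaves, a small sketch using the third-party `packaging` library (assuming it is available, as it is in most Python environments):

```python
# Sketch: checking candidate versions against the suggested bounded
# constraint. Requires the "packaging" library (pip install packaging).
from packaging.specifiers import SpecifierSet

spec = SpecifierSet(">=0.6,<1.0")

print("0.6" in spec)    # True: meets the lower bound
print("0.7.1" in spec)  # True: within the range
print("1.0.0" in spec)  # False: excluded by the upper bound
```

The upper bound trades forward compatibility for reproducibility: a future quantecon 1.x with breaking changes would be rejected at install time instead of breaking at runtime.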
  self.pipeline = pipeline

  X_train = np.reshape(self.grids, (self.ndim, -1))
  y_train = np.mgrid[[slice(0, dim) for dim in self.shape]]
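To see what this training-data construction produces, a small standalone sketch; `grids` and `shape` are invented stand-ins for `self.grids` and `self.shape`, with made-up grid values:

```python
import numpy as np

shape = (2, 3)        # stand-in for self.shape: a 2x3 grid, so ndim == 2
ndim = len(shape)

# Stand-in for self.grids: coordinate arrays stacked to (ndim, 2, 3).
grids = np.stack(np.meshgrid(
    np.linspace(0.0, 1.0, shape[0]),
    np.linspace(0.0, 1.0, shape[1]),
    indexing="ij",
))

# Flatten the coordinates to (ndim, n_points) feature rows...
X_train = np.reshape(grids, (ndim, -1))
# ...and build matching integer index coordinates along each axis.
y_train = np.mgrid[[slice(0, dim) for dim in shape]]

print(X_train.shape)  # (2, 6)
print(y_train.shape)  # (2, 2, 3)
```

So each column of `X_train` is one grid point's coordinates, and `y_train` holds the corresponding integer grid indices that the fitted pipeline maps those coordinates onto.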