
model comparison SDG #1573 (Open)

alanlujan91 wants to merge 3 commits into econ-ark:main from alanlujan91:model_comp

Conversation

@alanlujan91 (Member) commented Jun 26, 2025

This pull request introduces a comprehensive model comparison infrastructure for the HARK library, enabling systematic evaluation of solution methods for heterogeneous agent models. The changes include the addition of a README.md to document the framework, new modules for metrics, parameter translation, and solution adapters, as well as the implementation of an adapter for the Aiyagari model. These updates enhance the usability and extensibility of the library for computational economics research.

Documentation and Framework Overview:

  • HARK/comparison/README.md: Added detailed documentation outlining the purpose, components, and usage of the model comparison infrastructure, including examples for comparing solutions and loading external solutions.

Core Framework Components:

Solution Method Adapters:


Note

Adds a unified HARK model comparison infrastructure with solution adapters, metrics, parameter translation, baseline save/load/compare, block-model definitions, examples, and tests.

  • Core Framework:
    • ModelComparison in HARK/comparison/base.py orchestrates solving, simulation, metrics, and reporting.
    • EconomicMetrics in HARK/comparison/metrics.py provides Euler/Bellman errors, Den Haan–Marcet R², and wealth distribution stats.
  • Baseline Workflows:
    • New methods in ModelComparison: save_baseline_solution(), load_baseline_solution(), compare_against_baseline(), plus listing/clearing.
  • Adapters (HARK/comparison/adapters/):
    • Base SolutionAdapter and concrete adapters: HARKAdapter, SSJAdapter, MaliarAdapter, ExternalAdapter, AiyagariAdapter.
    • BlockAdapter enables block-model integration and Monte Carlo simulation.
  • Block Models (HARK/comparison/models/):
    • Krusell–Smith and Aiyagari block-based definitions with calibrations.
  • Parameter Translation:
    • ParameterTranslator maps unified primitives to method-specific params (HARK/SSJ/Maliar) and back.
  • Examples & Reports (HARK/comparison/examples/):
    • Ready-to-run scripts for KS/Aiyagari and baseline workflows; optional markdown report generation.
  • Tests (HARK/comparison/tests/):
    • Coverage for roadmap implementation, baselines, adapters, metrics, and comparison flows.
  • Docs:
    • HARK/comparison/README.md and SDG_ROADMAP_IMPLEMENTATION.md describe architecture, usage, and roadmap status.
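The Den Haan–Marcet accuracy check mentioned in the metrics bullet measures how well a forecasting rule for aggregate capital tracks the simulated series. A minimal sketch of the R² form (illustrative only; the function name and interface are assumptions, not the PR's implementation):

```python
import numpy as np

def den_haan_marcet_r2(realized, forecast):
    """R-squared of a forecasting rule against realized aggregates.

    Values near 1 indicate the (e.g. log-linear) law of motion tracks
    the simulated capital series closely.
    """
    realized = np.asarray(realized, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    ss_res = np.sum((realized - forecast) ** 2)
    ss_tot = np.sum((realized - realized.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# A near-perfect forecast of a simulated capital path yields R^2 near 1
k = np.linspace(10.0, 12.0, 50)
noisy_forecast = k + 0.01 * np.sin(np.arange(50))
r2 = den_haan_marcet_r2(k, noisy_forecast)
```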

Written by Cursor Bugbot for commit 89de9ef.

@alanlujan91 alanlujan91 requested a review from Copilot June 26, 2025 14:49
@alanlujan91 (Member, Author) commented:

bugbot run

Copilot AI (Contributor) left a comment

Pull Request Overview

This PR adds a unified model comparison framework to HARK, including documentation, core classes, parameter translation, metrics, adapters, tests, and examples.

  • Introduces a README.md with usage instructions and architecture overview.
  • Adds ModelComparison, EconomicMetrics, and ParameterTranslator core modules.
  • Implements adapters for HARK, SSJ, Maliar, External, and Aiyagari solution methods, plus extensive tests and examples.

Reviewed Changes

Copilot reviewed 16 out of 20 changed files in this pull request and generated 4 comments.

Summary per file:

  • HARK/comparison/README.md: Comprehensive documentation of the comparison infrastructure
  • HARK/comparison/base.py: ModelComparison class orchestrating workflows
  • HARK/comparison/parameter_translation.py: ParameterTranslator for unified-to-method-specific mappings
  • HARK/comparison/metrics.py: EconomicMetrics for error and distribution statistics
  • HARK/comparison/adapters/: Adapters for each solution method (HARK, SSJ, Maliar, External, Aiyagari)
  • HARK/comparison/tests/test_comparison.py: Tests covering comparison workflow, translators, adapters
  • HARK/comparison/examples/: Example scripts demonstrating Krusell-Smith and Aiyagari usage
Comments suppressed due to low confidence (1)

HARK/comparison/tests/test_comparison.py:190

  • [nitpick] Consider adding a TestParameterTranslator case for the 'aiyagari/HARK' method to verify that method-specific parameters (e.g., TranShkCount, PermShkCount, aXtraCount) are correctly translated and defaulted.
    def test_maliar_translation(self):

```python
    AiyagariAdapter,
)

adapter_map = {
```

Copilot AI Jun 26, 2025


The adapter factory uses method.split('/')[-1] to set method_type, causing adapters for methods like 'aiyagari/HARK' to have method_type 'HARK' instead of 'aiyagari'. Consider preserving the full identifier or mapping prefixes explicitly so each adapter reports the correct method_type.
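One way to implement the reviewer's suggestion is to keep the model-family prefix explicitly rather than taking the last path component. A generic sketch (the function name is hypothetical, not the PR's code):

```python
def resolve_method_type(method: str) -> str:
    """Map an identifier like 'aiyagari/HARK' to its model-family prefix.

    Splitting on '/' and keeping the *first* component preserves the
    model family ('aiyagari'); the second component (the solver
    backend, e.g. 'HARK') can be stored separately if needed.
    """
    parts = method.split("/")
    return parts[0] if len(parts) > 1 else method

# 'aiyagari/HARK' reports 'aiyagari', not 'HARK'
method_type = resolve_method_type("aiyagari/HARK")
```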

```python
    AiyagariAdapter,
)

adapter_map = {
```


Copilot AI Jun 26, 2025


The adapter_map does not include an entry for 'aiyagari/HARK_partial', so calling ModelComparison.solve('aiyagari/HARK_partial') will raise an Unknown method error. Add a mapping for this variant or handle partial-equilibrium variants in the factory.

```python
translated = deepcopy(primitives)

# Add method-specific configuration
translated.update(deepcopy(method_config))
```


Copilot AI Jun 26, 2025


[nitpick] After remapping primitives to method-specific names, the original primitive keys remain in 'translated'. Consider removing or isolating original keys for clearer output and to prevent accidental use of unsupported parameters.
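The isolation the reviewer suggests can be achieved by building the translated dict from the method config plus only the remapped keys, so unified primitive names never leak through. A sketch with illustrative key names (not the PR's actual mapping):

```python
from copy import deepcopy

def translate(primitives, key_map, method_config):
    """Remap unified primitive names to method-specific names.

    Only remapped keys and the method config reach the output, so the
    original unified keys cannot be used accidentally downstream.
    Key names below are illustrative, not the PR's schema.
    """
    translated = deepcopy(method_config)
    for unified_key, method_key in key_map.items():
        if unified_key in primitives:
            translated[method_key] = deepcopy(primitives[unified_key])
    return translated

params = translate(
    {"discount_factor": 0.96, "risk_aversion": 2.0},
    {"discount_factor": "DiscFac", "risk_aversion": "CRRA"},
    {"cycles": 0},
)
```

Because the output starts from `method_config` instead of a copy of `primitives`, no post-hoc key deletion is needed.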

Comment on lines +266 to +275
```python
for t in range(n_periods):
    try:
        # Try different forecast function signatures
        try:
            k_forecast[t] = forecast_func(k_series[t], state_series[t])
        except:
            k_forecast[t] = forecast_func(k_series[t])
    except:
        warnings.warn(f"Forecast function failed at period {t}")
        k_forecast[t] = np.nan
```

Copilot AI Jun 26, 2025


[nitpick] The loop over periods to compute forecasts in den_haan_marcet_statistic may be slow for long series. Consider vectorizing the forecast calls or batching them to leverage NumPy for better performance.

Suggested change (replacing the loop above):

```python
try:
    # Vectorized forecast computation
    if forecast_func.__code__.co_argcount == 2:
        k_forecast = np.vectorize(lambda k, s: forecast_func(k, s))(k_series[:-1], state_series[:-1])
    else:
        k_forecast = np.vectorize(lambda k: forecast_func(k))(k_series[:-1])
except Exception as e:
    warnings.warn(f"Forecast function failed: {e}")
    k_forecast = np.full(n_periods, np.nan)
```

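A variant of this idea that avoids both the per-period try/except chain and the fragile `__code__.co_argcount` check (which fails for `functools.partial` objects and other callables without `__code__`) is to detect the rule's arity once with `inspect.signature`. A sketch with names mirroring the snippet (not the PR's code):

```python
import inspect
import warnings
import numpy as np

def forecast_series(forecast_func, k_series, state_series):
    """Evaluate a capital forecasting rule over a simulated series.

    Accepts rules taking (k,) or (k, state); arity is detected once
    up front instead of by trial-and-error inside the loop.
    """
    n_periods = len(k_series)
    k_forecast = np.full(n_periods, np.nan)
    try:
        n_args = len(inspect.signature(forecast_func).parameters)
    except (TypeError, ValueError):
        n_args = 1  # fall back to single-argument calls
    for t in range(n_periods):
        try:
            if n_args >= 2:
                k_forecast[t] = forecast_func(k_series[t], state_series[t])
            else:
                k_forecast[t] = forecast_func(k_series[t])
        except Exception:
            warnings.warn(f"Forecast function failed at period {t}")
    return k_forecast

k = np.array([10.0, 10.5, 11.0])
out = forecast_series(lambda k: 0.9 * k + 1.0, k, np.zeros(3))
```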

- Added methods to save and load baseline solutions for efficient model comparison.
- Introduced functionality to compare multiple methods against a single benchmark without re-solving.
- Enhanced documentation to include baseline management and usage examples.
- Added tests to validate baseline save/load functionality and integration with existing models.
@akshayshanker akshayshanker self-assigned this Sep 24, 2025

@cursor (bot) left a comment


This PR is being reviewed by Cursor Bugbot


```python
warnings.warn(
    "Loaded baseline has different primitives than current model. "
    "This may affect comparison validity."
)
```


Bug: NumPy Array Comparison Fails in Dictionary Check

The primitives dictionary comparison at line 190 can fail with a ValueError when NumPy arrays are present. The != operator doesn't handle NumPy arrays element-wise, leading to an array result that cannot be used in a boolean context.
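One standard fix is to compare dictionary values with `np.array_equal`, which handles both scalars and arrays without producing an ambiguous boolean. A minimal sketch (helper name is illustrative, not the PR's code):

```python
import numpy as np

def primitives_differ(a: dict, b: dict) -> bool:
    """True if two primitives dicts differ, safely handling ndarray values.

    np.array_equal compares scalars and arrays alike, so mixing plain
    floats with grid arrays never raises the ValueError that a bare
    `a != b` dict comparison can trigger.
    """
    if a.keys() != b.keys():
        return True
    return any(not np.array_equal(a[k], b[k]) for k in a)

base = {"beta": 0.96, "grid": np.linspace(0.0, 20.0, 5)}
same = {"beta": 0.96, "grid": np.linspace(0.0, 20.0, 5)}
other = {"beta": 0.95, "grid": np.linspace(0.0, 20.0, 5)}
```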


```python
blocks = self._create_ssj_blocks(params)

# Create the model
self.model = create_model(blocks, name="KrusellSmith")
```


Bug: Missing Import Causes Model Creation Error

The SSJAdapter calls create_model on line 66, but the required import from sequence_jacobian on line 46 is commented out. This will result in a NameError when the solve method attempts to create the model.

Additional Locations (1)


@mnwhite mnwhite moved this to In progress in Issues & PRs Jan 3, 2026