
Add a new algorithm <DML> and update requirements #19

Open
SepineTam wants to merge 1 commit into FromCSUZhou:main from SepineTam:main

Conversation


@SepineTam SepineTam commented Jun 19, 2025

Add a new algorithm based on EconML.

Pull Request Checklist

  • Description: Briefly describe the changes in this pull request.
  • Changelog: Ensure a changelog entry following the format of Keep a Changelog is added at the bottom of the PR description.
  • Documentation: Have you updated relevant documentation?
  • Dependencies: Are there any new dependencies? Have you updated the dependency versions in the documentation?

Description

This pull request adds a new algorithm, Linear_Double_Machine_Learning, implemented with EconML, to the file agent/metagpt/tools/libs/econometric_algorithm.py.

Changelog Entry

Added

  • Add a new algorithm, Linear_Double_Machine_Learning, to agent/metagpt/tools/libs/econometric_algorithm.py.
  • Update the requirements to add the econml dependency (see the snippet after this changelog).

Fixed

  • [List any fixes or corrections]

Changed

  • [List any changes or updates]

Removed

  • [List any removed features or files]
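
The requirements change referenced in the Added section amounts to a single new line in requirements.txt; the pinned version below is taken from the file summary in the review further down:

econml==0.15.1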

pytest (proposed unit tests)

"""
Unit tests for `Linear_Double_Machine_Learning`.

Conventions:
* pytest fixtures for data generation
* Assertions (no print statements)
* English docstrings and variable names
"""

from __future__ import annotations

import numpy as np
import pandas as pd
import pytest
from econml.dml import LinearDML
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression

from metagpt.tools.libs.econometric_algorithm import Linear_Double_Machine_Learning


# -----------------------------------------------------------------------------#
# Helpers & fixtures
# -----------------------------------------------------------------------------#
def _make_binary_data(n_samples: int = 1_000, seed: int = 42):
    """Create synthetic data with a known binary treatment effect."""
    rng = np.random.default_rng(seed)

    x1 = rng.normal(size=n_samples)
    x2 = rng.normal(size=n_samples)
    x3 = rng.binomial(1, 0.5, size=n_samples)

    propensity = 1 / (1 + np.exp(-(0.5 * x1 + 0.3 * x2 + 0.2 * x3)))
    t = rng.binomial(1, propensity)

    true_ate = 2.0
    y = (
        1
        + 0.5 * x1
        + 0.3 * x2
        + 0.2 * x3
        + true_ate * t
        + rng.normal(size=n_samples)
    )

    covars = pd.DataFrame({"X1": x1, "X2": x2, "X3": x3})
    return pd.Series(y), pd.Series(t), covars, true_ate


@pytest.fixture(scope="module")
def binary_data():
    return _make_binary_data(n_samples=500, seed=42)


# -----------------------------------------------------------------------------#
# Tests
# -----------------------------------------------------------------------------#
def test_basic_ate_estimation(binary_data):
    """The estimated ATE should be close to the true ATE."""
    y, t, x, true_ate = binary_data
    est_ate = Linear_Double_Machine_Learning(
        dependent_variable=y,
        treatment_variable=t,
        covariate_variables=x,
        random_state=42,
    )
    assert est_ate == pytest.approx(true_ate, rel=0.1)


def test_model_object_return(binary_data):
    """Requesting the final model should return an EconML `LinearDML` instance."""
    y, t, x, _ = binary_data
    model = Linear_Double_Machine_Learning(
        dependent_variable=y,
        treatment_variable=t,
        covariate_variables=x,
        target_type="final_model",
        random_state=42,
    )
    assert isinstance(model, LinearDML)


def test_custom_nuisance_models(binary_data):
    """Custom nuisance models should run without error and yield a reasonable ATE."""
    y, t, x, true_ate = binary_data
    est_ate = Linear_Double_Machine_Learning(
        dependent_variable=y,
        treatment_variable=t,
        covariate_variables=x,
        model_y=RandomForestRegressor(n_estimators=50, random_state=42),
        model_t=LogisticRegression(random_state=42, max_iter=1_000),
        random_state=42,
    )
    assert est_ate == pytest.approx(true_ate, rel=0.15)


def test_continuous_treatment():
    """The estimator should handle a continuous treatment variable."""
    rng = np.random.default_rng(42)
    n = 500
    x1, x2 = rng.normal(size=(2, n))
    t_cont = rng.normal(0.5 * x1 + 0.3 * x2, 1.0)
    true_coef = 1.5
    y_cont = 1 + 0.5 * x1 + 0.3 * x2 + true_coef * t_cont + rng.normal(size=n)
    x_df = pd.DataFrame({"X1": x1, "X2": x2})

    est_ate = Linear_Double_Machine_Learning(
        dependent_variable=pd.Series(y_cont),
        treatment_variable=pd.Series(t_cont),
        covariate_variables=x_df,
        random_state=42,
    )
    assert est_ate == pytest.approx(true_coef, rel=0.2)


def test_no_covariates():
    """The function should still work when no covariates are supplied."""
    rng = np.random.default_rng(42)
    n = 500
    t = rng.binomial(1, 0.5, size=n)
    true_ate = 3.0
    y = 2 + true_ate * t + rng.normal(size=n)

    est_ate = Linear_Double_Machine_Learning(
        dependent_variable=pd.Series(y),
        treatment_variable=pd.Series(t),
        covariate_variables=None,
        random_state=42,
    )
    assert est_ate == pytest.approx(true_ate, rel=0.25)


@pytest.mark.parametrize("n_splits", [2, 3, 5])
def test_cv_splits(binary_data, n_splits):
    """ATE estimates should vary only modestly across different CV splits."""
    y, t, x, true_ate = binary_data
    est_ate = Linear_Double_Machine_Learning(
        dependent_variable=y,
        treatment_variable=t,
        covariate_variables=x,
        n_splits=n_splits,
        random_state=42,
    )
    assert est_ate == pytest.approx(true_ate, rel=0.15)

Summary by CodeRabbit

  • New Features
    • Introduced a new econometric analysis tool for estimating Average Treatment Effect (ATE) using double machine learning.
  • Chores
    • Added a new package dependency to support advanced econometric methods.

Add a new algorithm <DML> based on EconML.

coderabbitai bot commented Jun 19, 2025

Walkthrough

A new econometric tool function, Linear_Double_Machine_Learning, has been added to estimate the Average Treatment Effect (ATE) using the EconML library's LinearDML estimator. The function is registered as an "econometric algorithm". Additionally, the econml package has been added as a dependency in requirements.txt.
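
The new function's source is not reproduced in this conversation. As a rough orientation only, a minimal sketch of such a wrapper is shown below; the signature and parameter names are inferred from the tests above and the review excerpts further down, while the defaults and the choice of a classifier for binary treatments are assumptions (per the review, the real function is also registered with @register_tool(tags=["econometric algorithm"])).

from econml.dml import LinearDML
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor


def Linear_Double_Machine_Learning(dependent_variable,
                                   treatment_variable,
                                   covariate_variables=None,
                                   model_y=None,
                                   model_t=None,
                                   n_splits=3,
                                   random_state=0,
                                   target_type="ATE"):
    """Estimate the ATE of `treatment_variable` on `dependent_variable` via LinearDML."""
    # Treat the treatment as discrete when it takes only the values 0/1.
    discrete_treat = set(treatment_variable.dropna().unique()) <= {0, 1}

    # Default nuisance models; a classifier is assumed for a binary treatment.
    if model_y is None:
        model_y = RandomForestRegressor(n_estimators=100, random_state=random_state)
    if model_t is None:
        model_t = (RandomForestClassifier(n_estimators=100, random_state=random_state)
                   if discrete_treat else
                   RandomForestRegressor(n_estimators=100, random_state=random_state))

    # Cross-fitted double machine learning with a linear final stage.
    estimator = LinearDML(model_y=model_y,
                          model_t=model_t,
                          discrete_treatment=discrete_treat,
                          cv=n_splits,
                          random_state=random_state)
    estimator.fit(dependent_variable, treatment_variable, X=covariate_variables)

    if target_type == "final_model":
        return estimator
    return estimator.ate(X=covariate_variables)

The target_type switch mirrors the sequence diagram below: either the averaged effect or the fitted estimator is handed back to the caller.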

Changes

  • agent/metagpt/tools/libs/econometric_algorithm.py: Added Linear_Double_Machine_Learning function using EconML's LinearDML estimator for ATE estimation.
  • requirements.txt: Added econml==0.15.1 as a new dependency.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Linear_Double_Machine_Learning
    participant EconML_LinearDML

    User->>Linear_Double_Machine_Learning: Provide variables and parameters
    Linear_Double_Machine_Learning->>Linear_Double_Machine_Learning: Prepare data and set defaults
    Linear_Double_Machine_Learning->>EconML_LinearDML: Initialize and fit estimator
    EconML_LinearDML-->>Linear_Double_Machine_Learning: Return fitted model
    Linear_Double_Machine_Learning->>User: Return ATE or estimator based on target_type
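
In code, that flow corresponds to a call along these lines; this is a hypothetical caller, with parameter names and the target_type option taken from the tests above, and synthetic data mirroring the test fixture (true effect of 2.0):

import numpy as np
import pandas as pd

from metagpt.tools.libs.econometric_algorithm import Linear_Double_Machine_Learning

rng = np.random.default_rng(0)
n = 500
x = pd.DataFrame({"X1": rng.normal(size=n), "X2": rng.normal(size=n)})
t = pd.Series(rng.binomial(1, 0.5, size=n))                            # binary treatment
y = 1 + 0.5 * x["X1"] + 0.3 * x["X2"] + 2.0 * t + rng.normal(size=n)   # true ATE = 2.0

# Default behaviour: return the estimated Average Treatment Effect.
ate = Linear_Double_Machine_Learning(dependent_variable=y,
                                     treatment_variable=t,
                                     covariate_variables=x,
                                     random_state=0)

# Alternatively, ask for the fitted LinearDML estimator itself.
model = Linear_Double_Machine_Learning(dependent_variable=y,
                                       treatment_variable=t,
                                       covariate_variables=x,
                                       target_type="final_model",
                                       random_state=0)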

Poem

In a field where data and numbers dwell,
A new tool emerges, with stories to tell.
Double machine learning, with ATE in sight,
Powered by EconML, it gets the math right.
Requirements updated, the toolkit renewed—
The rabbit hops onward, with stats pursued!
🐇📊✨



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🔭 Outside diff range comments (1)
agent/metagpt/tools/libs/econometric_algorithm.py (1)

3-3: Remove the hardcoded absolute path.

This hardcoded path is environment-specific and will cause issues in other environments or deployments.

-sys.path.append("/home/tianyang/ChatInterpreter/ML_Assistant")
🧹 Nitpick comments (5)
agent/metagpt/tools/libs/econometric_algorithm.py (5)

14-14: Remove unused imports.

Static analysis correctly identifies that LogisticRegression and Lasso are imported but never used in the code.

-from sklearn.linear_model import LogisticRegression, Lasso

1257-1264: Consider reducing the number of function parameters.

The function has 8 parameters, which exceeds typical guidelines and may impact readability. Consider grouping related parameters into a configuration object or using kwargs for optional parameters.

Example refactor approach:

@register_tool(tags=["econometric algorithm"])
def Linear_Double_Machine_Learning(dependent_variable,
                                   treatment_variable, 
                                   covariate_variables=None,
                                   **kwargs):
    # Extract parameters with defaults
    model_y = kwargs.get('model_y', None)
    model_t = kwargs.get('model_t', None)
    n_splits = kwargs.get('n_splits', 3)
    random_state = kwargs.get('random_state', 0)
    target_type = kwargs.get('target_type', "ATE")

1294-1297: Improve default model instantiation.

Consider using a consistent approach for default model creation and ensure proper isolation between function calls.

-    if model_y is None:
-        model_y = RandomForestRegressor(n_estimators=100, random_state=random_state)
-    if model_t is None:
-        model_t = RandomForestRegressor(n_estimators=100, random_state=random_state)
+    if model_y is None:
+        model_y = RandomForestRegressor(n_estimators=100, random_state=random_state)
+    if model_t is None:
+        # Use different random state for treatment model to avoid correlation
+        model_t = RandomForestRegressor(n_estimators=100, random_state=random_state + 1)

1299-1299: Enhance binary treatment detection logic.

The current logic correctly identifies binary treatments, but consider adding more robust validation and clearer variable naming.

-    discrete_treat = set(treatment_variable.dropna().unique()) <= {0, 1}
+    # Check if treatment is binary (contains only 0 and 1 values)
+    unique_values = set(treatment_variable.dropna().unique())
+    discrete_treat = unique_values <= {0, 1} and len(unique_values) == 2

1313-1316: Remove unnecessary else clause.

The else clause after a return statement is redundant and can be simplified.

     if target_type == "final_model":
         return estimator
-    else:
-        return ate
+    return ate
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f554cd5 and 7992042.

📒 Files selected for processing (2)
  • agent/metagpt/tools/libs/econometric_algorithm.py (2 hunks)
  • requirements.txt (1 hunks)
🧰 Additional context used
🪛 Ruff (0.11.9)
agent/metagpt/tools/libs/econometric_algorithm.py

14-14: sklearn.linear_model.LogisticRegression imported but unused

Remove unused import

(F401)


14-14: sklearn.linear_model.Lasso imported but unused

Remove unused import

(F401)

🪛 Pylint (3.3.7)
agent/metagpt/tools/libs/econometric_algorithm.py

[refactor] 1257-1257: Too many arguments (8/5)

(R0913)


[refactor] 1257-1257: Too many positional arguments (8/5)

(R0917)


[refactor] 1313-1316: Unnecessary "else" after "return", remove the "else" and de-indent the code inside it

(R1705)

🔇 Additional comments (2)
requirements.txt (1)

92-92: Verify the EconML package version and security status.

Please ensure that version 0.15.1 is current and free from known vulnerabilities.

#!/bin/bash
# Description: Check EconML package versions and security advisories

# Check PyPI for latest version
curl -s https://pypi.org/pypi/econml/json | jq '.info.version'

# Check for security advisories
gh api graphql -f query='
{
  securityVulnerabilities(first: 5, ecosystem: PIP, package: "econml") {
    nodes {
      advisory {
        summary
        severity
        publishedAt
      }
      vulnerableVersionRange
      firstPatchedVersion {
        identifier
      }
    }
  }
}'
agent/metagpt/tools/libs/econometric_algorithm.py (1)

1257-1316: LGTM! Well-implemented econometric algorithm.

The Linear_Double_Machine_Learning function is well-structured and correctly implements the EconML LinearDML estimator. The function properly:

  • Handles input validation and type conversion
  • Provides sensible defaults for ML models
  • Correctly detects binary vs continuous treatments
  • Follows the existing pattern of other econometric functions in the file
  • Includes appropriate documentation

The implementation aligns well with the existing codebase patterns and provides a valuable addition to the econometric toolkit.
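
Because target_type="final_model" hands back the fitted LinearDML object, a caller can also obtain inference beyond the point estimate. The lines below are a hypothetical follow-up that reuses model and x from the usage example after the sequence diagram and relies on standard EconML methods:

# `model` is the fitted LinearDML estimator and `x` the covariate DataFrame
# from the usage example above (assumed context).
lower, upper = model.ate_interval(X=x, alpha=0.05)  # 95% confidence interval for the ATE
per_row_effects = model.effect(x)                   # heterogeneous effect estimate per row of x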

@SepineTam
Author

Hi @FromCSUZhou, I noticed you've been committing code daily on GitHub; it looks nice! This PR has been open for almost two months now. I'm not sure whether it was overlooked because of your busy schedule or whether you have other plans for this project. I hope you can take some time to look at it. Any feedback would be great; even if you say it belongs in the trash bin, at least I'll know what to do next.
