
Conversation

@Rive-001
Collaborator

No description provided.

@Rive-001 Rive-001 self-assigned this Dec 19, 2025
@gemini-code-assist

Summary of Changes

Hello @Rive-001, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the cell-eval tool by enabling users to define a custom False Discovery Rate (FDR) threshold. This provides greater control over the statistical significance criteria used in differential expression (DE) analyses, ensuring that evaluation metrics for single-cell perturbation predictions can be tailored to specific analytical needs. The change involves adding a new CLI argument, integrating the threshold into the MetricsEvaluator, and updating relevant metric computation functions to respect the user-defined value.

Highlights

  • User-Defined FDR Threshold: Introduced a new command-line argument --fdr-threshold to allow users to specify a custom False Discovery Rate (FDR) threshold for differential expression (DE) significance calculations. The default value is 0.05.
  • Centralized FDR Application: The specified FDR threshold is now consistently applied to a comprehensive list of DE metrics, including various Spearman, direction match, gene recall, count, PR AUC, ROC AUC, overlap, and precision metrics. This is managed through a new internal helper function _build_de_metric_configs.
  • Configurable AUC Calculations: The compute_pr_auc, compute_roc_auc, and compute_generic_auc functions in _de.py have been updated to accept and use the fdr_threshold parameter, replacing a previously hardcoded value; a brief sketch of this pattern follows the list.
  • Version Update: The project version in pyproject.toml has been incremented from 0.6.6 to 0.6.7.
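
A minimal sketch of that last pattern: a significance cutoff that was previously hardcoded becomes a caller-supplied parameter defaulting to 0.05. The function name, signature, and use of scikit-learn below are illustrative assumptions, not cell-eval's actual _de.py implementation.

# Illustrative sketch only -- not cell-eval's actual compute_pr_auc.
import numpy as np
from sklearn.metrics import average_precision_score

def compute_pr_auc_sketch(
    pvals_adj: np.ndarray,        # adjusted p-values from the ground-truth DE results
    pred_scores: np.ndarray,      # predicted per-gene ranking scores
    fdr_threshold: float = 0.05,  # previously a hardcoded 0.05
) -> float:
    # Genes below the FDR threshold form the positive class for the PR curve.
    labels = (pvals_adj < fdr_threshold).astype(int)
    return float(average_precision_score(labels, pred_scores))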


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature                Command               Description
Code Review            /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary   /gemini summary       Provides a summary of the current pull request in its current state.
Comment                @gemini-code-assist   Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                   /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
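
For example, a typical layout might look like the following (file names per Google's public documentation; verify against the linked instructions):

.gemini/
├── config.yaml      # repository-level Gemini Code Assist settings
└── styleguide.md    # custom code review style guide (e.g. PEP-8 conventions)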

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.


@gemini-code-assist bot left a comment


Code Review

This pull request introduces a user-definable FDR threshold for DE metrics, which is a great feature. The implementation is mostly solid, adding the --fdr-threshold CLI argument and propagating it to the relevant metrics.

My review includes two main suggestions for improvement:

  1. Removing duplicated logic in the CLI script (_run.py) to improve maintainability. The MetricsEvaluator is already equipped to handle the FDR threshold.
  2. Refactoring the hardcoded list of metrics in _evaluator.py to be dynamically generated. This will make the system more robust and easier to extend with new metrics in the future.

Overall, these are good changes that enhance the flexibility of the evaluation tool.

Comment on lines +151 to +173
# Add fdr_threshold to all DE metrics that accept it
de_metrics_with_fdr = [
    "de_spearman_sig",
    "de_direction_match",
    "de_spearman_lfc_sig",
    "de_sig_genes_recall",
    "de_nsig_counts",
    "pr_auc",
    "roc_auc",
    # overlap/precision metrics
    "overlap_at_N",
    "overlap_at_50",
    "overlap_at_100",
    "overlap_at_200",
    "overlap_at_500",
    "precision_at_N",
    "precision_at_50",
    "precision_at_100",
    "precision_at_200",
    "precision_at_500",
]
for metric_name in de_metrics_with_fdr:
    metric_kwargs.setdefault(metric_name, {})["fdr_threshold"] = args.fdr_threshold


medium

This block duplicates logic for setting the fdr_threshold that is now handled within the MetricsEvaluator class (in _build_de_metric_configs). To avoid code duplication and improve maintainability, this block should be removed. The fdr_threshold from the command-line arguments should be passed directly to the MetricsEvaluator constructor instead.

After removing this, you'll need to update the MetricsEvaluator instantiations in this file to include fdr_threshold=args.fdr_threshold. I cannot suggest this change directly as it is outside the diff.
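
For illustration, the suggested wiring might look like the following; everything except fdr_threshold=args.fdr_threshold is a placeholder, since the actual MetricsEvaluator instantiations are outside this diff:

# Hypothetical sketch -- argument names other than fdr_threshold are placeholders.
evaluator = MetricsEvaluator(
    adata_pred=adata_pred,             # placeholder for the existing arguments
    adata_real=adata_real,             # placeholder for the existing arguments
    fdr_threshold=args.fdr_threshold,  # new: applied internally via _build_de_metric_configs
)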

Comment on lines +167 to +189
def _build_de_metric_configs(fdr_threshold: float) -> dict[str, dict[str, Any]]:
    """Build metric configs with fdr_threshold for all DE metrics that accept it."""
    de_metrics_with_fdr = [
        "de_spearman_sig",
        "de_direction_match",
        "de_spearman_lfc_sig",
        "de_sig_genes_recall",
        "de_nsig_counts",
        "pr_auc",
        "roc_auc",
        # overlap/precision metrics
        "overlap_at_N",
        "overlap_at_50",
        "overlap_at_100",
        "overlap_at_200",
        "overlap_at_500",
        "precision_at_N",
        "precision_at_50",
        "precision_at_100",
        "precision_at_200",
        "precision_at_500",
    ]
    return {metric: {"fdr_threshold": fdr_threshold} for metric in de_metrics_with_fdr}


medium

Instead of hardcoding the list of metrics that accept fdr_threshold, you can generate it dynamically by inspecting the signatures of the registered DE metrics. This would make the code more robust and easier to maintain, as you wouldn't need to update this list manually when adding or modifying metrics.

Here's a suggested implementation that uses the inspect module. You'll also need to add the following imports at the top of the file:

import inspect
from .metrics import metrics_registry
from ._types import MetricType

def _build_de_metric_configs(fdr_threshold: float) -> dict[str, dict[str, Any]]:
    """Build metric configs with fdr_threshold for all DE metrics that accept it."""
    de_metrics_with_fdr = []
    for metric_name in metrics_registry.list_metrics(MetricType.DE):
        metric_info = metrics_registry.get_metric(metric_name)
        func_to_inspect = metric_info.func
        if metric_info.is_class:
            func_to_inspect = func_to_inspect.__init__

        sig = inspect.signature(func_to_inspect)
        if "fdr_threshold" in sig.parameters:
            de_metrics_with_fdr.append(metric_name)

    return {metric: {"fdr_threshold": fdr_threshold} for metric in de_metrics_with_fdr}
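
As a self-contained illustration of this signature-inspection pattern, here is a runnable toy version that swaps cell-eval's metrics_registry (whose exact interface is assumed in the snippet above) for a plain dict:

import inspect

def de_spearman_sig(pred, real, fdr_threshold: float = 0.05): ...
def de_overall(pred, real): ...

toy_registry = {"de_spearman_sig": de_spearman_sig, "de_overall": de_overall}

with_fdr = [
    name
    for name, func in toy_registry.items()
    if "fdr_threshold" in inspect.signature(func).parameters
]
print(with_fdr)  # ['de_spearman_sig']

One tradeoff worth noting: dynamic discovery keeps the list in sync automatically, at the cost of making the affected metric set less explicit during code review.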
