Copilot AI commented Aug 20, 2025

This PR adds a new --dropout_during_inference flag to the run_structure_prediction.py script that enables dropout during inference for improved uncertainty estimation. The flag is supported across all available backends: AlphaFold, AlphaFold3, AlphaLink, and UniFold.

Problem

Currently, there's no way to enable dropout during inference in AlphaPulldown, which limits the ability to obtain uncertainty estimates for predictions. Dropout during inference can provide valuable insights into model confidence and prediction reliability.

Solution

Added a new boolean flag --dropout_during_inference that:

  • Defaults to False to maintain backward compatibility
  • When enabled, sets eval_dropout=True in the respective model configurations
  • Is passed through to all backend setup() functions via the existing parameter passing mechanism
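The flag semantics above can be sketched as follows. This is a minimal illustration of the default-False boolean flag, not the actual diff; the real script defines its flags through the project's existing flag machinery, and the parser shown here is only a stand-in.

```python
import argparse

# Minimal sketch of the flag's semantics (argparse used here purely for
# illustration; the actual run_structure_prediction.py uses its own flag
# definitions).
parser = argparse.ArgumentParser()
parser.add_argument(
    "--dropout_during_inference",
    action="store_true",  # defaults to False, preserving existing behavior
    help="Enable dropout during inference for uncertainty estimation.",
)

args = parser.parse_args([])
print(args.dropout_during_inference)  # False unless the flag is passed
```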

Implementation Details

The implementation follows the pseudocode specified in the issue:

from alphafold.model import config

# <model_name> is a placeholder for an AlphaFold model preset, e.g. "model_1_ptm"
cfg = config.model_config(<model_name>)
cfg.model.global_config.eval_dropout = True

Changes Made

  1. run_structure_prediction.py: Added flag definition and included it in default_model_flags
  2. alphafold_backend.py: Added parameter to setup() and logic to set model_config.model.global_config.eval_dropout = True
  3. alphafold3_backend.py: Added parameter and logic in make_model_config() to set config.global_config.eval_dropout = True
  4. alphalink_backend.py: Added parameter and logic with fallback handling for different config structures
  5. unifold_backend.py: Added parameter and logic with proper error handling and logging
  6. backend/__init__.py: Updated available backends list to include alphalink

Error Handling

The implementation guards each assignment with hasattr() checks and logs a warning when global_config is not available, so the flag degrades gracefully across different model configurations instead of raising an error.

Usage

# Enable dropout during inference for any backend
python alphapulldown/scripts/run_structure_prediction.py \
    --input protein1.fasta \
    --output_directory ./results \
    --data_directory /path/to/weights \
    --features_directory /path/to/features \
    --fold_backend alphafold \
    --dropout_during_inference

Testing

Added comprehensive tests in test/test_dropout_during_inference.py that verify:

  • Flag is properly defined and documented
  • All backend setup() functions accept the parameter
  • Logic is correctly implemented in all backends

The changes are minimal (132 insertions, 2 deletions) and maintain full backward compatibility.



Co-authored-by: DimaMolod <33123184+DimaMolod@users.noreply.github.com>
Copilot AI changed the title [WIP] Add dropout during inference as a new flag for run_structure_prediction.py script. This new flag should be passed to backend.setup for alphafold_backend.py, alphafold3_backend.py (if applicable) and alphalink2_backend.py and set e.g. eval_dropout=True ... Add dropout during inference flag for improved uncertainty estimation across all backends Aug 20, 2025
Copilot AI requested a review from DimaMolod August 20, 2025 13:55
@DimaMolod DimaMolod marked this pull request as ready for review August 20, 2025 13:59
@DimaMolod DimaMolod marked this pull request as draft August 20, 2025 14:42
@DimaMolod DimaMolod closed this Aug 21, 2025