Add support for multiple loss functions in AADForest #267

Draft

matwey wants to merge 5 commits into master from aad_loss_function

Conversation

@matwey (Contributor) commented Jul 30, 2025

No description provided.

@matwey matwey force-pushed the aad_loss_function branch from bce7fb6 to e619237 (July 30, 2025 09:45)
@matwey matwey requested a review from Copilot (July 30, 2025 09:46)

This comment was marked as outdated.

@matwey matwey force-pushed the aad_loss_function branch from e619237 to 2ae0b83 (July 30, 2025 09:48)
@matwey matwey requested a review from Copilot (July 30, 2025 09:48)
Copilot AI left a comment

Pull Request Overview

This PR adds support for multiple loss functions to the AADForest class, refactoring the code to prepare for extensibility. The key changes include moving the _q_tau method from AADForest to AADEvaluator and adding a loss parameter to the constructor.

  • Moves _q_tau method from AADForest to AADEvaluator for better encapsulation
  • Adds loss parameter to AADForest constructor with validation
  • Updates method calls to use the evaluator's _q_tau method
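The refactoring described above can be sketched roughly as follows. This is an illustrative outline, not the actual coniferest code: the class and method names (AADEvaluator, AADForest, _q_tau, LOSSES) follow the PR description, but the bodies, the constructor signatures, the tau default, and the evaluator attribute are assumptions.

```python
# Hypothetical sketch of the refactoring, assuming numpy-based scoring.
import numpy as np

LOSSES = ("hinge",)  # only "hinge" is supported by this PR


class AADEvaluator:
    def __init__(self, tau=0.97):
        self.tau = tau

    def _q_tau(self, scores):
        # tau-quantile of the current anomaly scores; the AAD objective
        # pushes labeled anomalies above this threshold, nominals below it
        return np.quantile(scores, self.tau)


class AADForest:
    def __init__(self, loss="hinge"):
        # validation added by this PR; callers now reach _q_tau through
        # the evaluator instead of the forest itself
        if loss not in LOSSES:
            raise ValueError(f"loss is not one of {', '.join(LOSSES)}.")
        self.loss = loss
        self.evaluator = AADEvaluator()
```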

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File | Description
src/coniferest/aadforest.py | Refactors _q_tau method to AADEvaluator, adds loss parameter with validation
tests/test_aadforest.py | Updates test to call _q_tau through evaluator instead of forest directly

@@ -231,24 +248,20 @@ def __init__(
else:
raise ValueError(f"map_value is neither a callable nor one of {', '.join(MAP_VALUES.keys())}.")

Copilot AI commented Jul 30, 2025

[nitpick] The LOSSES list contains only one item, making the validation somewhat redundant. Consider either removing this validation until multiple loss functions are implemented, or add a comment explaining the future extensibility plan.

Suggested change
# Currently, only the "hinge" loss function is supported.
# This list is designed for future extensibility to include additional loss functions.

if loss not in LOSSES:
raise ValueError(f"loss is not one of {', '.join(LOSSES)}.")

self.loss = loss
Copilot AI commented Jul 30, 2025

The loss parameter is stored but never used in the implementation. Consider either implementing the loss function logic or adding a comment explaining how this will be used in future implementations.

Suggested change
self.loss = loss
self.loss = loss
# The `loss` parameter is currently not used in the implementation.
# It is reserved for future extensions where different loss functions
# may be incorporated into the anomaly detection logic.
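For context on how the reserved loss parameter might eventually be used: in AAD-style active anomaly detection, a hinge loss commonly penalizes labeled anomalies that score below the tau-quantile threshold q_tau and labeled nominals that score above it. A minimal sketch of one common formulation, not the actual coniferest implementation:

```python
# Assumed convention: label +1 = anomaly, -1 = nominal; anomalies should
# score above q_tau, nominals below it.
import numpy as np


def aad_hinge_loss(scores, labels, q_tau):
    """Hinge-style AAD loss: a positive margin marks a violation
    (anomaly below q_tau, or nominal above q_tau)."""
    margins = labels * (q_tau - scores)
    return np.maximum(0.0, margins).sum()
```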


codspeed-hq bot commented Jul 30, 2025

CodSpeed Performance Report

Merging #267 will improve performance by ×18

Comparing aad_loss_function (2b74f34) with master (c78dece) [1]

Note: pytest-codspeed was upgraded to 4.0.0. This is a breaking change and may slightly affect the current performance baseline; it also brings new features and improved result stability. See the CodSpeed releases page for details.

Summary

⚡ 46 improvements
✅ 5 untouched benchmarks

Benchmarks breakdown

Benchmark BASE HEAD Change
test_benchmark_fit_known 266.8 ms 205.8 ms +29.65%
test_benchmark_loss_gradient[1024-1024] 4.4 ms 3.9 ms +13.9%
test_benchmark_loss_gradient[1024-65536] 4.4 ms 3.9 ms +13.85%
test_benchmark_feature_signature[128] 555.7 µs 409.1 µs +35.84%
test_benchmark_feature_signature[2] 563.4 µs 414 µs +36.06%
test_benchmark_fit_sklearn[1024] 5.3 s 4.5 s +17.6%
test_benchmark_fit_sklearn[128] 668.1 ms 568.5 ms +17.51%
test_benchmark_score[1024] 8.5 ms 7 ms +22.72%
test_benchmark_score[1048576] 8.4 s 6.9 s +21.5%
test_benchmark_score_samples[1-128-1024] 9.8 ms 8.3 ms +18.46%
test_benchmark_score_samples[1-128-1048576] 9.7 s 8.3 s +17.46%
test_benchmark_score_samples[1-128-1] 264.8 µs 125.7 µs ×2.1
test_benchmark_score_samples[1-128-32] 614.1 µs 456.3 µs +34.6%
test_benchmark_score_samples[1-256-1024] 18 ms 15.1 ms +19.68%
test_benchmark_score_samples[1-256-1048576] 17.9 s 15 s +19.39%
test_benchmark_score_samples[1-256-1] 276.2 µs 158.2 µs +74.56%
test_benchmark_score_samples[1-256-32] 978.7 µs 773.6 µs +26.51%
test_benchmark_score_samples[1-64-1024] 5.6 ms 4.8 ms +17.86%
test_benchmark_score_samples[1-64-1048576] 5.5 s 4.7 s +15.62%
test_benchmark_score_samples[1-64-1] 223.6 µs 109.7 µs ×2
... ... ... ... ...

ℹ️ Only the first 20 benchmarks are displayed. Go to the app to view all benchmarks.

Footnotes

  1. No successful run was found on master (b9ba72e) during the generation of this report, so c78dece was used instead as the comparison base. There might be some changes unrelated to this pull request in this report.

@matwey matwey force-pushed the aad_loss_function branch from 789beb5 to 830f31a (July 31, 2025 11:31)
Reduce amount of redundant code by moving loss-function related things to
AADEvaluator.
@matwey matwey force-pushed the aad_loss_function branch from 2a10da0 to 2b74f34 (August 6, 2025 08:18)