Conversation
Pull request overview
This PR removes default parameter values from loss function constructors and model configuration handling, making all parameters explicitly required through configuration dictionaries. Additionally, it upgrades the Darts library version, adds support for the NHiTS model architecture, enhances prediction processing with a small epsilon clipping value, and implements automatic dataloader worker configuration.
Changes:
- Removed default parameter values from all loss function constructors, requiring explicit configuration
- Replaced `.get()` calls with direct dictionary access in model catalog configuration handling
- Added NHiTS model support with comprehensive configuration parameters
- Updated prediction clipping from 0 to epsilon (1e-8) to prevent zero values
- Added auto-detection of num_workers for dataloader with persistent workers support
- Upgraded Darts dependency from 0.35.0 to 0.40.0
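The epsilon-clipping change in the list above can be sketched as follows. This is an illustrative sketch, not the PR's actual forecaster code: the function name `clip_predictions` and the `EPSILON` constant are assumptions (only the 1e-8 value comes from the PR description).

```python
import numpy as np

EPSILON = 1e-8  # the small positive floor mentioned in the PR description

def clip_predictions(predictions: np.ndarray) -> np.ndarray:
    """Clip predictions to a small positive epsilon instead of zero.

    Clipping at exactly 0 leaves zero values in the output, which can break
    downstream operations such as log-transforms or divisions; clipping at a
    tiny epsilon keeps every value strictly positive.
    """
    return np.clip(predictions, a_min=EPSILON, a_max=None)

preds = np.array([-0.5, 0.0, 0.3])
clipped = clip_predictions(preds)
```

After clipping, the negative and zero entries become `1e-8` while in-range values such as `0.3` pass through unchanged.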
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| `views_r2darts2/utils/loss.py` | Removed default values from all loss function `__init__` methods |
| `views_r2darts2/model/forecaster.py` | Changed prediction clipping from 0 to epsilon; added dataloader worker auto-detection |
| `views_r2darts2/model/catalog.py` | Replaced `.get()` with direct dict access; added NHiTS model support; removed hardcoded defaults |
| `views_r2darts2/manager/model.py` | Removed default values from `.get()` calls for `num_samples` and `mc_dropout` |
| `tests/test_loss.py` | Updated test instantiations to provide required parameters explicitly |
| `tests/test_forecaster.py` | Updated assertions to account for epsilon clipping instead of zero |
| `tests/test_catalog.py` | Extended test fixtures with comprehensive configuration parameters |
| `pyproject.toml` | Updated Darts version from 0.35.0 to 0.40.0; removed deprecated dependencies |
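The dataloader worker auto-detection noted for `forecaster.py` could look roughly like the sketch below. The function name `dataloader_kwargs` and the config key names are assumptions for illustration; the actual implementation in the PR may differ.

```python
import os

def dataloader_kwargs(configs: dict) -> dict:
    """Build DataLoader keyword arguments with auto-detected worker count.

    If the config does not pin num_workers, fall back to the number of
    available CPU cores. persistent_workers only makes sense when there is
    at least one worker process, so it is derived from the final count.
    """
    num_workers = configs.get("num_workers")
    if num_workers is None:
        # os.cpu_count() can return None on exotic platforms; default to 0 then.
        num_workers = os.cpu_count() or 0
    return {
        "num_workers": num_workers,
        "persistent_workers": num_workers > 0,
    }

auto = dataloader_kwargs({})            # auto-detected from CPU count
pinned = dataloader_kwargs({"num_workers": 0})  # explicit 0 is respected
```

Checking `is None` (rather than falsiness) matters here: an explicit `num_workers: 0` in the config should disable worker processes, not trigger auto-detection.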
```diff
  sequence_number,
  max(self.configs["steps"]),
- num_samples=self.configs.get("num_samples", 1),
+ num_samples=self.configs.get("num_samples"),
```
Removing the default value from `.get()` means this will return `None` if `'num_samples'` is not in the config, which may cause runtime errors in the predict method. Either provide a default value or ensure the configuration always contains this key.
```diff
- num_samples=self.configs.get("num_samples"),
+ num_samples=self.configs.get("num_samples", 1),
```
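A minimal illustration of the failure mode described above, using a stand-in `predict` signature (the real Darts `predict` is more complex; this sketch only models its requirement that `num_samples` be a positive integer):

```python
def predict(n: int, num_samples: int = 1) -> list:
    # Stand-in for a Darts-style predict: num_samples must be a positive int.
    if num_samples is None or num_samples < 1:
        raise ValueError("num_samples must be a positive integer")
    return [0.0] * (n * num_samples)

configs = {}  # 'num_samples' missing from the config

# Without a default, .get() silently yields None ...
value = configs.get("num_samples")

# ... and the error only surfaces later, inside the predict call:
failed = False
try:
    predict(3, num_samples=value)
except ValueError:
    failed = True
```

The point of the review comment: `.get("num_samples")` defers the failure to call time with an opaque error, whereas either `.get("num_samples", 1)` or a validated, always-present config key keeps the behavior well-defined.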
```diff
  num_samples=self.configs.get("num_samples"),
  n_jobs=self.configs.get("n_jobs", 1),
- mc_dropout=self.configs.get("mc_dropout", False),
+ mc_dropout=self.configs.get("mc_dropout"),
```
Removing the default value from `.get()` means this will return `None` if `'mc_dropout'` is not in the config, which may cause runtime errors in the predict method. Either provide a default value or ensure the configuration always contains this key.
```diff
- mc_dropout=self.configs.get("mc_dropout"),
+ mc_dropout=self.configs.get("mc_dropout", False),
```
Not sure how far we need to go in removing `.get()`. We certainly need to remove it for architecture-specific and loss-related parameters. But for parameters like `output_chunk_shift` and `random_state`, where we use the default values in most cases, is it necessary? For now this is fine for the purpose of surfacing hidden failures, but we should discuss these parameters in the future.
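One way to frame the trade-off discussed here: parameters that must always be configured can use direct indexing, which fails fast with a `KeyError` instead of silently passing `None` along, while rarely-overridden parameters keep a `.get()` fallback. A hedged sketch (the config keys and values below are illustrative, not taken from the PR):

```python
configs = {"loss": "WeightedMSE", "input_chunk_length": 36}

# Architecture- and loss-specific parameters: fail fast if missing.
loss = configs["loss"]                            # KeyError if absent
input_chunk_length = configs["input_chunk_length"]  # visible, early failure

# Parameters that almost always use library defaults: keep a fallback.
output_chunk_shift = configs.get("output_chunk_shift", 0)
random_state = configs.get("random_state", None)

# Direct indexing surfaces a missing required key immediately:
missing_key_detected = False
try:
    configs["quantiles"]
except KeyError:
    missing_key_detected = True
```

The `KeyError` carries the missing key's name and fires at configuration time, whereas a bare `.get()` defers the problem to wherever the `None` eventually lands.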
xiaolong0728 left a comment
Probably we need to discuss which parameters are really necessary to set explicitly and which are not, but that's not important right now.
No description provided.