Conversation
Pull Request Overview
Enhances the ChessSpecificFeatures module to respect model-wide normalization and activation configurations by propagating these settings from the main model config through the PolicyValueNet constructor.
- Adds `norm` and `activation` parameters to the `ChessSpecificFeatures` constructor with appropriate defaults
- Updates `PolicyValueNet` to forward the configured norm/activation settings to the chess-specific layers
- Adds comprehensive test coverage verifying the chess feature extractor honors the chosen settings while preserving output shapes
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| azchess/model/resnet.py | Updated ChessSpecificFeatures constructor to accept norm/activation parameters and modified PolicyValueNet to pass these from config |
| tests/test_chess_features.py | Added parametrized test verifying chess features respect norm and activation config settings |
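The propagation described in the table could look like the following minimal sketch. All signatures and config keys here are assumptions for illustration; only the class names `ChessSpecificFeatures` and `PolicyValueNet` come from the PR.

```python
# Sketch of config propagation: PolicyValueNet forwards model-wide
# norm/activation settings to the chess-specific feature extractor
# instead of letting it hard-code its own. Names other than the two
# classes are hypothetical.

class ChessSpecificFeatures:
    def __init__(self, channels: int, norm: str = "batch", activation: str = "relu"):
        # Defaults stand in for the "appropriate defaults" the overview mentions.
        self.channels = channels
        self.norm = norm
        self.activation = activation

class PolicyValueNet:
    def __init__(self, config: dict):
        # Forward the configured settings rather than fixed values.
        self.chess_features = ChessSpecificFeatures(
            channels=config.get("channels", 256),
            norm=config.get("norm", "batch"),
            activation=config.get("activation", "relu"),
        )
```

With this shape, a config such as `{"norm": "group", "activation": "silu"}` reaches the chess-specific layers unchanged, which is what the new parametrized test verifies.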
```python
self.piece_square_tables = piece_square_tables
# ...
activation_cls = nn.SiLU if activation == "silu" else nn.ReLU
```
The activation selection logic only handles 'silu' and defaults to ReLU for all other values. Consider adding explicit validation or support for other common activations to make the behavior more predictable.
Suggested change:

```diff
-activation_cls = nn.SiLU if activation == "silu" else nn.ReLU
+activation_map = {
+    "relu": nn.ReLU,
+    "silu": nn.SiLU,
+    "gelu": nn.GELU,
+    "tanh": nn.Tanh,
+    "leaky_relu": nn.LeakyReLU,
+    "elu": nn.ELU,
+}
+if activation.lower() not in activation_map:
+    raise ValueError(
+        f"Unsupported activation '{activation}'. Supported activations are: {list(activation_map.keys())}"
+    )
+activation_cls = activation_map[activation.lower()]
```
```diff
 self.pst_conv = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
-self.pst_norm = _norm(channels)
+self.pst_norm = _norm(channels, norm)
```
The _norm function is called without validating the norm parameter first. Consider adding validation in the constructor to fail fast if an unsupported norm type is provided, rather than letting the error bubble up from _norm.
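A fail-fast check of the kind suggested here could be sketched as follows. The helper name and the supported set are hypothetical, chosen to mirror common PyTorch norm layers, not taken from the PR:

```python
# Hypothetical fail-fast validation for the norm parameter, run in the
# constructor before any _norm(...) call so an unsupported value raises
# immediately with a clear message.
_SUPPORTED_NORMS = {"batch", "group", "layer", "instance"}

def validate_norm(norm: str) -> str:
    """Return the normalized (lowercased) norm name or raise ValueError."""
    norm_key = norm.lower()
    if norm_key not in _SUPPORTED_NORMS:
        raise ValueError(
            f"Unsupported norm '{norm}'. Supported norms are: {sorted(_SUPPORTED_NORMS)}"
        )
    return norm_key
```

The constructor would then call `norm = validate_norm(norm)` once, and every later `_norm(channels, norm)` call is guaranteed a known value.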
```diff
 self.interaction_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
-self.interaction_norm = _norm(channels)
+self.interaction_norm = _norm(channels, norm)
```
The _norm function is called without validating the norm parameter first. Consider adding validation in the constructor to fail fast if an unsupported norm type is provided, rather than letting the error bubble up from _norm.
This PR is being reviewed by Cursor Bugbot
```diff
-self.pst_norm = _norm(channels)
+self.pst_norm = _norm(channels, norm)
+self.pst_activation = activation_cls(inplace=True)
```
Bug: Conditional Attribute Initialization Causes API Inconsistency

The `pst_activation` attribute in `ChessSpecificFeatures` is only initialized when `piece_square_tables` is `True`. This conditional setup can cause an `AttributeError` if `piece_square_tables` is `False` and code attempts to access it, creating an inconsistent API compared to `interaction_activation`.
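One way to keep the attribute always defined is an `nn.Identity` fallback. This is a reviewer's sketch assuming the structure implied by the diff; the class body and default arguments are illustrative, not the PR's actual code:

```python
import torch.nn as nn

class ChessSpecificFeaturesSketch(nn.Module):
    """Sketch: pst_activation is always defined, regardless of the flag."""

    def __init__(self, channels: int = 8, piece_square_tables: bool = True,
                 activation: str = "relu"):
        super().__init__()
        activation_cls = nn.SiLU if activation == "silu" else nn.ReLU
        self.piece_square_tables = piece_square_tables
        if piece_square_tables:
            self.pst_conv = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
            self.pst_activation = activation_cls(inplace=True)
        else:
            # Keep the attribute present so the API stays consistent with
            # interaction_activation; Identity is a no-op in forward().
            self.pst_activation = nn.Identity()
```

Accessing `pst_activation` then never raises `AttributeError`, and the forward pass can apply it unconditionally.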
Summary

- Updated `ChessSpecificFeatures` to accept norm and activation settings from the model config
- Updated `PolicyValueNet` to forward the configured norm/activation to the chess-specific layers

Testing
https://chatgpt.com/codex/tasks/task_e_68e4127a93c08323906d16112e5e70f5