Sourcery Starbot ⭐ refactored rodrigosnader/deepdow#1
SourceryAI wants to merge 1 commit into rodrigosnader:master
Conversation
```diff
-        weights = ivols / ivols.sum(dim=1, keepdim=True)
-
-        return weights
+        return ivols / ivols.sum(dim=1, keepdim=True)
```
Function InverseVolatility.__call__ refactored with the following changes:
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
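This `inline-immediately-returned-variable` pattern recurs throughout the PR. A minimal sketch of the before/after shapes, using a hypothetical inverse-volatility helper on plain Python lists rather than the real torch tensors:

```python
# Hypothetical stand-in for the refactored code; deepdow operates on
# torch tensors, this illustration uses plain lists.

def inverse_vol_weights_before(vols):
    """Before: a temporary is assigned and then immediately returned."""
    ivols = [1.0 / v for v in vols]
    total = sum(ivols)
    weights = [iv / total for iv in ivols]  # assigned here ...
    return weights                          # ... returned right away

def inverse_vol_weights_after(vols):
    """After: the expression is returned directly."""
    ivols = [1.0 / v for v in vols]
    total = sum(ivols)
    return [iv / total for iv in ivols]
```

Both versions compute identical weights; the refactor only removes the throwaway name.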
```diff
-        if not (len(stats['lookback'].unique()) == 1 and len(stats['model'].unique()) == 1):
+        if (
+            len(stats['lookback'].unique()) != 1
+            or len(stats['model'].unique()) != 1
+        ):
```
Function EarlyStoppingCallback.on_epoch_end refactored with the following changes:
- Simplify logical expression using De Morgan identities (`de-morgan`)
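The De Morgan identity behind this rewrite: `not (A and B)` is equivalent to `not A or not B`, which lets the negation be pushed onto each equality check as `!=`. A sketch with made-up function names and inputs (the real code checks columns of a pandas DataFrame):

```python
# Hypothetical helper mirroring the refactored condition; `lookbacks` and
# `models` stand in for the `stats` DataFrame columns from the diff.

def has_mixed_settings_before(lookbacks, models):
    # Before: negated conjunction.
    return not (len(set(lookbacks)) == 1 and len(set(models)) == 1)

def has_mixed_settings_after(lookbacks, models):
    # After: De Morgan pushes the negation inward, flipping == to !=.
    return len(set(lookbacks)) != 1 or len(set(models)) != 1
```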
```diff
-        if not (len(stats['lookback'].unique()) == 1 and len(stats['model'].unique()) == 1):
+        if (
+            len(stats['lookback'].unique()) != 1
+            or len(stats['model'].unique()) != 1
+        ):
```
Function ModelCheckpointCallback.on_epoch_end refactored with the following changes:
- Simplify logical expression using De Morgan identities (`de-morgan`)
```diff
-        if epoch is None:
-            df = self.metrics
-        else:
-            df = self.metrics_per_epoch(epoch)
+        df = self.metrics if epoch is None else self.metrics_per_epoch(epoch)
```
Function History.pretty_print refactored with the following changes:
- Replace if statement with if expression (`assign-if-exp`)
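A sketch of the `assign-if-exp` rewrite in isolation. `HistorySketch` is a hypothetical stand-in for deepdow's `History` (the real class works with pandas DataFrames); only the selection logic matters here:

```python
class HistorySketch:
    """Hypothetical stand-in; keys play the role of epoch numbers."""

    def __init__(self):
        self.metrics = {0: 0.5, 1: 0.4}

    def metrics_per_epoch(self, epoch):
        return {epoch: self.metrics[epoch]}

    def select(self, epoch=None):
        # Before: a four-line if/else, each branch assigning `df`.
        # After: a single conditional expression.
        df = self.metrics if epoch is None else self.metrics_per_epoch(epoch)
        return df
```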
```diff
-        if not all([isinstance(x, Loss) for x in metrics.values()]):
+        if not all(isinstance(x, Loss) for x in metrics.values()):
```
Function Run.__init__ refactored with the following changes:
- Replace unneeded comprehension with generator (`comprehension-to-generator`)
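Why this matters: `all(...)` accepts any iterable, so the intermediate list is unnecessary, and the generator form short-circuits on the first failing element instead of materializing every result first. A self-contained illustration with made-up values:

```python
# `values` is illustrative; in the diff the iterable is `metrics.values()`
# and the check is `isinstance(x, Loss)`.
values = [1, 2, 'three', 4]

list_version = all([isinstance(x, int) for x in values])  # builds a full list first
gen_version = all(isinstance(x, int) for x in values)     # lazy, stops at 'three'

assert list_version == gen_version
```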
```diff
-        res = torch.stack(w_l, dim=0)
-
-        return res
+        return torch.stack(w_l, dim=0)
```
Function NCO.forward refactored with the following changes:
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
```diff
-        cons = [cp.sum(w) == 1,
-                0. <= w,
-                w <= max_weight]
+        cons = [cp.sum(w) == 1, w >= 0., w <= max_weight]
```
Function SparsemaxAllocator.__init__ refactored with the following changes:
- Ensure constant in comparison is on the right (`flip-comparison`)
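The `flip-comparison` rule moves the constant to the right so the variable reads first. A sketch with a hypothetical helper (not deepdow code); for plain numbers the two spellings are equivalent, and for operator-overloading libraries like cvxpy, Python's reflected-operator fallback means `0. <= w` builds the same constraint as `w >= 0.`:

```python
# Hypothetical weight-bounds check; `max_weight` mirrors the parameter
# name from the diff.

def check_weight_before(w, max_weight):
    return 0. <= w and w <= max_weight   # constant on the left

def check_weight_after(w, max_weight):
    return w >= 0. and w <= max_weight   # variable on the left

# Both forms agree across in-range and out-of-range values.
for w in (-0.1, 0.0, 0.25, 0.6):
    assert check_weight_before(w, 0.5) == check_weight_after(w, 0.5)
```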
```diff
-        corr = covmat / torch.matmul(stds_, stds_.permute(0, 2, 1))
-
-        return corr
+        return covmat / torch.matmul(stds_, stds_.permute(0, 2, 1))
```
Function Cov2Corr.forward refactored with the following changes:
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
```diff
-        if shrinkage_strategy is not None:
-            if shrinkage_strategy not in {'diagonal', 'identity', 'scaled_identity'}:
-                raise ValueError('Unrecognized shrinkage strategy {}'.format(shrinkage_strategy))
+        if shrinkage_strategy is not None and shrinkage_strategy not in {
+            'diagonal',
+            'identity',
+            'scaled_identity',
+        }:
+            raise ValueError('Unrecognized shrinkage strategy {}'.format(shrinkage_strategy))
```
Function CovarianceMatrix.__init__ refactored with the following changes:
- Merge nested if conditions (`merge-nested-ifs`)
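The `merge-nested-ifs` rule collapses two nested `if`s that only guard a single statement into one `and` condition. A runnable sketch; `shrinkage_strategy` mirrors the parameter name from the diff, but the validators themselves are hypothetical stand-ins:

```python
VALID_STRATEGIES = {'diagonal', 'identity', 'scaled_identity'}

def validate_before(shrinkage_strategy):
    # Before: nested ifs, one extra indentation level.
    if shrinkage_strategy is not None:
        if shrinkage_strategy not in VALID_STRATEGIES:
            raise ValueError('Unrecognized shrinkage strategy {}'.format(shrinkage_strategy))

def validate_after(shrinkage_strategy):
    # After: a single combined condition.
    if shrinkage_strategy is not None and shrinkage_strategy not in VALID_STRATEGIES:
        raise ValueError('Unrecognized shrinkage strategy {}'.format(shrinkage_strategy))
```

`None` is still allowed through, and only unknown non-`None` strategies raise, exactly as before.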
```diff
-        x_warped = nn.functional.grid_sample(x,
-                                             grid,
-                                             mode=self.mode,
-                                             padding_mode=self.padding_mode,
-                                             align_corners=True,
-                                             )
-
-        return x_warped
+        return nn.functional.grid_sample(x,
+                                         grid,
+                                         mode=self.mode,
+                                         padding_mode=self.padding_mode,
+                                         align_corners=True,
+                                         )
```
Function Warp.forward refactored with the following changes:
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
```diff
-        x_zoomed = nn.functional.grid_sample(x,
-                                             grid,
-                                             mode=self.mode,
-                                             padding_mode=self.padding_mode,
-                                             align_corners=True,
-                                             )
-
-        return x_zoomed
+        return nn.functional.grid_sample(x,
+                                         grid,
+                                         mode=self.mode,
+                                         padding_mode=self.padding_mode,
+                                         align_corners=True,
+                                         )
```
Function Zoom.forward refactored with the following changes:
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
```diff
-        weights = self.allocate_layer(x, temperatures)
-
-        return weights
+        return self.allocate_layer(x, temperatures)
```
Function GreatNet.forward refactored with the following changes:
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
```diff
-        weights_filled = torch.repeat_interleave(weights, n, dim=0)
-
-        return weights_filled
+        return torch.repeat_interleave(weights, n, dim=0)
```
Function Net.forward refactored with the following changes:
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
```diff
-        assert n_parameters == n_dir * (
-            (n_channels * hidden_size_a) + (hidden_size_a * hidden_size_a) + 2 * hidden_size_a)
+        assert n_parameters == (
+            n_dir
+            * (
+                n_channels * hidden_size_a
+                + hidden_size_a ** 2
+                + 2 * hidden_size_a
+            )
+        )
         ...
         else:
-            assert n_parameters == n_dir * 4 * (
-                (n_channels * hidden_size_a) + (hidden_size_a * hidden_size_a) + 2 * hidden_size_a)
+            assert n_parameters == (
+                n_dir
+                * 4
+                * (
+                    n_channels * hidden_size_a
+                    + hidden_size_a ** 2
+                    + 2 * hidden_size_a
+                )
+            )
```
Function TestRNN.test_n_parameters refactored with the following changes:
- Replace x * x with x ** 2 (`square-identity`)
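The `square-identity` rewrite (`x * x` → `x ** 2`) changes spelling only, not the value. A quick check of the parameter-count expression from this test, with made-up sizes (the actual test derives them from the model under test):

```python
# Illustrative sizes only; not the values used in TestRNN.
n_dir, n_channels, hidden_size_a = 2, 3, 8

before = n_dir * ((n_channels * hidden_size_a)
                  + (hidden_size_a * hidden_size_a)
                  + 2 * hidden_size_a)
after = n_dir * (n_channels * hidden_size_a
                 + hidden_size_a ** 2
                 + 2 * hidden_size_a)

assert before == after  # both evaluate to 208 for these sizes
```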
Thanks for starring sourcery-ai/sourcery ✨ 🌟 ✨
Here's your pull request refactoring your most popular Python repo.
If you want Sourcery to refactor all your Python repos and incoming pull requests install our bot.
Review changes via command line
To manually merge these changes, make sure you're on the `master` branch, then run: