Sourcery Starbot ⭐ refactored rodrigosnader/deepdow#1

Open
SourceryAI wants to merge 1 commit into rodrigosnader:master from SourceryAI:master

Conversation

@SourceryAI

Thanks for starring sourcery-ai/sourcery ✨ 🌟 ✨

Here's your pull request refactoring your most popular Python repo.

If you want Sourcery to refactor all your Python repos and incoming pull requests, install our bot.

Review changes via command line

To manually merge these changes, make sure you're on the master branch, then run:

git fetch https://github.com/sourcery-ai-bot/deepdow master
git merge --ff-only FETCH_HEAD
git reset HEAD^

- weights = ivols / ivols.sum(dim=1, keepdim=True)
-
- return weights
+ return ivols / ivols.sum(dim=1, keepdim=True)

Function InverseVolatility.__call__ refactored with the following changes:

Comment on lines -220 to +223
- if not (len(stats['lookback'].unique()) == 1 and len(stats['model'].unique()) == 1):
+ if (
+     len(stats['lookback'].unique()) != 1
+     or len(stats['model'].unique()) != 1
+ ):

Function EarlyStoppingCallback.on_epoch_end refactored with the following changes:

  • Simplify logical expression using De Morgan identities (de-morgan)
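For reference, De Morgan's identities state that `not (A and B)` is equivalent to `(not A) or (not B)`, and `not (A or B)` to `(not A) and (not B)` — which is exactly why the rewritten condition behaves identically. An exhaustive check over both booleans:

```python
# De Morgan: not (A and B) == (not A) or (not B)
#            not (A or B)  == (not A) and (not B)
for a in (True, False):
    for b in (True, False):
        assert (not (a and b)) == ((not a) or (not b))
        assert (not (a or b)) == ((not a) and (not b))
```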

Comment on lines -401 to +407
- if not (len(stats['lookback'].unique()) == 1 and len(stats['model'].unique()) == 1):
+ if (
+     len(stats['lookback'].unique()) != 1
+     or len(stats['model'].unique()) != 1
+ ):

Function ModelCheckpointCallback.on_epoch_end refactored with the following changes:

  • Simplify logical expression using De Morgan identities (de-morgan)


- else:
-     df = self.metrics_per_epoch(epoch)
+ df = self.metrics if epoch is None else self.metrics_per_epoch(epoch)

Function History.pretty_print refactored with the following changes:

Comment on lines -175 to +171
- if not all([isinstance(x, Loss) for x in metrics.values()]):
+ if not all(isinstance(x, Loss) for x in metrics.values()):

Function Run.__init__ refactored with the following changes:
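Dropping the square brackets turns the eager list comprehension into a lazy generator expression, so `all()` can stop at the first falsy element instead of materializing every check first. A minimal sketch with a hypothetical predicate that records its calls:

```python
calls = []

def pred(x):
    calls.append(x)
    return x < 2

# A list comprehension evaluates every element before all() runs:
all([pred(x) for x in range(5)])
assert calls == [0, 1, 2, 3, 4]

# A generator expression lets all() short-circuit at the first False:
calls.clear()
all(pred(x) for x in range(5))
assert calls == [0, 1, 2]
```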

- res = torch.stack(w_l, dim=0)
-
- return res
+ return torch.stack(w_l, dim=0)

Function NCO.forward refactored with the following changes:

Comment on lines -478 to +476
- cons = [cp.sum(w) == 1,
-         0. <= w,
-         w <= max_weight]
+ cons = [cp.sum(w) == 1, w >= 0., w <= max_weight]

Function SparsemaxAllocator.__init__ refactored with the following changes:
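`0. <= w` and `w >= 0.` build the same cvxpy constraint because of Python's reflected-comparison protocol: `float.__le__` returns `NotImplemented` for an unknown operand, so the interpreter falls back to the operand's `__ge__`. A sketch with a hypothetical stand-in class (not cvxpy itself):

```python
class Expr:
    """Stand-in for a cvxpy expression; records how a constraint was built."""
    def __ge__(self, other):
        return ('constraint', id(self), '>=', other)

    def __le__(self, other):
        return ('constraint', id(self), '<=', other)

w = Expr()
# float.__le__(0., w) returns NotImplemented, so Python falls back to the
# reflected operation w.__ge__(0.) -- both spellings build the same object.
assert (0. <= w) == (w >= 0.)
```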

- corr = covmat / torch.matmul(stds_, stds_.permute(0, 2, 1))
-
- return corr
+ return covmat / torch.matmul(stds_, stds_.permute(0, 2, 1))

Function Cov2Corr.forward refactored with the following changes:

Comment on lines -56 to +59
- if shrinkage_strategy is not None:
-     if shrinkage_strategy not in {'diagonal', 'identity', 'scaled_identity'}:
-         raise ValueError('Unrecognized shrinkage strategy {}'.format(shrinkage_strategy))
+ if shrinkage_strategy is not None and shrinkage_strategy not in {
+     'diagonal',
+     'identity',
+     'scaled_identity',
+ }:
+     raise ValueError('Unrecognized shrinkage strategy {}'.format(shrinkage_strategy))

Function CovarianceMatrix.__init__ refactored with the following changes:
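Collapsing the nested guard is safe because `and` short-circuits: the membership test only runs when the strategy is not `None`. A minimal standalone sketch of the merged form (hypothetical `validate` helper, not from the repo):

```python
VALID_STRATEGIES = {'diagonal', 'identity', 'scaled_identity'}

def validate(shrinkage_strategy):
    # `and` short-circuits, so the membership test never runs for None
    if shrinkage_strategy is not None and shrinkage_strategy not in VALID_STRATEGIES:
        raise ValueError('Unrecognized shrinkage strategy {}'.format(shrinkage_strategy))

validate(None)        # allowed: shrinkage disabled
validate('identity')  # allowed: known strategy
```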

Comment on lines -174 to -181
- x_warped = nn.functional.grid_sample(x,
+ return nn.functional.grid_sample(x,
      grid,
      mode=self.mode,
      padding_mode=self.padding_mode,
      align_corners=True,
      )
-
- return x_warped

Function Warp.forward refactored with the following changes:

Comment on lines -237 to -244
- x_zoomed = nn.functional.grid_sample(x,
+ return nn.functional.grid_sample(x,
      grid,
      mode=self.mode,
      padding_mode=self.padding_mode,
      align_corners=True,
      )
-
- return x_zoomed

Function Zoom.forward refactored with the following changes:

- weights = self.allocate_layer(x, temperatures)
-
- return weights
+ return self.allocate_layer(x, temperatures)

Function GreatNet.forward refactored with the following changes:

- weights_filled = torch.repeat_interleave(weights, n, dim=0)
-
- return weights_filled
+ return torch.repeat_interleave(weights, n, dim=0)

Function Net.forward refactored with the following changes:

Comment on lines -484 to +503
- assert n_parameters == n_dir * (
-     (n_channels * hidden_size_a) + (hidden_size_a * hidden_size_a) + 2 * hidden_size_a)
+ assert n_parameters == (
+     n_dir
+     * (
+         n_channels * hidden_size_a
+         + hidden_size_a ** 2
+         + 2 * hidden_size_a
+     )
+ )

  else:
-     assert n_parameters == n_dir * 4 * (
-         (n_channels * hidden_size_a) + (hidden_size_a * hidden_size_a) + 2 * hidden_size_a)
+     assert n_parameters == (
+         n_dir
+         * 4
+         * (
+             n_channels * hidden_size_a
+             + hidden_size_a ** 2
+             + 2 * hidden_size_a
+         )
+     )

Function TestRNN.test_n_parameters refactored with the following changes:
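The asserted counts follow the standard recurrent-cell parameter formula: per direction and gate, one input weight matrix (`n_channels × hidden`), one recurrent matrix (`hidden × hidden`), and two bias vectors. The branch without the factor 4 matches a one-gate cell (vanilla RNN); the factor-4 branch matches an LSTM's four gates. A sanity-check sketch with hypothetical sizes (not taken from the test suite):

```python
def rnn_n_parameters(n_channels, hidden_size, n_dir=1, n_gates=1):
    # Per direction and gate: input weights (n_channels x hidden), recurrent
    # weights (hidden x hidden), and two bias vectors of length hidden.
    return n_dir * n_gates * (
        n_channels * hidden_size + hidden_size ** 2 + 2 * hidden_size
    )

# Hypothetical sizes: 5 input channels, hidden size 16, bidirectional
assert rnn_n_parameters(5, 16, n_dir=2, n_gates=1) == 736   # vanilla RNN
assert rnn_n_parameters(5, 16, n_dir=2, n_gates=4) == 2944  # LSTM
```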
