limit crop_to_factor #5

Draft

abred wants to merge 3 commits into funkelab:master from Kainmueller-Lab:dev_te

Conversation

abred commented Dec 16, 2020

To avoid stitching artifacts it is sufficient to apply crop_to_factor only on the last/highest level; it is not necessary to crop during training. However, during training the output size has to be strictly larger than the crop factor (prod(downsample_factors)).
(I don't have a PyTorch setup at hand right now.)

- to ensure translation equivariance it is sufficient to crop on the last/highest level
- to avoid stitching artifacts it is not necessary to crop during training
- to avoid tile-and-stitch inconsistencies, the output size during training has to be strictly larger than prod(downsample_factors)
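The output-size constraint above can be checked with a few lines of Python. This is a minimal sketch, not part of the funkelab codebase; the function name `check_training_output_size` is hypothetical.

```python
from functools import reduce
from operator import mul

def check_training_output_size(output_size, downsample_factors):
    """Verify the training output size against the crop factor.

    Sketch only: `output_size` is a single spatial extent,
    `downsample_factors` the per-level factors of the U-Net.
    """
    # crop factor = product of all downsample factors
    crop_factor = reduce(mul, downsample_factors, 1)
    # during training the output must be strictly larger than the
    # crop factor, otherwise tile-and-stitch consistency can break
    if output_size <= crop_factor:
        raise ValueError(
            f"output size {output_size} must be strictly larger "
            f"than crop factor {crop_factor}")
    return crop_factor
```

For example, with downsample factors [2, 2, 2] the crop factor is 8, so any training output size of 9 or more passes.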
@abred abred marked this pull request as draft December 16, 2020 22:42
abred commented Dec 17, 2020

Ok, cropping only on the last level does not seem to be the best way, but neither is cropping at every level (the final output size is then smaller than necessary in a number of cases).

One approach: at the bottleneck, compute the 'naive' max output size (without cropping) and the max 'correct' output size (the largest multiple of prod(downsample_factors) smaller than or equal to it). Then, at each upsampling level, check whether the difference between those two, divided by the product of the remaining downsample_factors, is >= 1. If yes, crop that amount at that level and update the difference (diff = diff - (diff//prod)*prod).
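The per-level cropping scheme described above can be sketched as follows. This is a hedged 1-D illustration, not the actual PR implementation: `plan_crops` is a hypothetical name, factors are assumed to be listed deepest-first (bottleneck upward), and crop amounts are in voxels at each level's resolution.

```python
from functools import reduce
from operator import mul

def prod(xs):
    return reduce(mul, xs, 1)

def plan_crops(naive_output_size, downsample_factors):
    """Distribute cropping over the upsampling levels (1-D sketch).

    naive_output_size: max output size without any cropping
    downsample_factors: per-level factors, deepest level first
    Returns (target_size, crops), where crops[i] is the number of
    voxels to crop at upsampling level i, at that level's resolution.
    """
    total = prod(downsample_factors)
    # max 'correct' output size: largest multiple of the full
    # downsample product that is <= the naive max output size
    target = (naive_output_size // total) * total
    diff = naive_output_size - target
    crops = []
    for i in range(len(downsample_factors)):
        # product of the remaining downsample factors after this
        # upsampling level; one voxel here spans p output voxels
        p = prod(downsample_factors[i + 1:])
        c = diff // p          # crop only if diff // p >= 1
        crops.append(c)
        diff -= c * p          # diff = diff - (diff // p) * p
    return target, crops
```

For example, a naive output size of 23 with factors [2, 2, 2] yields a target of 16, reached by cropping 1 voxel at each of the three levels (worth 4, 2, and 1 output voxels respectively).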
