
In demo_panoseg.py, the input image is resized to 512, but the `width` and `height` passed to the model are still those of the original image. So if the original image is very large, memory usage explodes.
How does model.forward process images internally? Since the image has already been resized, why doesn't inference run at 512? Why do we need to pass in the original image size?
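For context, here is a minimal sketch of where the memory likely goes. It assumes (this is not confirmed by demo_panoseg.py itself) that, as in many detectron2-style pipelines, the model runs inference at the resized resolution and then upsamples its per-query mask logits back to the `height`/`width` supplied in the input dict; `mask_logit_bytes` and `num_queries` below are hypothetical names used only for illustration:

```python
# Hedged sketch: why passing the original height/width can blow up memory.
# Assumption: the model infers at 512 but upsamples mask logits to the
# original size before post-processing, as detectron2-style demos often do.

def mask_logit_bytes(num_queries: int, height: int, width: int,
                     dtype_bytes: int = 4) -> int:
    """Bytes for a float32 tensor of per-query mask logits at (height, width)."""
    return num_queries * height * width * dtype_bytes

# At the 512-pixel inference size the tensor is small...
small = mask_logit_bytes(num_queries=100, height=512, width=512)
# ...but upsampling to a large original image multiplies the footprint.
large = mask_logit_bytes(num_queries=100, height=8000, width=8000)

print(f"512x512:   {small / 1e9:.2f} GB")   # ~0.10 GB
print(f"8000x8000: {large / 1e9:.2f} GB")   # ~25.60 GB
```

If this assumption holds, the original size is needed so the final panoptic map aligns with the input image, and the fix for huge images would be to post-process at a capped resolution and resize the result afterwards.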