Traceback (most recent call last):
  File "train.py", line 54, in <module>
    model.optimize_parameters()  # calculate loss functions, get gradients, update network weights
  File "/home/lthpc/forsda/sda1/wrj/StillGAN-main/models/cycle_gan_model.py", line 223, in optimize_parameters
    self.backward_G()  # calculate gradients for G_A and G_B
  File "/home/lthpc/forsda/sda1/wrj/StillGAN-main/models/cycle_gan_model.py", line 183, in backward_G
    self.idt_A = self.netG_A(self.real_B)
  File "/home/lthpc/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lthpc/.local/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/lthpc/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lthpc/forsda/sda1/wrj/StillGAN-main/models/networks.py", line 615, in forward
    d2 = self.Up_conv2(d2)  # [B, 2 * ngf, H / 2, W / 2]
  File "/home/lthpc/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lthpc/forsda/sda1/wrj/StillGAN-main/models/networks.py", line 490, in forward
    out = self.conv(x)
  File "/home/lthpc/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lthpc/.local/lib/python3.5/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/home/lthpc/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lthpc/.local/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 345, in forward
    return self.conv2d_forward(input, self.weight)
  File "/home/lthpc/.local/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
    self.padding, self.dilation, self.groups)
RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 10.76 GiB total capacity; 9.88 GiB already allocated; 5.44 MiB free; 9.92 GiB reserved in total by PyTorch)
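The failure occurs while allocating an activation for the Up_conv2 block, whose output shape is annotated in networks.py as [B, 2 * ngf, H / 2, W / 2]. A quick back-of-the-envelope check shows how large a single such tensor already is; the values below (ngf=64, 512x512 inputs, batch size 4, float32) are illustrative assumptions, since the actual training settings are not shown in the traceback:

```python
# Estimate the memory footprint of one activation tensor of shape
# [B, 2 * ngf, H / 2, W / 2], assuming float32 (4 bytes per element).
# All concrete values here are hypothetical, not taken from the issue.

def activation_bytes(batch, ngf, height, width, bytes_per_elem=4):
    """Bytes needed for a [batch, 2*ngf, height//2, width//2] tensor."""
    channels = 2 * ngf
    return batch * channels * (height // 2) * (width // 2) * bytes_per_elem

mib = activation_bytes(batch=4, ngf=64, height=512, width=512) / (1024 ** 2)
print(f"{mib:.0f} MiB")  # prints "128 MiB" for a single intermediate activation
```

Since backward() keeps many such intermediate activations alive at once across both generators and discriminators, reducing the batch size or the input resolution is usually the first thing to try when an ~11 GiB card runs out of memory.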