Thanks for this notebook, @rkdan! This is great!
Running introduction_to_stablediffusion.ipynb on Google Colab with a T4 GPU instance, I am running out of memory.
My code is here:
I can run up to this point, where I get an error:
```python
# run both experts
image_base = base(
    prompt=prompt,
    num_inference_steps=n_steps,
    denoising_end=high_noise_frac,
    output_type="latent",
).images
image_final = refiner(
    prompt=prompt,
    num_inference_steps=n_steps,
    denoising_start=high_noise_frac,
    image=image_base,
).images[0]
```

The error message is:
```
---------------------------------------------------------------------------
OutOfMemoryError                          Traceback (most recent call last)
<ipython-input-8-410c6a79b4f8> in <cell line: 0>()
      7 ).images
      8
----> 9 image_final = refiner(
     10     prompt=prompt,
     11     num_inference_steps=n_steps,

17 frames
/usr/local/lib/python3.11/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    547             self.groups,
    548         )
--> 549         return F.conv2d(
    550             input, weight, bias, self.stride, self.padding, self.dilation, self.groups
    551         )

OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 MiB. GPU 0 has a total capacity of 14.74 GiB of which 306.12 MiB is free. Process 6006 has 14.44 GiB memory in use. Of the allocated memory 14.17 GiB is allocated by PyTorch, and 145.94 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
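For reference, a minimal sketch of the memory-saving options that usually let SDXL base + refiner fit on a 16 GB T4: setting the allocator hint the error message itself suggests, loading both pipelines in fp16, and using CPU offload instead of moving everything to the GPU at once. This assumes `diffusers` (>= 0.19) and `accelerate` are installed and uses the standard SDXL checkpoint IDs; it is a suggestion, not the notebook's exact setup.

```python
import os

# Per the OOM message: enable expandable segments to reduce fragmentation.
# This must be set before torch initializes CUDA.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"


def load_pipelines_low_memory():
    """Sketch: load SDXL base + refiner in half precision with CPU offload.

    Assumes diffusers >= 0.19 and accelerate are available; model IDs are
    the standard SDXL checkpoints.
    """
    import torch
    from diffusers import DiffusionPipeline

    base = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,   # fp16 halves the weight memory
        variant="fp16",
        use_safetensors=True,
    )
    # Offload idle submodules to CPU instead of base.to("cuda"),
    # so only the active module occupies GPU memory.
    base.enable_model_cpu_offload()

    refiner = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-refiner-1.0",
        text_encoder_2=base.text_encoder_2,  # share components with the
        vae=base.vae,                        # base pipeline to save memory
        torch_dtype=torch.float16,
        variant="fp16",
        use_safetensors=True,
    )
    refiner.enable_model_cpu_offload()
    return base, refiner
```

With this setup the two-expert call in the snippet above should run unchanged; generation is slower due to offloading, but peak GPU usage drops well below the T4's 14.74 GiB.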