Conversation

@neelsoumya commented May 3, 2025

Contributing my code, with comments, for the stable diffusion Issue #8.

  1. Adding comments.
  2. Adding more code for loading the notebook in Google Colab (using a T4 GPU) on my Cambridge account (work in progress).
  3. Please note that this PR does not resolve the issue.
```py
from google.colab import drive
import os

mount = '/content/gdrive'

# mount Google Drive
drive.mount(mount)

# switch to the directory on Google Drive that you want to use
# (%cd is an IPython magic, so this cell must run in Colab/Jupyter)
drive_root = mount + "/My Drive/diffusion-models"
%cd $drive_root
```

I can run everything up to this point, where I get an error:

```py
# run both experts: the base model denoises the first (high-noise)
# fraction of the steps, and the refiner finishes from there
image_base = base(
    prompt=prompt,
    num_inference_steps=n_steps,
    denoising_end=high_noise_frac,
    output_type="latent",
).images

image_final = refiner(
    prompt=prompt,
    num_inference_steps=n_steps,
    denoising_start=high_noise_frac,
    image=image_base,
).images[0]
```


The error message is:

```py
---------------------------------------------------------------------------
OutOfMemoryError                          Traceback (most recent call last)
<ipython-input-8-410c6a79b4f8> in <cell line: 0>()
      7 ).images
      8 
----> 9 image_final = refiner(
     10     prompt=prompt,
     11     num_inference_steps=n_steps,

17 frames
/usr/local/lib/python3.11/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    547                 self.groups,
    548             )
--> 549         return F.conv2d(
    550             input, weight, bias, self.stride, self.padding, self.dilation, self.groups
    551         )

OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 MiB. GPU 0 h
```
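This out-of-memory error is a common failure mode when keeping both the SDXL base and refiner pipelines resident on a 16 GB T4. The `diffusers` library offers several memory-saving options (fp16 weights, shared submodules, CPU offload via `accelerate`) that usually get the base-plus-refiner flow under the T4's budget. The sketch below is illustrative rather than a tested fix; the model IDs and the `n_steps`/`high_noise_frac` values are assumptions carried over from the snippet above, and `enable_model_cpu_offload()` requires `accelerate` to be installed:

```python
def split_steps(n_steps, high_noise_frac):
    """Illustrates how denoising_end/denoising_start divide work:
    the base runs the first high_noise_frac of the steps, the
    refiner the remainder."""
    base_steps = int(n_steps * high_noise_frac)
    return base_steps, n_steps - base_steps


def run_low_memory(prompt, n_steps=40, high_noise_frac=0.8):
    # Heavy imports kept local so the helper above works without
    # torch/diffusers installed.
    import torch
    from diffusers import DiffusionPipeline

    base = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,  # half precision halves weight memory
        variant="fp16",
        use_safetensors=True,
    )
    # Stream submodules between CPU and GPU on demand instead of keeping
    # the whole pipeline resident: slower, but far less VRAM.
    base.enable_model_cpu_offload()

    refiner = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-refiner-1.0",
        text_encoder_2=base.text_encoder_2,  # share components to avoid
        vae=base.vae,                        # loading them twice
        torch_dtype=torch.float16,
        variant="fp16",
        use_safetensors=True,
    )
    refiner.enable_model_cpu_offload()

    latents = base(
        prompt=prompt,
        num_inference_steps=n_steps,
        denoising_end=high_noise_frac,
        output_type="latent",
    ).images
    # release the base pipeline's cached blocks before the refiner runs
    torch.cuda.empty_cache()
    return refiner(
        prompt=prompt,
        num_inference_steps=n_steps,
        denoising_start=high_noise_frac,
        image=latents,
    ).images[0]
```

With `n_steps=40` and `high_noise_frac=0.8`, the base performs roughly the first 32 denoising steps and the refiner the remaining 8, which is why the refiner call is where memory pressure peaks: both pipelines have touched the GPU by that point.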
