This repository was archived by the owner on Nov 1, 2024. It is now read-only.

demo out of memory on vocabulary 'lvis-paco' #15

@irimsky


I ran the demo with the 'lvis-paco' vocabulary using this command:

```
python demo/demo.py --config-file configs/joint_in/swinbase_cascade_lvis_paco_pascalpart_partimagenet_inparsed.yaml \
  --input input1.jpg input2.jpg input3.jpg \
  --output output_image \
  --vocabulary lvis_paco \
  --confidence-threshold 0.7 \
  --opts MODEL.WEIGHTS models/swinbase_cascade_lvis_paco_pascalpart_partimagenet_inparsed.pth VIS.BOX False
```

It printed 'Killed' and exited (presumably killed by the system OOM killer while running on CPU). I then switched to GPU, and it failed with this error:

```
Traceback (most recent call last):
  File "/workspace/VLPart/demo/demo.py", line 116, in <module>
    demo = VisualizationDemo(cfg, args).to("cuda:0")
  File "/workspace/VLPart/demo/predictor.py", line 124, in __init__
    classifier = get_clip_embeddings(self.metadata.thing_classes)
  File "/workspace/VLPart/demo/predictor.py", line 30, in get_clip_embeddings
    emb = text_encoder(texts).detach().permute(1, 0).contiguous()
  File "/root/miniconda3/envs/3s/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/VLPart/./vlpart/modeling/text_encoder/text_encoder.py", line 166, in forward
    features = self.encode_text(text)  # B x D
  File "/workspace/VLPart/./vlpart/modeling/text_encoder/text_encoder.py", line 154, in encode_text
    x = self.transformer(x)
  File "/root/miniconda3/envs/3s/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/VLPart/./vlpart/modeling/text_encoder/text_encoder.py", line 61, in forward
    return self.resblocks(x)
  File "/root/miniconda3/envs/3s/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/3s/lib/python3.10/site-packages/torch/nn/modules/container.py", line 217, in forward
    input = module(input)
  File "/root/miniconda3/envs/3s/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/workspace/VLPart/./vlpart/modeling/text_encoder/text_encoder.py", line 46, in forward
    x = x + self.attention(self.ln_1(x))
  File "/workspace/VLPart/./vlpart/modeling/text_encoder/text_encoder.py", line 43, in attention
    return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
  File "/root/miniconda3/envs/3s/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/3s/lib/python3.10/site-packages/torch/nn/modules/activation.py", line 1205, in forward
    attn_output, attn_output_weights = F.multi_head_attention_forward(
  File "/root/miniconda3/envs/3s/lib/python3.10/site-packages/torch/nn/functional.py", line 5224, in multi_head_attention_forward
    q, k, v = _in_projection_packed(query, key, value, in_proj_weight, in_proj_bias)
  File "/root/miniconda3/envs/3s/lib/python3.10/site-packages/torch/nn/functional.py", line 4767, in _in_projection_packed
    proj = proj.unflatten(-1, (3, E)).unsqueeze(0).transpose(0, -2).squeeze(-2).contiguous()
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 750.00 MiB (GPU 0; 31.74 GiB total capacity; 29.76 GiB already allocated; 724.12 MiB free; 30.04 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

My GPU has 40 GB of memory. Under what conditions can I run the demo with the 'lvis-paco' vocabulary?
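Judging by the traceback, the OOM happens in `get_clip_embeddings`, where `text_encoder(texts)` encodes every class name of the large 'lvis-paco' vocabulary in a single batch, so peak memory grows with the full vocabulary size. A common workaround (not something VLPart provides out of the box, as far as I can tell) is to run the encoder over fixed-size chunks of the prompt list. Here is a minimal sketch; `encode_in_chunks` and `chunk_size` are hypothetical names, and the dummy encoder below only stands in for the real CLIP text encoder:

```python
def encode_in_chunks(texts, encoder, chunk_size=256):
    """Encode prompts chunk by chunk so peak memory scales with
    chunk_size rather than with the total number of class names."""
    outputs = []
    for start in range(0, len(texts), chunk_size):
        # Each call sees at most chunk_size prompts.
        outputs.append(encoder(texts[start:start + chunk_size]))
    return outputs  # with torch tensors you would torch.cat(outputs, dim=0)

if __name__ == "__main__":
    # Dummy "encoder" that maps each prompt to its length, just to
    # demonstrate that chunked and one-shot encoding agree.
    prompts = [f"a photo of class {i}" for i in range(1000)]
    dummy_encoder = lambda batch: [len(p) for p in batch]
    chunked = [v for c in encode_in_chunks(prompts, dummy_encoder) for v in c]
    assert chunked == dummy_encoder(prompts)
```

In `demo/predictor.py` the chunk outputs would be torch tensors, so you would concatenate them with `torch.cat` before the `.permute(1, 0)` call. Separately, the error message itself suggests setting `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:...` to reduce fragmentation, which may help when reserved memory is much larger than allocated memory.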
