
First run of LHM-MINI after deployment fails: assert torch_missing == missing, f"{torch_missing} != {missing}" #155

@daiya1235

Description


--- Querying model: LHM-MINI ---
Step 1: Checking local ModelScope cache...
Checking cache for ModelScope model: Damo_XR_Lab/LHM-MINI
ModelScope model base path /root/LHM-main/pretrained_models/Damo_XR_Lab/LHM-MINI not found in local cache.
Info: Not found in local ModelScope cache.
Step 2: Checking local Hugging Face cache...
Checking cache for Hugging Face model: 3DAIGC/LHM-MINI
Hugging Face model 3DAIGC/LHM-MINI not found in local cache /root/LHM-main/pretrained_models/huggingface.
Info: Not found in local Hugging Face cache.
Info: Model not found in local caches. Attempting downloads...
Step 3: Attempting download from Hugging Face...
Querying/Downloading Hugging Face model: 3DAIGC/LHM-MINI
model.safetensors: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.78G/2.78G [02:40<00:00, 2.33MB/s]
Fetching 4 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [02:42<00:00, 40.73s/it]
Hugging Face model path obtained: /root/LHM-main/pretrained_models/huggingface/models--3DAIGC--LHM-MINI/snapshots/f0d32df3703a33027e58948b1d74df9b68956c74
Success: Downloaded from Hugging Face: /root/LHM-main/pretrained_models/huggingface/models--3DAIGC--LHM-MINI/snapshots/f0d32df3703a33027e58948b1d74df9b68956c74/
/root/LHM-main/LHM/models/encoders/dinov2/layers/swiglu_ffn.py:43: UserWarning: xFormers is available (SwiGLU)
warnings.warn("xFormers is available (SwiGLU)")
/root/LHM-main/LHM/models/encoders/dinov2/layers/attention.py:27: UserWarning: xFormers is available (Attention)
warnings.warn("xFormers is available (Attention)")
/root/LHM-main/LHM/models/encoders/dinov2/layers/block.py:39: UserWarning: xFormers is available (Block)
warnings.warn("xFormers is available (Block)")
==========skip_decoder:True
/root/miniconda3/envs/LHM/lib/python3.10/site-packages/torch/amp/autocast_mode.py:267: UserWarning: In CPU autocast, but the target dtype is not supported. Disabling autocast.
CPU Autocast only supports dtype of torch.bfloat16, torch.float16 currently.
warnings.warn(error_message)
load voxel_grid voxel_192.pth

/root/miniconda3/envs/LHM/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/root/miniconda3/envs/LHM/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=None.
warnings.warn(msg)
======== Freezing Sapiens Model ========
Loading weights from local directory
Traceback (most recent call last):
  File "/root/LHM-main/app.py", line 786, in <module>
    launch_gradio_app()
  File "/root/LHM-main/app.py", line 778, in launch_gradio_app
    lhm = _build_model(cfg)
  File "/root/LHM-main/app.py", line 281, in _build_model
    model = hf_model_cls.from_pretrained(cfg.model_name)
  File "/root/miniconda3/envs/LHM/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/root/miniconda3/envs/LHM/lib/python3.10/site-packages/huggingface_hub/hub_mixin.py", line 511, in from_pretrained
    instance = cls._from_pretrained(
  File "/root/miniconda3/envs/LHM/lib/python3.10/site-packages/huggingface_hub/hub_mixin.py", line 731, in _from_pretrained
    return cls._load_as_safetensor(model, model_file, map_location, strict)
  File "/root/miniconda3/envs/LHM/lib/python3.10/site-packages/huggingface_hub/hub_mixin.py", line 769, in _load_as_safetensor
    load_model_as_safetensor(model, model_file, strict=strict)  # type: ignore [arg-type]
  File "/root/miniconda3/envs/LHM/lib/python3.10/site-packages/safetensors/torch.py", line 284, in load_model
    assert torch_missing == missing, f"{torch_missing} != {missing}"
AssertionError: {'fine_encoder.model.backbone.layers.6.ffn.layers.1.bias', 'fine_encoder.model.backbone.layers.9.ln1.bias',..................
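For context, this assertion in `safetensors.torch.load_model` fires when the parameter names the instantiated model expects do not line up with the tensors stored in `model.safetensors` (here, `fine_encoder.*` Sapiens weights appear to be missing from the checkpoint). A minimal sketch of that kind of key comparison, using plain Python sets and hypothetical key names, can help pinpoint which side the mismatch is on before filing more detail:

```python
def diff_keys(model_keys, checkpoint_keys):
    """Compare the parameter names a model expects against the names
    found in a checkpoint file. Returns (missing, unexpected):
    - missing: expected by the model, absent from the checkpoint
    - unexpected: present in the checkpoint, unknown to the model
    """
    model_keys = set(model_keys)
    checkpoint_keys = set(checkpoint_keys)
    missing = model_keys - checkpoint_keys
    unexpected = checkpoint_keys - model_keys
    return missing, unexpected


# Hypothetical key names for illustration only; real names come from
# model.state_dict().keys() and safetensors.safe_open(...).keys().
model_keys = [
    "backbone.ln1.bias",
    "fine_encoder.model.backbone.layers.6.ffn.layers.1.bias",
]
ckpt_keys = ["backbone.ln1.bias"]

missing, unexpected = diff_keys(model_keys, ckpt_keys)
print(sorted(missing))
# → ['fine_encoder.model.backbone.layers.6.ffn.layers.1.bias']
```

In the real environment the two key lists would come from the constructed LHM model and from the downloaded `model.safetensors`; a non-empty `missing` set on the model side matches the `AssertionError` above.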
