
Unsupported architecture: GroveMoeForCausalLM #2

@1661691223

Description


I used the official sglang image sglang:v0.4.6.post5-cu124 as the base image for my build, but I couldn't start GroveMoe-Inst. Can you give me some advice? Looking forward to your reply.

[2025-09-11 11:41:54 TP1] Scheduler hit an exception: Traceback (most recent call last):
  File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 2297, in run_scheduler_process
    scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, pp_rank, dp_rank)
  File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 277, in __init__
    self.tp_worker = TpWorkerClass(
  File "/sgl-workspace/sglang/python/sglang/srt/managers/tp_worker_overlap_thread.py", line 64, in __init__
    self.worker = TpModelWorker(
  File "/sgl-workspace/sglang/python/sglang/srt/managers/tp_worker.py", line 78, in __init__
    self.model_runner = ModelRunner(
  File "/sgl-workspace/sglang/python/sglang/srt/model_executor/model_runner.py", line 231, in __init__
    self.initialize(min_per_gpu_memory)
  File "/sgl-workspace/sglang/python/sglang/srt/model_executor/model_runner.py", line 246, in initialize
    compute_initial_expert_location_metadata(server_args, self.model_config)
  File "/sgl-workspace/sglang/python/sglang/srt/managers/expert_location.py", line 367, in compute_initial_expert_location_metadata
    return ExpertLocationMetadata.init_trivial(server_args, model_config)
  File "/sgl-workspace/sglang/python/sglang/srt/managers/expert_location.py", line 84, in init_trivial
    common = ExpertLocationMetadata._init_common(server_args, model_config)
  File "/sgl-workspace/sglang/python/sglang/srt/managers/expert_location.py", line 162, in _init_common
    ModelConfigForExpertLocation.from_model_config(model_config)
  File "/sgl-workspace/sglang/python/sglang/srt/managers/expert_location.py", line 352, in from_model_config
    model_class, _ = get_model_architecture(model_config)
  File "/sgl-workspace/sglang/python/sglang/srt/model_loader/utils.py", line 37, in get_model_architecture
    return ModelRegistry.resolve_model_cls(architectures)
  File "/sgl-workspace/sglang/python/sglang/srt/models/registry.py", line 65, in resolve_model_cls
    return self._raise_for_unsupported(architectures)
  File "/sgl-workspace/sglang/python/sglang/srt/models/registry.py", line 32, in _raise_for_unsupported
    raise ValueError(
ValueError: Model architectures ['modeling_grove_moe.GroveMoeForCausalLM'] are not supported for now. Supported architectures: dict_keys(['BaichuanForCausalLM', 'BertModel', 'Contriever', 'ChatGLMModel', 'CLIPModel', 'CohereForCausalLM', 'Cohere2ForCausalLM', 'DbrxForCausalLM', 'DeepseekForCausalLM', 'MultiModalityCausalLM', 'DeepseekV3ForCausalLMNextN', 'DeepseekV2ForCausalLM', 'DeepseekV3ForCausalLM', 'DeepseekVL2ForCausalLM', 'ExaoneForCausalLM', 'GemmaForCausalLM', 'Gemma2ForCausalLM', 'Gemma2ForSequenceClassification', 'Gemma3ForCausalLM', 'Gemma3ForConditionalGeneration', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GraniteForCausalLM', 'Grok1ForCausalLM', 'Grok1ModelForCausalLM', 'InternLM2ForCausalLM', 'InternLM2ForRewardModel', 'InternVLChatModel', 'KimiVLForConditionalGeneration', 'LlamaForCausalLM', 'Phi3ForCausalLM', 'InternLM3ForCausalLM', 'Llama4ForCausalLM', 'LlamaForClassification', 'LlamaForCausalLMEagle', 'LlamaForCausalLMEagle3', 'LlamaEmbeddingModel', 'MistralModel', 'LlamaForSequenceClassification', 'LlamaForSequenceClassificationWithNormal_Weights', 'LlavaLlamaForCausalLM', 'LlavaQwenForCausalLM', 'LlavaMistralForCausalLM', 'LlavaForConditionalGeneration', 'LlavaVidForCausalLM', 'MiMoForCausalLM', 'MiMoMTP', 'MiniCPMForCausalLM', 'MiniCPM3ForCausalLM', 'MiniCPMO', 'MiniCPMV', 'MistralForCausalLM', 'Mistral3ForConditionalGeneration', 'MixtralForCausalLM', 'QuantMixtralForCausalLM', 'MllamaForConditionalGeneration', 'Llama4ForConditionalGeneration', 'OlmoForCausalLM', 'Olmo2ForCausalLM', 'OlmoeForCausalLM', 'Phi3SmallForCausalLM', 'PixtralVisionModel', 'QWenLMHeadModel', 'Qwen2ForCausalLM', 'Qwen2_5_VLForConditionalGeneration', 'Qwen2ForSequenceClassification', 'Qwen2ForCausalLMEagle', 'Qwen2MoeForCausalLM', 'Qwen2ForRewardModel', 'Qwen2VLForConditionalGeneration', 'Qwen3ForCausalLM', 'Qwen3MoeForCausalLM', 'XLMRobertaModel', 'StableLmForCausalLM', 'TorchNativeLlamaForCausalLM', 'TorchNativePhi3ForCausalLM', 'XverseForCausalLM', 'XverseMoeForCausalLM', 'YiVLForCausalLM'])

[2025-09-11 11:41:54] Received sigquit from a child process. It usually means the child failed.
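The failure is the model registry lookup: sglang can only serve architectures that ship with a model implementation under python/sglang/srt/models/, and GroveMoeForCausalLM is not among them. Below is a minimal sketch of what such a registration typically looks like, assuming the registry collects a module-level EntryClass from files in that package the way the bundled models do; the file name grove_moe.py, the constructor signature, and everything inside the class are placeholders for an actual GroveMoe port (an existing MoE model such as qwen2_moe.py would be the natural starting point), so treat this as orientation rather than a working implementation.

    # python/sglang/srt/models/grove_moe.py  -- hypothetical file name
    # Sketch only: a real port has to implement the GroveMoe layers and
    # weight loading, e.g. by adapting the existing Qwen2MoE implementation.
    from typing import Iterable, Optional, Tuple

    import torch
    from torch import nn
    from transformers import PretrainedConfig

    from sglang.srt.model_executor.forward_batch_info import ForwardBatch


    class GroveMoeForCausalLM(nn.Module):
        def __init__(self, config: PretrainedConfig, quant_config=None) -> None:
            super().__init__()
            self.config = config
            # ... embeddings, decoder layers with the GroveMoE experts, lm_head ...

        def forward(
            self,
            input_ids: torch.Tensor,
            positions: torch.Tensor,
            forward_batch: ForwardBatch,
        ) -> torch.Tensor:
            raise NotImplementedError("port the GroveMoe forward pass here")

        def load_weights(self, weights: Iterable[Tuple[str, torch.Tensor]]) -> None:
            raise NotImplementedError("map the HF checkpoint tensors onto this module")


    # The registry keys models by class name, which must match the
    # "architectures" entry in the checkpoint's config.json.
    EntryClass = GroveMoeForCausalLM

Note also that the error reports the architecture as modeling_grove_moe.GroveMoeForCausalLM (module-qualified), so the checkpoint's config.json "architectures" field may additionally need to list the bare class name for the lookup to match.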

The Dockerfile is as follows:

FROM sglang:v0.4.6.post5-cu124
COPY src/transformers-4.51.3 /root/src/transformers-4.51.3
COPY src/sglang-0.4.6.post5 /root/src/sglang-0.4.6.post5
# Install the patched transformers, remove the image's bundled sglang sources,
# then install the patched sglang tree in their place.
RUN cd /root/src/transformers-4.51.3 && \
    rm -rf /sgl-workspace/sglang/python/sglang /sgl-workspace/sglang/python/sglang.egg-info/ && \
    pip install . && \
    cd /root/src/sglang-0.4.6.post5 && \
    pip install . --target /sgl-workspace/sglang/python/
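Once the image is rebuilt, a quick smoke test inside the container can confirm that the patched tree actually registers the architecture. The snippet below calls the same ModelRegistry.resolve_model_cls that raises the ValueError in the traceback above; using the bare class name GroveMoeForCausalLM is an assumption, per the note above.

    # Run inside the built container, e.g. docker run --rm <image> python check_arch.py
    # ModelRegistry.resolve_model_cls is the exact call that failed in the
    # traceback, so a clean print here means the registration is in place.
    from sglang.srt.models.registry import ModelRegistry

    model_cls, arch = ModelRegistry.resolve_model_cls(["GroveMoeForCausalLM"])
    print(f"resolved {arch} -> {model_cls.__name__}")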
