@@ -112,25 +112,27 @@ supported on Twinkle✨ framework.
 both Tinker APIs, as well as the full-fledged Twinkle✨ native APIs. The serverless endpoint is backed
 by one training base at a time, and currently it is [Qwen3-30B-A3B-Instruct-2507](https://modelscope.cn/models/Qwen/Qwen3-30B-A3B-Instruct-2507).
 
-| Model Type          | Model ID on [ModelScope](https://modelscope.cn) | Model Size | Requires | Support Megatron | HF Model ID |
-| ------------------- | ----------------------------------------------- | :--------: | -------- | :--------------: | :---------: |
-| qwen3 series        | [Qwen/Qwen3-14B-Base](https://modelscope.cn/models/Qwen/Qwen3-14B-Base) | 0.6B/1.7B/4B/8B/14B | transformers>=4.51 | ✔ | [Qwen/Qwen3-14B-Base](https://huggingface.co/Qwen/Qwen3-14B-Base) |
-|                     | [Qwen/Qwen3-32B](https://modelscope.cn/models/Qwen/Qwen3-32B) | 0.6B/1.7B/4B/8B/14B/32B | transformers>=4.51 | ✔ | [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) |
-| qwen3_moe series    | [Qwen/Qwen3-30B-A3B-Base](https://modelscope.cn/models/Qwen/Qwen3-30B-A3B-Base) | 30B-A3B/A3B-Base,235B-A22B | transformers>=4.51 | ✔ | [Qwen/Qwen3-30B-A3B-Base](https://huggingface.co/Qwen/Qwen3-30B-A3B-Base) |
-| qwen2 series        | [Qwen/Qwen2-0.5B-Instruct](https://modelscope.cn/models/Qwen/Qwen2-0.5B-Instruct) | 0.5B/1.5B/7B/72B | transformers>=4.37 | ✔ | [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) |
-|                     | [Qwen/Qwen2-1.5B](https://modelscope.cn/models/Qwen/Qwen2-1.5B) | 0.5B/1.5B/7B/72B | transformers>=4.37 | ✔ | [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) |
-|                     | [Qwen/Qwen2.5-1.5B-Instruct](https://modelscope.cn/models/Qwen/Qwen2.5-1.5B-Instruct) | 0.5B/1.5B/3B/7B/14B/32B/72B | transformers>=4.37 | ✔ | [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) |
-|                     | [Qwen/Qwen2.5-0.5B](https://modelscope.cn/models/Qwen/Qwen2.5-0.5B) | 0.5B/1.5B/3B/7B/14B/32B | transformers>=4.37 | ✔ | [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) |
-| qwen2_moe series    | [Qwen/Qwen1.5-MoE-A2.7B-Chat](https://modelscope.cn/models/Qwen/Qwen1.5-MoE-A2.7B-Chat) | - | transformers>=4.40 | ✔ | [Qwen/Qwen1.5-MoE-A2.7B-Chat](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B-Chat) |
-|                     | [Qwen/Qwen1.5-MoE-A2.7B](https://modelscope.cn/models/Qwen/Qwen1.5-MoE-A2.7B) | - | transformers>=4.40 | ✔ | [Qwen/Qwen1.5-MoE-A2.7B](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) |
-| chatglm3 series     | [ZhipuAI/chatglm3-6b](https://modelscope.cn/models/ZhipuAI/chatglm3-6b) | 6b/6b-base/6b-32k/6b-128k | transformers<4.42 | ✘ | [zai-org/chatglm3-6b](https://huggingface.co/zai-org/chatglm3-6b) |
-| chatglm4 series     | [ZhipuAI/glm-4-9b-chat](https://modelscope.cn/models/ZhipuAI/glm-4-9b-chat) | glm-4-9b/glm-4-9b-chat/glm-4-9b-chat-1m | transformers>=4.42 | ✘ | [zai-org/glm-4-9b-chat](https://huggingface.co/zai-org/glm-4-9b-chat) |
-|                     | [ZhipuAI/LongWriter-glm4-9b](https://modelscope.cn/models/ZhipuAI/LongWriter-glm4-9b) | - | transformers>=4.42 | ✘ | [zai-org/LongWriter-glm4-9b](https://huggingface.co/zai-org/LongWriter-glm4-9b) |
-| glm_edge series     | [ZhipuAI/glm-edge-1.5b-chat](https://modelscope.cn/models/ZhipuAI/glm-edge-1.5b-chat) | 1.5b-chat/4b-chat | transformers>=4.46 | ✘ | [zai-org/glm-edge-1.5b-chat](https://huggingface.co/zai-org/glm-edge-1.5b-chat) |
-| internlm2 series    | [Shanghai_AI_Laboratory/internlm2-1_8b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-1_8b) | 1_8b/chat-1_8b-sft/base-7b/7b/chat-7b/ | transformers>=4.38 | ✘ | [internlm/internlm2-1_8b](https://huggingface.co/internlm/internlm2-1_8b) |
-| deepseek_v1         | [deepseek-ai/DeepSeek-V2-Lite](https://modelscope.cn/models/deepseek-ai/DeepSeek-V2-Lite) | V2/V2-Lite/V2-Chat/2-Lite-Chat/V2.5 | transformers>=4.39.3 | ✔ | [deepseek-ai/DeepSeek-V2-Lite](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite) |
-|                     | [deepseek-ai/DeepSeek-Prover-V2-7B](https://modelscope.cn/models/deepseek-ai/DeepSeek-Prover-V2-7B) | - | transformers>=4.39.3 | ✔ | [deepseek-ai/DeepSeek-Prover-V2-7B](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-7B) |
-|                     | [deepseek-ai/DeepSeek-R1](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1) | - | transformers>=4.39.3 | ✔ | [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
+| Model Type          | Model ID on [ModelScope](https://modelscope.cn) | Model Size | Requires | Support Megatron | HF Model ID |
+| ------------------- | ----------------------------------------------- | :--------: | -------- | :--------------: | :---------: |
+| qwen3 series        | [Qwen/Qwen3-14B-Base](https://modelscope.cn/models/Qwen/Qwen3-14B-Base) | 0.6B/1.7B/4B/8B/14B | transformers>=4.51 | ✔ | [Qwen/Qwen3-14B-Base](https://huggingface.co/Qwen/Qwen3-14B-Base) |
+|                     | [Qwen/Qwen3-32B](https://modelscope.cn/models/Qwen/Qwen3-32B) | 0.6B/1.7B/4B/8B/14B/32B | transformers>=4.51 | ✔ | [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) |
+| qwen3_moe series    | [Qwen/Qwen3-30B-A3B-Base](https://modelscope.cn/models/Qwen/Qwen3-30B-A3B-Base) | 30B-A3B/A3B-Base,235B-A22B | transformers>=4.51 | ✔ | [Qwen/Qwen3-30B-A3B-Base](https://huggingface.co/Qwen/Qwen3-30B-A3B-Base) |
+| qwen3.5 moe series  | [Qwen/Qwen3.5-35B-A3B](https://www.modelscope.cn/models/Qwen/Qwen3.5-35B-A3B) | 35B-A3B,122B-A10B, etc. | transformers>=5.20 | ✔ | [Qwen/Qwen3.5-35B-A3B](https://huggingface.co/Qwen/Qwen3.5-35B-A3B) |
+| qwen3.5 series      | [Qwen/Qwen3.5-9B](https://www.modelscope.cn/models/Qwen/Qwen3.5-9B) | 2B ~ 27B | transformers>=5.20 | ✔ | [Qwen/Qwen3.5-9B](https://huggingface.co/Qwen/Qwen3.5-9B) |
+| qwen2 series        | [Qwen/Qwen2-0.5B-Instruct](https://modelscope.cn/models/Qwen/Qwen2-0.5B-Instruct) | 0.5B/1.5B/7B/72B | transformers>=4.37 | ✔ | [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) |
+|                     | [Qwen/Qwen2-1.5B](https://modelscope.cn/models/Qwen/Qwen2-1.5B) | 0.5B/1.5B/7B/72B | transformers>=4.37 | ✔ | [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) |
+|                     | [Qwen/Qwen2.5-1.5B-Instruct](https://modelscope.cn/models/Qwen/Qwen2.5-1.5B-Instruct) | 0.5B/1.5B/3B/7B/14B/32B/72B | transformers>=4.37 | ✔ | [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) |
+|                     | [Qwen/Qwen2.5-0.5B](https://modelscope.cn/models/Qwen/Qwen2.5-0.5B) | 0.5B/1.5B/3B/7B/14B/32B | transformers>=4.37 | ✔ | [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) |
+| qwen2_moe series    | [Qwen/Qwen1.5-MoE-A2.7B-Chat](https://modelscope.cn/models/Qwen/Qwen1.5-MoE-A2.7B-Chat) | - | transformers>=4.40 | ✔ | [Qwen/Qwen1.5-MoE-A2.7B-Chat](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B-Chat) |
+|                     | [Qwen/Qwen1.5-MoE-A2.7B](https://modelscope.cn/models/Qwen/Qwen1.5-MoE-A2.7B) | - | transformers>=4.40 | ✔ | [Qwen/Qwen1.5-MoE-A2.7B](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) |
+| chatglm3 series     | [ZhipuAI/chatglm3-6b](https://modelscope.cn/models/ZhipuAI/chatglm3-6b) | 6b/6b-base/6b-32k/6b-128k | transformers<4.42 | ✘ | [zai-org/chatglm3-6b](https://huggingface.co/zai-org/chatglm3-6b) |
+| chatglm4 series     | [ZhipuAI/glm-4-9b-chat](https://modelscope.cn/models/ZhipuAI/glm-4-9b-chat) | glm-4-9b/glm-4-9b-chat/glm-4-9b-chat-1m | transformers>=4.42 | ✘ | [zai-org/glm-4-9b-chat](https://huggingface.co/zai-org/glm-4-9b-chat) |
+|                     | [ZhipuAI/LongWriter-glm4-9b](https://modelscope.cn/models/ZhipuAI/LongWriter-glm4-9b) | - | transformers>=4.42 | ✘ | [zai-org/LongWriter-glm4-9b](https://huggingface.co/zai-org/LongWriter-glm4-9b) |
+| glm_edge series     | [ZhipuAI/glm-edge-1.5b-chat](https://modelscope.cn/models/ZhipuAI/glm-edge-1.5b-chat) | 1.5b-chat/4b-chat | transformers>=4.46 | ✘ | [zai-org/glm-edge-1.5b-chat](https://huggingface.co/zai-org/glm-edge-1.5b-chat) |
+| internlm2 series    | [Shanghai_AI_Laboratory/internlm2-1_8b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-1_8b) | 1_8b/chat-1_8b-sft/base-7b/7b/chat-7b/ | transformers>=4.38 | ✘ | [internlm/internlm2-1_8b](https://huggingface.co/internlm/internlm2-1_8b) |
+| deepseek_v1         | [deepseek-ai/DeepSeek-V2-Lite](https://modelscope.cn/models/deepseek-ai/DeepSeek-V2-Lite) | V2/V2-Lite/V2-Chat/2-Lite-Chat/V2.5 | transformers>=4.39.3 | ✔ | [deepseek-ai/DeepSeek-V2-Lite](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite) |
+|                     | [deepseek-ai/DeepSeek-Prover-V2-7B](https://modelscope.cn/models/deepseek-ai/DeepSeek-Prover-V2-7B) | - | transformers>=4.39.3 | ✔ | [deepseek-ai/DeepSeek-Prover-V2-7B](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-7B) |
+|                     | [deepseek-ai/DeepSeek-R1](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1) | - | transformers>=4.39.3 | ✔ | [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
 | deepSeek-r1-distill | [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) | 1.5B/7B/14B/32B | transformers>=4.37 | ✔ | [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
 
 For a more detailed model support list, see 👉 [Quick Start](docs/source_en/Usage%20Guide/Quick-Start.md)
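The "Requires" column above pins a minimum `transformers` version per model family (a maximum, in the case of chatglm3). A minimal sketch of validating the installed version against those constraints before training; the `REQUIREMENTS` mapping and `meets_requirement` helper are illustrative only, not part of Twinkle's API:

```python
# Hypothetical pre-flight check: does the installed transformers version
# satisfy the requirement listed in the model support table?
from packaging.version import Version

# Requirement strings copied from the "Requires" column above.
REQUIREMENTS = {
    "qwen3": ">=4.51",
    "qwen2": ">=4.37",
    "chatglm3": "<4.42",
    "deepseek_v1": ">=4.39.3",
}

def meets_requirement(installed: str, spec: str) -> bool:
    """Return True if `installed` satisfies a single '>=' or '<' spec."""
    if spec.startswith(">="):
        return Version(installed) >= Version(spec[2:])
    if spec.startswith("<"):
        return Version(installed) < Version(spec[1:])
    raise ValueError(f"unsupported spec: {spec}")

# In practice you would pass transformers.__version__ here.
print(meets_requirement("4.51.0", REQUIREMENTS["qwen3"]))  # True
```

Using `packaging.version.Version` rather than string comparison matters for cases like `4.9` vs `4.39.3`, where lexicographic ordering gives the wrong answer.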
@@ -159,7 +161,7 @@ twinkle.initialize(mode='ray', groups=device_group, global_device_mesh=device_me
 
 def train():
     # to load model from Hugging Face, use 'hf://...'
-    base_model = 'ms://Qwen/Qwen3-4B'
+    base_model = 'ms://Qwen/Qwen3.5-4B'
     # 1000 samples
     dataset = Dataset(dataset_meta=DatasetMeta('ms://swift/self-cognition', data_slice=range(1000)))
     # Set template to prepare encoding