diff --git a/docs/marketplace/guides/troubleshooting.md b/docs/marketplace/guides/troubleshooting.md
index 174ae083..53549afc 100644
--- a/docs/marketplace/guides/troubleshooting.md
+++ b/docs/marketplace/guides/troubleshooting.md
@@ -17,7 +17,7 @@ If the **Enter Marketplace** button is stuck in the loading state, open the Meta
 
 If MetaMask shows no requests, refresh the page and press the **Enter Marketplace** button again.
 
-If the problem persists, restart your browser and log in to MetaMask before trying to enter the Marketplace.
+If the problem persists, restart your browser and unlock MetaMask before trying to enter the Marketplace.
 
 ## Order status: Error
 
@@ -27,7 +27,7 @@ The **Error** status means something went completely wrong, and the model was no

-Click the **Get Result** button to download the order log file containing technical information about the error. If the file is difficult to read, you can analyze it using a chatbot like Claude or ChatGPT.
+Click the **Get Result** button to download the order log file containing technical information about the error. If the file is difficult to read, you can analyze it using a chatbot like Claude, ChatGPT, or similar.
 
 ### Trust remote code
 
@@ -35,7 +35,7 @@ Some models may contain scripts without which the model will not run. If you dep
 
 An example of a model that relies on remote scripts is [cerebras/btlm-3b-8k-base](https://huggingface.co/cerebras/btlm-3b-8k-base).
 
-To avoid this error, allow executing remote code in the engine settings before checkout using the following instructions:
+To avoid this error, allow executing remote code in the engine settings using the following instructions.
 
 In the Order Builder, open the engine settings.
 
diff --git a/docs/marketplace/guides/upload.md b/docs/marketplace/guides/upload.md
index cadedd72..f7fc80ae 100644
--- a/docs/marketplace/guides/upload.md
+++ b/docs/marketplace/guides/upload.md
@@ -10,7 +10,7 @@ import TabItem from '@theme/TabItem';
 
 This guide provides step-by-step instructions for uploading a model to Super Protocol.
 
-## Step 1. Requirements for models
+## Step 1. Check requirements
 
 Ensure your model meets the Super Protocol requirements. Otherwise, the order may fail.
 
@@ -42,13 +42,13 @@ If your model is from Hugging Face, ensure its _task_ matches one of the support
 
 ### Model size
 
-The total size of your model files must not exceed 10 GB; otherwise, deployment may fail. Larger slots to support bigger models will be available in the future.
+The total size of your model files must not exceed 10 GB; otherwise, deployment may fail. More compute with larger slots to support bigger models will be available in the future.
 
-Note that large models perform poorly on TDX CPU without GPU support. If you plan on deploying on CPU, choose a smaller model.
+Note that large models may perform poorly on TDX machines without GPU support. If you plan on deploying on CPU, choose a smaller model.
 
 ## Step 2. Select files
 
-Models consist of multiple files. Usually, not all of them are required. Select files following the instructions for your model's format.
+Models consist of multiple files. Often, not all of them are required. Select files following the instructions for your model's format.
 
 ### GGUF format
 
@@ -61,7 +61,7 @@ Choose one GGUF file and place it in a separate directory. For example:
 
 ### GGML format
 
-Models in the GGML formal are quantized as well.
+Models in the GGML format are often quantized as well.
 
 Choose one BIN and all other files and place them in a separate directory. For example:
 
@@ -70,15 +70,27 @@ Choose one BIN and all other files and place them in a separate directory. For e
 
 ### Safetensors formats
 
-Place all files from the repository into a separate directory.
+Place all files from the repository in a separate directory. For example:
 
-Ensure to avoid duplications. If a single consolidated `model.safetensors` file and multiple `model-xxxxx-of-yyyyy.safetensors` files are available, select one set and remove the other. For example:
+
+
+
+
+Avoid duplications. Some repositories contain several variants of the same model. For example, one of the highlighted files in the following screenshot can be removed:
+
+
+
+
+If a single consolidated `model.safetensors` file and multiple `model-xxxxx-of-yyyyy.safetensors` files are available, choose one set and remove the other. For example, one of the highlighted sets of files in the following screenshot should be removed:

-If multiple formats are available, select one format and remove the others. For example:
+### Multiple formats
+
+If multiple formats are available, choose one of them and remove the others. For example, one of the highlighted sets of files in the following screenshot should be removed:
@@ -142,9 +154,9 @@ Fill out the form:
 
 - **Content Name**: type the desired model name. It may be different from the file name. Providing a meaningful name makes it easier to find the model later.
 - **Category**: choose the model category from the drop-down menu. You can only choose one.
-- **Engine**: choose compatible engines from the drop-down menu. It is recommended to choose both and then decide how you want to run the model during order creation:
-  + Engines marked as 'GPU only' run the model on an NVIDIA H100 Tensor Core GPU.
-  + Engines marked as 'CPU only' run the model on an Intel TDX CPU.
+- **Engine**: choose compatible engines from the drop-down menu. You can choose both **CPU only** and **GPU only** variants and decide how you want to run the model later during order creation:
+  + Engines marked as **GPU only** run the model on an NVIDIA H100 Tensor Core GPU.
+  + Engines marked as **CPU only** run the model on an Intel TDX CPU.
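Step 2 of the upload guide above asks you to keep only one format and one set of files. If you want to see what a Hugging Face repository actually contains before downloading anything, a small script can group its files by extension. This is only a minimal sketch, not part of Super Protocol's tooling; it assumes the `huggingface_hub` Python package is installed, and the repository ID is just an example:

```python
# Minimal sketch (not part of Super Protocol tooling): list the files of a
# Hugging Face repository and group them by extension, so you can see which
# formats are present and keep only one set. Assumes `pip install huggingface_hub`.
from collections import defaultdict

from huggingface_hub import list_repo_files

repo_id = "cerebras/btlm-3b-8k-base"  # example repository; replace with your model

files_by_ext = defaultdict(list)
for name in list_repo_files(repo_id):
    ext = name.rsplit(".", 1)[-1] if "." in name else "(no extension)"
    files_by_ext[ext].append(name)

for ext, names in sorted(files_by_ext.items()):
    print(f"{ext}: {len(names)} file(s)")
    for n in names:
        print(f"  {n}")
```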
diff --git a/docs/marketplace/images/hf-formats.png b/docs/marketplace/images/hf-formats.png
index e174ce6e..e8e6354e 100644
Binary files a/docs/marketplace/images/hf-formats.png and b/docs/marketplace/images/hf-formats.png differ
diff --git a/docs/marketplace/images/hf-safetensors-consolidated.png b/docs/marketplace/images/hf-safetensors-consolidated.png
index 762a32ec..6650ebcc 100644
Binary files a/docs/marketplace/images/hf-safetensors-consolidated.png and b/docs/marketplace/images/hf-safetensors-consolidated.png differ
diff --git a/docs/marketplace/images/hf-safetensors-duplicates.png b/docs/marketplace/images/hf-safetensors-duplicates.png
new file mode 100644
index 00000000..68a36490
Binary files /dev/null and b/docs/marketplace/images/hf-safetensors-duplicates.png differ
diff --git a/docs/marketplace/images/hf-safetensors.png b/docs/marketplace/images/hf-safetensors.png
new file mode 100644
index 00000000..3c6aee96
Binary files /dev/null and b/docs/marketplace/images/hf-safetensors.png differ
diff --git a/docs/marketplace/images/stuck-login.png b/docs/marketplace/images/stuck-login.png
index 91a45f4c..842ec2bc 100644
Binary files a/docs/marketplace/images/stuck-login.png and b/docs/marketplace/images/stuck-login.png differ
diff --git a/docs/marketplace/limitations.md b/docs/marketplace/limitations.md
index 2606a556..e8a0bdfd 100644
--- a/docs/marketplace/limitations.md
+++ b/docs/marketplace/limitations.md
@@ -5,37 +5,39 @@ slug: "/limitations"
 sidebar_position: 1
 ---
 
-The testnet has a limited amount of computing resources. To ensure fair access, Super Protocol has set limits on CPU/GPU compute configurations and token availability, allowing everyone to participate. Additional NVIDIA H100 machines will be added soon.
+The testnet has a limited amount of computing resources. To ensure fair access, Super Protocol has set limits on CPU/GPU compute configurations and token availability, allowing everyone to participate.
 
 ## Pricing for orders
 
-On the testnet, you can deploy models using either Intel TDX CPUs or NVIDIA H100 GPUs. While GPUs are significantly faster, CPUs are cheaper and have more availability. The current public machines include:
+On the testnet, you can deploy models using either Intel TDX CPUs or NVIDIA H100 Tensor Core GPUs.
 
-- **Super 2: TDX+H100 (Public)** can run orders on GPU.
-- **Super 3: TDX (Public)** can only run orders on CPU.
+Super Protocol has two types of compute:
+
+- **TDX+H100** machines can run orders on GPU or CPU, depending on the selected engine type.
+- **TDX** machines can only run orders on CPU. Note that this mode is much slower.
+
+Super Protocol is constantly adding more TDX+H100 machines and will soon begin onboarding machines from third-party providers.
 
 Pricing and restrictions:
 
-- Order lease time: minimum of 2 hours and a maximum of 4 hours.
-- Price per hour:
-  + CPU: 2.13 TEE tokens/hour.
-  + GPU: 4.126 TEE tokens/hour.
+- Order lease time: minimum 2 hours and maximum 4 hours.
+- Compute costs 4.134 TEE tokens per hour.
 - Models from the Marketplace cost 1 TEE per order.
 - Engines cost 0.5 TEE per order.
-- Tunnels Launcher fee to set up a confidential tunnel: approximately 1-2 TEE per order.
+- Tunnels Launcher fee to set up a confidential tunnel is approximately 1-2 TEE per order.
 
 For example, a four-hour GPU order will cost approximately:
 
-4.126 * 4 + 1 + 0.5 + 2 = **~19-20 TEE** tokens.
+4.134 * 4 + 1 + 0.5 + 2 = **~19-20 TEE** tokens.
 
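The ~19-20 TEE spread in the example above comes from the approximate 1-2 TEE Tunnels Launcher fee. A small sketch that reproduces the estimate from the prices listed on this page:

```python
# Worked version of the estimate above, using the prices listed on this page.
COMPUTE_PER_HOUR = 4.134        # TEE tokens per hour of compute
MODEL_FEE = 1.0                 # Marketplace model, per order
ENGINE_FEE = 0.5                # engine, per order
TUNNEL_FEE_RANGE = (1.0, 2.0)   # Tunnels Launcher fee, roughly 1-2 TEE per order

hours = 4
base = COMPUTE_PER_HOUR * hours + MODEL_FEE + ENGINE_FEE
low, high = (base + fee for fee in TUNNEL_FEE_RANGE)
print(f"Estimated cost for a {hours}-hour GPU order: {low:.2f}-{high:.2f} TEE")
# Estimated cost for a 4-hour GPU order: 19.04-20.04 TEE
```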
 ## Token limits
 
-- **Demo users** receive a one-time advance of 50 TEE tokens with no possibility to replenish. To continue testing, users must [log in with a Web3 account](/marketplace/guides/log-in).
-- **Web3 users** can receive up to 25 TEE tokens and 5 POL tokens per day, which can be replenished daily. At any given time, Web3 users can hold a maximum of 25 TEE tokens and 5 POL tokens in their wallets.
+- **Demo users** receive a one-time advance of 50 TEE tokens that cannot be replenished. To continue testing, users must [log in with a Web3 account](/marketplace/guides/log-in).
+- **Web3 users** can receive up to 25 TEE tokens and 5 POL tokens daily. At any given time, Web3 users can hold a maximum of 25 TEE tokens and 5 POL tokens in their wallets.
 
 ## Model limits
 
-The total size of your model files must not exceed 10 GB; otherwise, deployment may fail. Larger slots to support bigger models will be available in the future.
+The total size of your model files should not exceed 10 GB; otherwise, deployment may fail. More compute with larger slots to support bigger models will be available in the future.
 
 Also, deployed models must belong to a category supported by one of the AI engines:
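As for the 10 GB size limit mentioned above, you can check the total size of the files you prepared before uploading. This is a minimal sketch using only the Python standard library; the directory path is a placeholder for your own model directory:

```python
# Minimal sketch: add up the size of the files you selected in Step 2 of the
# upload guide and compare the total against the 10 GB limit mentioned above.
# The path below is a placeholder; point it at your own model directory.
from pathlib import Path

LIMIT_GB = 10
model_dir = Path("./my-model")

total_bytes = sum(p.stat().st_size for p in model_dir.rglob("*") if p.is_file())
total_gb = total_bytes / 1024**3

print(f"Total size: {total_gb:.2f} GB")
if total_gb > LIMIT_GB:
    print(f"Over the limit: remove files or choose a smaller model (max {LIMIT_GB} GB).")
```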