diff --git a/docs/marketplace/Guides/prepare-comfyui.md b/docs/cli/Guides/Solutions/comfyui.md similarity index 91% rename from docs/marketplace/Guides/prepare-comfyui.md rename to docs/cli/Guides/Solutions/comfyui.md index 5b8dc8eb..aa1cab11 100644 --- a/docs/marketplace/Guides/prepare-comfyui.md +++ b/docs/cli/Guides/Solutions/comfyui.md @@ -1,14 +1,14 @@ --- -id: "prepare-comfyui" -title: "Prepare a ComfyUI Workflow" -slug: "/guides/prepare-comfyui" -sidebar_position: 5 +id: "comfyui" +title: "ComfyUI" +slug: "/guides/solutions/comfyui" +sidebar_position: 2 --- import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; -This guide provides step-by-step instructions for preparing a **ComfyUI** workflow with custom nodes before uploading it. For security reasons, you cannot upload custom nodes directly to a deployed ComfyUI. +This guide provides step-by-step instructions for preparing a **ComfyUI** workflow with custom nodes to run on Super Protocol. For security reasons, you cannot upload custom nodes directly to a deployed ComfyUI. :::note @@ -28,7 +28,7 @@ You can prepare your model, workflow, and custom node files manually or using Do 1. Clone the [Super-Protocol/solutions](https://github.com/Super-Protocol/solutions/) GitHub repository to the location of your choosing: - ``` + ```shell git clone https://github.com/Super-Protocol/solutions.git --depth 1 ``` @@ -54,13 +54,13 @@ You can prepare your model, workflow, and custom node files manually or using Do Access the running container with the following command: - ``` + ```shell docker exec -it comfyui bash ``` Go to the `models` directory inside the container and download the model files to the corresponding subdirectories using the `wget` command. 
For example: - ``` + ```shell wget https://huggingface.co/prompthero/openjourney/resolve/main/mdjrny-v4.safetensors ``` @@ -68,7 +68,7 @@ You can prepare your model, workflow, and custom node files manually or using Do If you have the model on your computer, copy its files to the container using the following command: - ``` + ```shell docker cp <path_to_model_file> comfyui:<path_inside_container> ``` @@ -77,7 +77,7 @@ You can prepare your model, workflow, and custom node files manually or using Do For example: - ``` + ```shell docker cp ~/Downloads/openjourney/mdjrny-v4.safetensors comfyui:/opt/ComfyUI/models/checkpoints/mdjrny-v4.safetensors ``` @@ -87,7 +87,7 @@ You can prepare your model, workflow, and custom node files manually or using Do 8. Unpack the archive using the following command: - ``` + ```shell tar -xvzf snapshot.tar.gz -C <target_directory> ``` @@ -159,6 +159,6 @@ You can prepare your model, workflow, and custom node files manually or using Do -## Contact Super Protocol +## Support -If you face any issues, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). \ No newline at end of file +If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new).
\ No newline at end of file diff --git a/docs/cli/Guides/Solutions/tgwui.md b/docs/cli/Guides/Solutions/tgwui.md new file mode 100644 index 00000000..d6ce3bb0 --- /dev/null +++ b/docs/cli/Guides/Solutions/tgwui.md @@ -0,0 +1,6 @@ +--- +id: "tgwui" +title: "Text Generation WebUI" +slug: "/guides/solutions/tgwui" +sidebar_position: 1 +--- \ No newline at end of file diff --git a/docs/cli/Guides/Solutions/unsloth.md b/docs/cli/Guides/Solutions/unsloth.md new file mode 100644 index 00000000..02d0b996 --- /dev/null +++ b/docs/cli/Guides/Solutions/unsloth.md @@ -0,0 +1,189 @@ +--- +id: "unsloth" +title: "Unsloth" +slug: "/guides/solutions/unsloth" +sidebar_position: 3 +--- + +This guide provides step-by-step instructions for fine-tuning an AI model using the Super Protocol packaging of [Unsloth](https://unsloth.ai/), an open-source framework for LLM fine-tuning and reinforcement learning. + +The solution allows you to run fine-tuning within Super Protocol's Trusted Execution Environment (TEE). This provides enhanced security and privacy and enables a range of [confidential collaboration](https://docs.develop.superprotocol.com/cli/guides/fine-tune) scenarios. + +## Prerequisites + +- [SPCTL](https://docs.develop.superprotocol.com/cli/) +- Git +- BNB and SPPI tokens (opBNB) to pay for transactions and orders + +## Repository + +Clone the repository with Super Protocol solutions: + +```shell +git clone https://github.com/Super-Protocol/solutions.git +``` + +The Unsloth solution includes a Dockerfile and a helper script `run-unsloth.sh` that facilitates workflow creation. Note that `run-unsloth.sh` does not build an image and instead uses a pre-existing solution offer. + +## run-unsloth.sh + +Copy SPCTL’s binary and its `config.json` to the `unsloth/scripts` directory inside the cloned Super-Protocol/solutions repository. + +### 1. 
Prepare training scripts + +When preparing your training scripts, keep in mind the special file structure within the TEE: + +| **Location** | **Purpose** | **Access** | +| :- | :- | :- | +| `/sp/inputs/input-0001`
`/sp/inputs/input-0002`
etc. | Possible data locations
(AI model, dataset, training scripts, etc.) | Read-only | +| `/sp/output` | Output directory for results | Read and write | +| `/sp/certs` | Contains the order certificate, private key, and `workloadInfo` | Read-only | + +Your scripts must find the data in `/sp/inputs` and write the results to `/sp/output`. + +### 2. Place an order + +2.1. Initiate a dialog to construct and place an order: + +```shell +./run-unsloth.sh +``` + +2.2. `Enter TEE offer id (number)`: Enter a compute offer ID. This determines the available compute resources and cost of your order. You can find the full list of available compute offers on the [Marketplace](https://marketplace.superprotocol.com/). + +2.3. `Choose run mode`: `1) file`. + +2.4. `Select the model option`: + +- `1) Medgemma 27b (offer 15900)`: Select this option if you need an untuned MedGemma 27B. +- `2) your model`: Select this option to use another model. Then, when prompted for `Model input`, enter one of the following: + - a path to the model's resource JSON file, if it was already uploaded with SPCTL + - model offer ID, if the model exists on the Marketplace + - a path to the local directory with the model to upload it using SPCTL. +- `3) no model`: No model will be used. + +2.5. `Enter path to a .py/.ipynb file OR a directory`: Enter the path to your training script (file or directory). For a directory, select the file to run (entrypoint) when prompted. Note that you cannot reuse resource files in this step; scripts must be uploaded every time. + +2.6. `Provide your dataset as a resource JSON path, numeric offer id, or folder path`: As with the model, enter one of the following: + +- a path to the dataset's resource JSON file, if it was already uploaded with SPCTL +- dataset offer ID, if the dataset exists on the Marketplace +- a path to the local directory with the dataset to upload it using SPCTL. + +2.7.
`Upload SPCTL config file as a resource?`: Answer `N` unless you need to use SPCTL from within the TEE during the order execution. In this case, your script should run a `curl` command to download SPCTL and find the uploaded `config.json` in the `/sp/inputs/` subdirectories. + +2.8. Wait for the order to be created and find the order ID in the output, for example: + +```shell +Unsloth order id: 259126 +Done. +``` + +### 3. Check the order result + +3.1. The order will take some time to complete. Check the order status: + +```shell +./spctl orders get <order_id> +``` + +Replace `<order_id>` with your order ID. + +If you have lost the order ID, check all your orders to find it: + +```shell +./spctl orders list --my-account --type tee +``` + +3.2. When the order status is `Done` or `Error`, download the result: + +```shell +./spctl orders download-result <order_id> +``` + +The downloaded TAR.GZ archive contains the results in the `output` directory and execution logs. + +## Dry run + +```shell +./run-unsloth.sh --suggest-only +``` + +The option `--suggest-only` allows you to perform a dry run without actually uploading files or creating orders. + +Complete the dialog as usual, but use only absolute paths. + +In the output, you will see a prepared command for running the script non-interactively, allowing you to easily modify the variables and avoid re-entering the dialog. For example: + +```shell +RUN_MODE=file \ +RUN_DIR=/home/user/Downloads/yma-run \ +RUN_FILE=sft_example.py \ +DATA_RESOURCE=/home/user/unsloth/scripts/yma_data_example-data.json \ +MODEL_RESOURCE=/home/user/unsloth/scripts/medgemma-27b-ft-merged.resource.json \ +/home/user/unsloth/scripts/run-unsloth.sh \ +--tee 8 \ +--config ./config.json +``` + +## Jupyter Notebook + +You can launch and use Jupyter Notebook instead of uploading training scripts directly. + +Initiate a dialog: + +```shell +./run-unsloth.sh +``` + +When prompted: + +1. `Enter TEE offer id`: Enter a compute offer ID.
This determines the available compute resources and cost of your order. You can find the full list of available compute offers on the [Marketplace](https://marketplace.superprotocol.com/). + +2. `Choose run mode`: `2) jupyter-server`. + +3. `Select the model option`: + +- `1) Medgemma 27b (offer 15900)`: Select this option if you need an untuned MedGemma 27B. +- `2) your model`: Select this option to use another model. Then, when prompted for `Model input`, enter one of the following: + - a path to the model's resource JSON file, if it was already uploaded with SPCTL + - model offer ID, if the model exists on the Marketplace + - a path to the local directory with the model to upload it using SPCTL. +- `3) no model`: No model will be used. + +4. `Enter Jupyter password` or press Enter to proceed without a password. + +5. `Select domain option`: + +- `1) Temporary Domain (*.superprotocol.io)` is suitable for testing and quick deployments. +- `2) Own domain` will require you to provide a domain name, TLS certificate, private key, and a tunnel server auth token. + +Wait for the Tunnels Launcher order to be created. + +6. `Provide your dataset as a resource JSON path, numeric offer id, or folder path`: As with the model, enter one of the following: +- a path to the dataset's resource JSON file, if it was already uploaded with SPCTL +- dataset offer ID, if the dataset exists on the Marketplace +- a path to the local directory with the dataset to upload it using SPCTL. + +7. `Upload SPCTL config file as a resource?`: Answer `N` unless you need to use SPCTL from within the TEE during the order execution. In this case, your script should run a `curl` command to download SPCTL and find the uploaded `config.json` in the `/sp/inputs/` subdirectories. + +8.
Wait for the Jupyter order to be ready and find a link in the output; for example: + +```shell +=================================================== +Jupyter instance is available at: https://beja-bine-envy.superprotocol.io +=================================================== +``` + +9. Open the link in your browser to access Jupyter’s UI. + +**Note**: + +The data in `/sp/output` will not be published as the order result when running the Jupyter server. To save your fine-tuning results, upload them in one of the following ways: +- via Python code +- using the integrated terminal in the Jupyter server +- using SPCTL with the config uploaded at Step 7. + +## Support + +If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). \ No newline at end of file diff --git a/docs/cli/Guides/Solutions/vllm.md b/docs/cli/Guides/Solutions/vllm.md new file mode 100644 index 00000000..cda0ab5d --- /dev/null +++ b/docs/cli/Guides/Solutions/vllm.md @@ -0,0 +1,101 @@ +--- +id: "vllm" +title: "vLLM" +slug: "/guides/solutions/vllm" +sidebar_position: 4 +--- + +This guide provides step-by-step instructions for running AI model inference using the Super Protocol packaging of [vLLM](https://www.vllm.ai/), an inference and serving engine for LLMs. + +The solution allows you to run LLM inference within Super Protocol's Trusted Execution Environment (TEE). + +## Prerequisites + +- [SPCTL](https://docs.develop.superprotocol.com/cli/) +- Git +- BNB and SPPI tokens (opBNB) to pay for transactions and orders + +## Repository + +Clone the repository with Super Protocol solutions: + +```shell +git clone https://github.com/Super-Protocol/solutions.git +``` + +The vLLM solution includes a Dockerfile and a helper script `run-vllm.sh` that facilitates workflow creation. Note that `run-vllm.sh` does not build an image and instead uses a pre-existing solution offer.
+ +## run-vllm.sh + +Copy SPCTL’s binary and its `config.json` to the `vllm/scripts` directory inside the cloned Super-Protocol/solutions repository. + +### Place an order + +1. Initiate a dialog to construct and place an order: + +```shell +./run-vllm.sh +``` + +2. `Select domain option`: + +- `1) Temporary Domain (*.superprotocol.io)` is suitable for testing and quick deployments. +- `2) Own domain` will require you to provide a domain name, TLS certificate, private key, and a tunnel server auth token. + +3. `Enter TEE offer id`: Enter a compute offer ID. This determines the available compute resources and cost of your order. You can find the full list of available compute offers on the [Marketplace](https://marketplace.superprotocol.com/). + +4. `Provide model as resource JSON path, numeric offer id, or folder path`: Enter one of the following: + +- a path to the model's resource JSON file, if it was already uploaded with SPCTL +- model offer ID, if the model exists on the Marketplace +- a path to the local directory with the model to upload it using SPCTL. + +5. `Enter API key` or press `Enter` to generate one automatically. + +Wait for the deployment to be ready and find the information about it in the output, for example: + +```shell +=================================================== +VLLM server is available at: https://whau-trug-nail.superprotocol.io +API key: d75c577d-e538-4d09-8f59-a0f00ae961a3 +Order IDs: Launcher=269042, VLLM=269044 +=================================================== +``` + +### API + +Once deployed on Super Protocol, your model runs inside a TEE and exposes an OpenAI-compatible API. You can interact with it as you would with a local vLLM instance. 
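For instance, a minimal chat-completion request can be sketched in Python using only the standard library. The deployment URL, API key, and model name below are illustrative placeholders (the URL and key echo the sample output above); substitute the values of your own deployment:

```python
import json
import urllib.request

# Placeholder values -- replace with your own deployment output.
API_BASE = "https://whau-trug-nail.superprotocol.io"
API_KEY = "d75c577d-e538-4d09-8f59-a0f00ae961a3"

def build_chat_request(prompt: str, model: str = "my-model",
                       base: str = API_BASE, key: str = API_KEY):
    """Build an OpenAI-compatible chat-completions request."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        base + "/v1/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + key,
        },
    )

req = build_chat_request("Hello, what can you do?")
# Sending the request requires a live deployment:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client library pointed at your deployment URL and API key should work in the same way.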
+ Depending on the type of request you want to make, use the following API endpoints: + +- Chat Completions (`/v1/chat/completions`) +- Text Completions (`/v1/completions`) +- Embeddings (`/v1/embeddings`) +- Audio Transcriptions & Translations (`/v1/audio/transcriptions`, `/v1/audio/translations`) + +See the [full list of API endpoints](https://docs.vllm.ai/en/latest/serving/openai_compatible_server/). + +## Dry run + +```shell +./run-vllm.sh --suggest-only +``` + +The option `--suggest-only` allows you to perform a dry run without actually uploading files or creating orders. + +Complete the dialog as usual, but use only absolute paths. + +In the output, you will see a prepared command for running the script non-interactively, allowing you to easily modify the variables and avoid re-entering the dialog. For example: + +```shell +RUN_MODE=temporary \ +MODEL_RESOURCE=55 \ +VLLM_API_KEY=9c6dbf44-cef7-43a4-b362-43295b244446 \ +/home/user/vllm/scripts/run-vllm.sh \ +--config ./config.json \ +--tee 8 +``` + +## Support + +If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). \ No newline at end of file diff --git a/docs/cli/Guides/collaboration.md b/docs/cli/Guides/collaboration.md index c0fecb6f..b6540bab 100644 --- a/docs/cli/Guides/collaboration.md +++ b/docs/cli/Guides/collaboration.md @@ -92,11 +92,11 @@ Both Alice and Bob can retrieve the order report ([11](/cli/guides/collaboration 1.1. Write a Dockerfile that creates an image with your code. Keep in mind the special file structure inside the TEE: -| **Location** | **Purpose** | **Access** | -| :- | :- | :- | -| `/sp/inputs/input-0001/`<br/>
`/sp/inputs/input-0002/`
etc. | Possible data locations | Read-only | -| `/sp/output/` | Output directory for results | Write; read own files | -| `/sp/certs/` | Contains the order certificate | Read-only | +| **Location** | **Purpose** | **Access** | +| :- | :- | :- | +| `/sp/inputs/input-0001/`
`/sp/inputs/input-0002/`
etc. | Possible data locations | Read-only | +| `/sp/output/` | Output directory for results | Write; read own files | +| `/sp/certs/` | Contains the order certificate, private key, and workloadInfo | Read-only | Your scripts must find the data in `/sp/inputs/` and write the results to `/sp/output/`. diff --git a/docs/cli/Guides/confidential-fine-tuning.md b/docs/cli/Guides/confidential-fine-tuning.md index c3832876..f68d7f63 100644 --- a/docs/cli/Guides/confidential-fine-tuning.md +++ b/docs/cli/Guides/confidential-fine-tuning.md @@ -1,21 +1,21 @@ --- id: "fine-tune" -title: "Confidential Fine-Tuning" +title: "Confidential Collaboration" slug: "/guides/fine-tune" sidebar_position: 3 --- Super Protocol enables independent parties to jointly compute over their private inputs without revealing those inputs to one another. -This guide describes an example of confidential collaboration on Super Protocol: a fine-tuning of a pre-trained AI model. The scenario involves three parties: +This guide describes a scenario of confidential collaboration on Super Protocol. It uses fine-tuning of a pre-trained AI model as an example, but the general principle presented here may be applied to other cases. + +The scenario involves three parties: - **Alice** owns the AI model. - **Bob** owns the dataset. - **Carol** provides the training engine and scripts. -Neither Alice nor Bob is willing to share their intellectual property with other parties. At the same time, Carol must share her training engine and scripts with both parties so they can verify that the code is safe to run on their data. - -If Carol's training engine or scripts are proprietary and she cannot share them with Alice and Bob, a possible alternative is to involve independent security experts who can audit the code without exposing it publicly. +Neither Alice nor Bob is willing to share their intellectual property with other parties. 
At the same time, Carol must share her training engine and scripts with both parties so they can verify that the code is safe to run on their data. If Carol's training engine or scripts are proprietary and she cannot share them with Alice and Bob, a possible alternative is to involve independent security experts who can audit the code without exposing it publicly. The computation runs on Super Protocol within a Trusted Execution Environment that is isolated from all external access, including that of Alice, Bob, Carol, the hardware owner, and the Super Protocol team. Additionally, Super Protocol's Certification System provides verifiability, eliminating the need for trust. @@ -124,11 +124,11 @@ Both Alice and Bob can retrieve the order report ([12](/cli/guides/fine-tune#ali Keep in mind the special file structure inside the TEE: -| **Location** | **Purpose** | **Access** | -| :- | :- | :- | -| `/sp/inputs/input-0001`
`/sp/inputs/input-0002`
etc. | Possible data locations
(AI model, dataset, training scripts, etc.) | Read-only | -| `/sp/output` | Output directory for results | Write; read own files | -| `/sp/certs` | Contains the order certificate | Read-only | +| **Location** | **Purpose** | **Access** | +| :- | :- | :- | +| `/sp/inputs/input-0001`
`/sp/inputs/input-0002`
etc. | Possible data locations
(AI model, dataset, training scripts, etc.) | Read-only | +| `/sp/output` | Output directory for results | Read and write | +| `/sp/certs` | Contains the order certificate, private key, and workloadInfo | Read-only | Your solution must find the data in `/sp/inputs` and write the results to `/sp/output`. diff --git a/docs/cli/Guides/provider-tools.md b/docs/cli/Guides/provider-tools.md index b6cd6b72..3b18dd60 100644 --- a/docs/cli/Guides/provider-tools.md +++ b/docs/cli/Guides/provider-tools.md @@ -36,7 +36,7 @@ chmod +x ./provider-tools -## Set Up +## Set up ```shell ./provider-tools setup diff --git a/docs/cli/Guides/quick-guide.md b/docs/cli/Guides/quick-guide.md index 801e8523..0092c4a8 100644 --- a/docs/cli/Guides/quick-guide.md +++ b/docs/cli/Guides/quick-guide.md @@ -18,11 +18,11 @@ This quick guide provides instructions on deploying a TEE: -| **Location** | **Purpose** | **Access** | -| :- | :- | :- | -| `/sp/inputs/input-0001`
`/sp/inputs/input-0002`
etc. | Possible data locations | Read-only | -| `/sp/output` | Output directory for results | Write; read own files | -| `/sp/certs` | Contains the order certificate | Read-only | +| **Location** | **Purpose** | **Access** | +| :- | :- | :- | +| `/sp/inputs/input-0001`
`/sp/inputs/input-0002`
etc. | Possible data locations | Read-only | +| `/sp/output` | Output directory for results | Write; read own files | +| `/sp/certs` | Contains the order certificate, private key, and workloadInfo | Read-only | So, your solution must find the data in `/sp/inputs` and write the results to `/sp/output`. @@ -156,4 +156,8 @@ For example: ```shell ./spctl orders download-result 256587 -``` \ No newline at end of file +``` + +## Support + +If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new). \ No newline at end of file diff --git a/docs/cli/index.md b/docs/cli/index.md index 39e44e41..c2d013d9 100644 --- a/docs/cli/index.md +++ b/docs/cli/index.md @@ -41,7 +41,7 @@ import TabItem from '@theme/TabItem'; You can also download and install SPCTL manually from the Super Protocol [GitHub repository](https://github.com/Super-Protocol/ctl). -## Set Up +## Set up You can set up SPCTL using the `./spctl setup` command or by manually creating a configuration file. @@ -103,7 +103,6 @@ You can set up SPCTL using the `./spctl setup` command or by manually creating a } } } - ``` 3. Do not change the preconfigured values and set values to the following keys: diff --git a/docs/guides/index.md b/docs/guides/index.md index 69d2473a..dd581766 100644 --- a/docs/guides/index.md +++ b/docs/guides/index.md @@ -7,18 +7,29 @@ sidebar_position: 0 ## Marketplace GUI -|
**Guide**
|
**Description**
| +|
**Guide**
|
**Description**
| | :- | :- | | [Log In with MetaMask](/marketplace/guides/log-in) | How to log in to the [Marketplace](https://marketplace.superprotocol.com/) using MetaMask. | | [Log In with Trust Wallet](/marketplace/guides/log-in-trustwallet) | How to log in to the Marketplace using Trust Wallet. | | [Deploy Your Model](/marketplace/guides/deploy-model) | How to upload and deploy an AI model on Super Protocol. | | [Publish an Offer](/marketplace/guides/publish-offer) | How to upload an AI model and publish it on the Marketplace. | -| [Prepare a ComfyUI Workflow](/marketplace/guides/prepare-comfyui) | How to prepare a ComfyUI workflow with custom nodes. | | [Set Up Personal Storage](/marketplace/guides/storage) | How to set up your personal Storj account. | | [Troubleshooting](/marketplace/guides/troubleshooting) | Most common issues and ways to fix them. | ## CLI -|
**Guide**
|
**Description**
| +|
**Guide**
|
**Description**
| +| :- | :- | +| [Configure SPCTL](/cli) | How to set up SPCTL—a Super Protocol CLI tool. | +| [Configure Provider Tools](/cli/guides/provider-tools) | How to set up Provider Tools—a Super Protocol CLI utility for registering providers and creating offers. | +| [Quick Deployment Guide](/cli/guides/quick-guide) | Quick instructions on deploying a solution and data on Super Protocol. | +| [Confidential Collaboration](/cli/guides/fine-tune) | A scenario of confidential collaboration on Super Protocol. | + +### Solutions + +|
**Guide**
|
**Description**
| | :- | :- | -| [Quick Deployment Guide](/cli/guides/quick-guide) | Quick instructions on deploying a solution and data on Super Protocol. | \ No newline at end of file +| [Text Generation WebUI](/cli/guides/solutions/tgwui) | How to deploy a model using Text Generation WebUI. | +| [ComfyUI](/cli/guides/solutions/comfyui) | How to prepare a ComfyUI workflow with custom nodes. | +| [Unsloth](/cli/guides/solutions/unsloth) | How to fine-tune an AI model using the Super Protocol packaging of Unsloth. | +| [vLLM](/cli/guides/solutions/vllm) | How to run a model inference using the Super Protocol packaging of vLLM. | \ No newline at end of file diff --git a/docs/marketplace/Guides/deploy-model.md b/docs/marketplace/Guides/deploy-model.md index f5b6a77e..d3b0b389 100644 --- a/docs/marketplace/Guides/deploy-model.md +++ b/docs/marketplace/Guides/deploy-model.md @@ -33,7 +33,7 @@ Ensure your model meets the Super Protocol requirements: - Text-to-Video - Mask Generation -If you plan to deploy a ComfyUI workflow with custom nodes, [prepare the files](/marketplace/guides/prepare-comfyui) before uploading. For security reasons, you cannot upload custom nodes directly to a deployed ComfyUI. +If you plan to deploy a ComfyUI workflow with custom nodes, [prepare the files](/cli/guides/solutions/comfyui) before uploading. For security reasons, you cannot upload custom nodes directly to a deployed ComfyUI. 1.2. Due to [testnet limitations](/marketplace/limitations), the total size of model files should not exceed 13 GB. Support for bigger models will be available in the future.