diff --git a/docs/sd_demo.rst b/docs/sd_demo.rst
index 40f546e1..a3b53577 100644
--- a/docs/sd_demo.rst
+++ b/docs/sd_demo.rst
@@ -2,9 +2,9 @@
 Stable Diffusion Demo
 #######################
 
-Ryzen AI 1.5 provides preview demos of Stable Diffusion image-generation pipelines. The demos cover Image-to-Image and Text-to-Image using SD 1.5, SD 2.1-base, SD-Turbo, SDXL-Turbo and SD 3.0.
+Ryzen AI 1.6 provides preview demos of Stable Diffusion image-generation pipelines. The demos cover Image-to-Image and Text-to-Image using SD 1.5, SD 2.1-base, SD 2.1, SDXL-base-1.0, SD-Turbo, SDXL-Turbo, SD 3.0 and SD 3.5.
 
-The models for SD 1.5, SD 2.1-base, SD-Turbo, SDXL-Turbo are available for public download. The SD 3.0 models are only available to confirmed Stability AI licensees.
+The models for SD 1.5, SD 2.1-base, SD 2.1, SDXL-base-1.0, SD-Turbo and SDXL-Turbo are available for public download. The SD 3.0 and SD 3.5 models are only available to confirmed Stability AI licensees.
 
 NOTE: Preview features are features which are still undergoing some optimization and fine-tuning. These features are not in their final form and may change as we continue to work in order to mature them into full-fledged features.
 
@@ -19,29 +19,32 @@ Installation Steps
 
 .. code-block::
 
-   xcopy /I /E "C:\Program Files\RyzenAI\1.5.0\GenAI-SD" C:\Temp\GenAI-SD
+   xcopy /I /E "C:\Program Files\RyzenAI\1.6.0\GenAI-SD" C:\Temp\GenAI-SD
    cd C:\Temp\GenAI-SD
 
-3. Create a Conda environment for the Stable Diffusion demo packages:
+3. Activate the Conda environment and install the Stable Diffusion demo packages:
 
 .. code-block::
 
-   conda update -n base -c defaults conda
-   conda env create --file=env.yaml
+   conda activate ryzen-ai-1.6.0
+   conda env update -f rai_env_update.yaml
 
 4. Download the Stable Diffusion models:
 
-   - :download:`GenAI-SD-models-v0613-v0711.zip `
-   - :download:`GenAI-SDXL-turbo-models-v0613-v0711.zip `
+   - :download:`GenAI-SD-models-v0927.zip `
+   - :download:`GenAI-SDXL-models-v0927.zip `
 
 5. Extract the downloaded zip files and copy the models in the ``GenAI-SD\models`` folder. After installing all the models, the ``GenAI-SD\models`` folder should contain the following subfolders:
 
+   - sd15
    - sd15_controlnet
-   - sd_15
    - sd_21_base
+   - sd-2.1-v
    - sd_turbo
+   - sd_turbo_bs1
    - sdxl_turbo
-
+   - sdxl_turbo_bs1
+   - sdxl-base-1.0
 
 ******************
 Running the Demos
@@ -49,7 +52,7 @@ Running the Demos
 
 Activate the conda environment::
 
-   conda activate ryzenai-stable-diffusion
+   conda activate ryzen-ai-1.6.0
 
 Optionally, set the NPU to high performance mode to maximize performance::
 
@@ -67,7 +70,7 @@ To run the demo, navigate to the ``GenAI-SD\test`` directory and run the followi
 
 .. code-block::
 
-   python .\run_sd15_controlnet.py
+   python .\run_sd15_controlnet.py --model_id 'stable-diffusion-v1-5' --model_path ..\models\sd15_controlnet\
 
 The demo script uses a predefined prompt and ``ref\control.png`` as the control image. The output image and control image are saved in the ``generated_images`` folder.
 
@@ -79,17 +82,21 @@ The control image can be modified and custom prompts can be provided with the ``
 Text-to-Image
 =============
 
-The text-to-image generates images based on text prompts. This demo supports SD 1.5 (512x512), SD 2.1-base (768x768), SD-Turbo (512x512) and SDXL-Turbo (512x512).
+The text-to-image demo generates images based on text prompts. This demo supports SD 1.5 (512x512), SD 2.1-base (512x512), SD 2.1 (768x768), SDXL-base (1024x1024), SD-Turbo (512x512) and SDXL-Turbo (512x512).
 
 To run the demo, navigate to the ``GenAI-SD\test`` directory and run the following commands to run with each of the supported models:
 
 .. code-block::
 
-   python run_sd.py --model_id 'stable-diffusion-v1-5/stable-diffusion-v1-5' --model_path ..\models\sd_15
+   python run_sd.py --model_id 'stable-diffusion-v1-5/stable-diffusion-v1-5' --model_path ..\models\sd15\
    python run_sd.py --model_id 'stabilityai/stable-diffusion-2-1-base' --model_path ..\models\sd_21_base
+   python run_sd.py --model_id 'stabilityai/stable-diffusion-2-1' --model_path ..\models\sd-2.1-v\
    python run_sd.py --model_id 'stabilityai/sd-turbo' --model_path ..\models\sd_turbo
+   python run_sd.py --model_id 'stabilityai/sd-turbo' --model_path ..\models\sd_turbo_bs1 --num_images_per_prompt 1
    python run_sd_xl.py --model_id 'stabilityai/sdxl-turbo' --model_path ..\models\sdxl_turbo
-
+   python run_sd_xl.py --model_id 'stabilityai/sdxl-turbo' --model_path ..\models\sdxl_turbo_bs1 --num_images_per_prompt 1
+   python run_sd_xl.py --model_id 'stabilityai/stable-diffusion-xl-base-1.0' --model_path ..\models\sdxl-base-1.0\
+
 
 The demo script uses a predefined prompt for each of the models. The output images are saved in the ``generated_images`` folder.
 
@@ -115,7 +122,7 @@ Custom prompts can be provided with the ``--prompt`` option. For instance::
 
 .. .. code-block::
 
-..    conda activate ryzen-ai-1.5.0
+..    conda activate ryzen-ai-1.6.0
 
 .. 3. Copy the GenAI-SD folder from the RyzenAI installation tree to your working area, and then go to the copied folder. For instance:
 
@@ -129,6 +136,4 @@ Custom prompts can be provided with the ``--prompt`` option. For instance::
 .. .. code-block::
 
 ..    conda env update -f rai_env_update.yaml
-..    pip install "%RYZEN_AI_INSTALLATION_PATH%\atom-1.0-cp310-cp310-win_amd64.whl"
-..    pip install opencv-python==4.11.0.86
-..    pip install accelerate==0.32.0
+..    pip install "%RYZEN_AI_INSTALLATION_PATH%\atom-1.0-cp312-cp312-win_amd64.whl"
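
Putting the updated steps together, an end-to-end Text-to-Image session looks roughly like the sketch below. It is assembled only from commands that appear in the diff above plus the ``--prompt`` option the page already documents; the prompt string itself is purely illustrative.

.. code-block::

   :: One-time setup: copy the demo folder and update the Ryzen AI 1.6.0 environment
   xcopy /I /E "C:\Program Files\RyzenAI\1.6.0\GenAI-SD" C:\Temp\GenAI-SD
   cd C:\Temp\GenAI-SD
   conda activate ryzen-ai-1.6.0
   conda env update -f rai_env_update.yaml

   :: After extracting the model zips into GenAI-SD\models, run a Text-to-Image demo
   cd C:\Temp\GenAI-SD\test
   python run_sd.py --model_id 'stabilityai/sd-turbo' --model_path ..\models\sd_turbo --prompt "a watercolor painting of a lighthouse at sunset"

As noted in the section above, the generated images are saved in the ``generated_images`` folder.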
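
For the Image-to-Image path, a comparable run built on the updated ControlNet command might look like the following. The ``--model_id`` and ``--model_path`` values come from the diff; treating ``--prompt`` as accepted by ``run_sd15_controlnet.py`` is an assumption carried over from the Text-to-Image scripts (the section only says custom prompts can be provided, and the exact option name is not shown here), and the prompt text is illustrative.

.. code-block::

   :: Image-to-Image with ControlNet; ref\control.png is the default control image
   cd C:\Temp\GenAI-SD\test
   :: --prompt is assumed to behave as it does for the Text-to-Image demo scripts
   python .\run_sd15_controlnet.py --model_id 'stable-diffusion-v1-5' --model_path ..\models\sd15_controlnet\ --prompt "an isometric illustration of a cozy reading nook"

The output image and the control image are saved in the ``generated_images`` folder.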