@MrEdwards007 MrEdwards007 commented Jan 1, 2026

This commit introduces the core optimized scripts and documentation for the TurboDiffusion framework:

  • i2v.py: Optimized Image-to-Video inference featuring a Tiered Failover System and tiled encoding/decoding for 720p stability.
  • t2v.py: Text-to-Video inference with intelligent hardware detection (TF32/BF16) and a 3-tier memory recovery system.
  • cache_t5.py: A memory utility to pre-compute T5 embeddings, allowing 14B models to run without loading the 11GB text encoder into VRAM.
  • turbo_diffusion_t5_cache_optimize_v6.py (TurboDiffusionStudio): A unified Gradio-based web interface for both Text-to-Video and Image-to-Video generation.
  • TurboDiffusion_Studio.md: Comprehensive documentation for the unified Gradio Web UI, detailing VRAM monitoring, frame sanitization, and auto-caching features.

All scripts include optimizations co-developed by Waverly Edwards and Google Gemini, with support on cache_t5.py from John D. Pope, to maximize performance on consumer GPUs.

I was able to run this on an RTX 5090, maxing out at 117 frames in 136 seconds (I2V) and 160 seconds (T2V) at a 16:9 aspect ratio using sagesla attention. Technically the T2V can be done in 126 seconds using sla attention, but I can't explain why sagesla otherwise seems to consistently outperform sla, except in this instance.

To reproduce without the Gradio interface, use the committed files together with the attached metadata files, which can be run as scripts.

t2v_20260101_085350_metadata.txt
i2v_20260101_063848_metadata.txt
t2v_20251226_233021
(This is the original image used for I2V)

There are three anomalies:

  1. Using the same seed does not always produce the same image. With a consistent seed, a frame rendered in a 33-frame request may differ from the same frame in a 113-frame request, likely because a different frame count changes the shape of the latent being denoised.
  2. The I2V workflow has a subtle artifact from tiling the frames and stitching them back together that I have NOT been able to eliminate: a tiny discontinuity/offset between tiles. I tried to perfectly align and smooth them out, but it is still there.
  3. When the video begins, it flashes: the first frames start out brighter and then settle to the normal exposure.
(This is a frame of the video generated from the original image. Note the area inside the red rectangle: looking closely, there is a small gap and a shift of a few pixels that I've been unable to eliminate.)

i2v.py: This is an optimized inference script for generating videos from a single input image and a text prompt. It features a Tiered Failover System to prevent "Out of Memory" (OOM) errors on consumer GPUs by automatically switching between high- and low-noise models and utilizing tiled encoding/decoding. It is designed to maximize performance on hardware like the RTX 5090 while maintaining stability for high-resolution (720p) outputs.
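The seam artifact in anomaly 2 is the classic failure mode of tiled decoding: a hard cut between adjacent tiles leaves a visible gap or pixel shift. The standard remedy is to overlap the tiles and crossfade (feather) across the overlap band. The sketch below illustrates the idea on plain 2-D arrays; the actual stitching logic in i2v.py operates on decoded latents and may differ.

```python
import numpy as np

def blend_tiles(left, right, overlap):
    """Feather two horizontally adjacent tiles across `overlap` columns.

    A linear crossfade in the overlap band is the usual way to hide tile
    seams; a hard concatenation is what produces a visible gap/shift.
    Shapes are (H, W) grayscale for simplicity -- an illustrative sketch,
    not the exact code committed in i2v.py.
    """
    ramp = np.linspace(0.0, 1.0, overlap)  # weight goes 0 -> 1 across the band
    band = left[:, -overlap:] * (1.0 - ramp) + right[:, :overlap] * ramp
    return np.concatenate([left[:, :-overlap], band, right[:, overlap:]], axis=1)
```

Even with feathering, a seam can survive if the two tiles were decoded from latents that disagree in the overlap region, which may be why aligning and smoothing alone did not remove the offset.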
t2v.py: This script handles video generation directly from text prompts. It includes Intelligent Hardware Detection to automatically select the best precision (BF16/FP16) for your GPU and a three-tier memory recovery system (GPU → Checkpointing → CPU Offloading). It also enforces safety checks, such as disabling torch.compile for quantized models to prevent system crashes during the diffusion process.
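The three-tier recovery ladder can be sketched as a retry loop that applies one progressively cheaper configuration before each attempt. This is a minimal pure-Python illustration of the pattern; the names are hypothetical, and in the real script the caught exception would be torch.cuda.OutOfMemoryError and the recovery steps would enable gradient checkpointing and CPU offload on the pipeline.

```python
class OutOfMemory(RuntimeError):
    """Stand-in for torch.cuda.OutOfMemoryError in this sketch."""

def generate_with_failover(generate, recovery_steps):
    """Walk a recovery ladder until one attempt succeeds.

    `generate` runs one inference attempt; `recovery_steps` are callables
    applied before each retry (e.g. enable gradient checkpointing, then
    enable CPU offload). Illustrative only -- not the actual functions
    in t2v.py.
    """
    preparations = [lambda: None] + list(recovery_steps)
    last_error = None
    for prepare in preparations:
        prepare()  # apply the next tier's memory-saving measure
        try:
            return generate()
        except OutOfMemory as err:
            last_error = err  # fall through to the next, cheaper tier
    raise RuntimeError("all recovery tiers exhausted") from last_error
```

The ordering matters: each tier trades speed for memory, so the fastest configuration is always tried first and offloading is the last resort.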
cache_t5.py: A utility script designed by John D. Pope to pre-compute and save text embeddings to a file. By "caching" these embeddings, you can run the main inference scripts without loading the heavy 11GB T5 Text Encoder into VRAM. This is essential for running the large 14B models on GPUs with limited memory, effectively "skipping" the most memory-intensive part of the initialization.
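The caching pattern itself is simple: key each prompt by a hash, encode it once, and reuse the saved result on every later run. The sketch below shows the shape of that logic with a pluggable `encode_fn` standing in for the expensive T5 forward pass; the cache directory, file format, and function names are assumptions, not the actual contents of cache_t5.py (which would save torch tensors).

```python
import hashlib
import os
import pickle

CACHE_DIR = "t5_cache"  # assumed location; cache_t5.py may use another path

def cache_path(prompt: str) -> str:
    """Key each prompt by a short hash so identical prompts share one file."""
    digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:16]
    return os.path.join(CACHE_DIR, f"{digest}.pkl")

def get_embedding(prompt: str, encode_fn):
    """Return cached embeddings if present, else encode once and save.

    `encode_fn` stands in for the T5 forward pass. Once a prompt is
    cached, inference can proceed without the 11GB encoder in VRAM.
    """
    path = cache_path(prompt)
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    os.makedirs(CACHE_DIR, exist_ok=True)
    embedding = encode_fn(prompt)  # the only expensive step, paid once
    with open(path, "wb") as f:
        pickle.dump(embedding, f)
    return embedding
```

One consequence of this design is that the cache is prompt-exact: changing even one character of the prompt misses the cache and would require the encoder again.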
turbo_diffusion_t5_cache_optimize_v6.py: A Gradio-based dashboard that centralizes Text-to-Video and Image-to-Video workflows. It automates environment setup, provides real-time VRAM monitoring, and enforces the 4n+1 frame rule for VAE stability. It also supports automatic T5 caching and saves reproduction metadata for every video generated.
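The 4n+1 rule means only frame counts of the form 4n+1 (5, 9, ..., 113, 117, 121) decode cleanly through the VAE, which groups frames temporally in fours after one leading frame. A sanitizer that snaps any requested count to the nearest valid value might look like this (an illustration of the rule, not the exact code in the Studio script):

```python
def sanitize_frame_count(requested: int) -> int:
    """Snap a requested frame count to the nearest valid 4n+1 value.

    The VAE compresses frames in temporal groups of 4 plus one leading
    frame, so only counts of the form 4n+1 are stable. Sketch only --
    TurboDiffusionStudio's own enforcement may round differently.
    """
    if requested < 5:
        return 5  # smallest useful 4n+1 count
    n = round((requested - 1) / 4)
    return 4 * n + 1
```

This also explains the frame counts quoted above: 33 = 4·8+1, 113 = 4·28+1, and 117 = 4·29+1 are all valid under the rule.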
TurboDiffusion_Studio.md: Readme for TurboDiffusion Studio, a unified, high-performance Gradio interface for Wan2.1 and Wan2.2 video generation that automates memory management through real-time VRAM monitoring and integrated T5 embedding caching.