
Windows RTX 5090 local generation crashes with access violation (0xC0000005) during model component load #50

@Ishaanor

Description

Summary

Local generation on Windows crashes the backend process with an access violation during model component loading on an RTX 5090.

The crash is not a handled Python exception. The backend exits with Windows code 3221225477 (0xC0000005).

Environment

  • OS: Windows 11
  • GPU: NVIDIA GeForce RTX 5090
  • NVIDIA driver: 591.74
  • nvidia-smi CUDA version: 13.1
  • App: LTX Desktop 1.0.1
  • Bundled Python: 3.13.12
  • Torch: 2.10.0+cu128
  • cuDNN: 91002
  • ltx-core: 1.0.0
  • ltx-pipelines: 1.0.0
  • sageattention: 1.0.6
  • triton-windows: 3.6.0.post25
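
The versions above were collected by hand; a small probe script can gather them in one pass from the bundled runtime. This is a minimal sketch that degrades gracefully if torch is missing (the key names are my own, not part of any LTX tooling):

```python
# Probe the runtime components listed above; reports "not installed"
# instead of crashing if torch is absent from the interpreter.
import importlib
import platform

def collect_versions():
    """Return a dict of runtime component versions."""
    info = {"python": platform.python_version()}
    try:
        torch = importlib.import_module("torch")
        info["torch"] = torch.__version__
        info["cuda"] = torch.version.cuda
        info["cudnn"] = torch.backends.cudnn.version()
        if torch.cuda.is_available():
            info["gpu"] = torch.cuda.get_device_name(0)
    except ImportError:
        info["torch"] = "not installed"
    return info

if __name__ == "__main__":
    for key, value in collect_versions().items():
        print(f"{key}: {value}")
```

Running this with the app's bundled Python confirms the report above was taken from the same interpreter the backend uses.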

What happens

Starting a local text-to-video generation causes the backend to crash and restart.

The failure happens even after disabling:

  • SageAttention
  • FP8 quantization

I also reproduced the failure outside the app process using the bundled Python/runtime, so this does not appear to be Electron/UI-specific.

Repro steps

  1. Launch LTX Desktop on Windows with the local backend.
  2. Use local generation with the downloaded fast model.
  3. Start a text-to-video generation, for example:
    • resolution: 540p
    • fps: 24
    • frames: 121
  4. Backend crashes mid-generation.

Observed logs

Session log excerpt:

[t2v] Generation started (model=fast, 960x544, 121 frames, 24 fps)
[t2v] Pipeline load: 0.00s
[t2v] Text encoding (local): 0.00s
[fast-native] Loading text encoder
...
Python backend exited with code 3221225477

In another run it progressed slightly further:

[fast-native] Loading text encoder
[fast-native] Loading video encoder and transformer
Python backend exited with code 3221225477

After forcing safer paths, startup confirmed:

SageAttention: disabled
FP8 quantization: disabled

but the crash still occurred.

Standalone repro outside the app

Using the bundled Python/runtime, I can reproduce the failure while loading model components directly.

This survives:

  • pipeline creation
  • text_encoder() load

and then crashes while loading other core model components, such as:

  • video_encoder()
  • or, in later iterations, even earlier during the local text-encoder load, depending on the patching path
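
Because the process dies with 0xC0000005 before any Python traceback is produced, the standalone repro can be wrapped in a harness that enables faulthandler, which on Windows dumps the active Python stack when an access violation occurs. A minimal sketch; the loader names below are illustrative placeholders for the real ltx-pipelines loading calls, not actual API:

```python
# Harness to localize the native crash: faulthandler dumps the Python
# frames active at the moment of an access violation, and the printed
# markers identify the last component that started loading.
import faulthandler
import sys

faulthandler.enable(file=sys.stderr, all_threads=True)

def load_components(loaders):
    """Run each (name, callable) loader in order with log markers."""
    for name, loader in loaders:
        print(f"[repro] loading {name}", flush=True)
        loader()
        print(f"[repro] loaded {name}", flush=True)

if __name__ == "__main__":
    # Replace the no-op lambdas with the real pipeline, text-encoder,
    # and video-encoder loads from the bundled runtime.
    load_components([
        ("pipeline", lambda: None),
        ("text_encoder", lambda: None),
        ("video_encoder", lambda: None),
    ])
```

With this in place, the last "loading" marker without a matching "loaded" line pins down which component load triggers the crash.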
Notes

This looks like a native access violation in runtime or kernel code rather than a normal Python exception.

Given the stack, this may be related to:

  • Windows + Blackwell/RTX 5090 support maturity
  • PyTorch/cu128/driver interaction
  • LTX local runtime compatibility on this platform
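
The PyTorch/cu128 theory is partially testable: the installed wheel only runs native kernels on Blackwell (compute capability 12.0) if sm_120 is in its compiled architecture list. A sketch of that check, again with a defensive import (the function name is mine):

```python
# Check whether the installed torch build was compiled for Blackwell
# (sm_120). A build without it would rely on PTX JIT or fail in native
# code, which is one plausible source of an access violation on a 5090.
import importlib

def blackwell_support():
    """Return (arch_list, has_sm_120), or (None, None) if torch is absent."""
    try:
        torch = importlib.import_module("torch")
    except ImportError:
        return None, None
    archs = torch.cuda.get_arch_list()
    return archs, any("sm_120" in a for a in archs)

if __name__ == "__main__":
    archs, ok = blackwell_support()
    print(f"compiled arches: {archs}")
    print(f"sm_120 (Blackwell) supported: {ok}")
```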

Requested help

Please advise:

  • Whether RTX 5090 on Windows is currently supported for local LTX Desktop generation
  • Whether a different recommended runtime stack exists for Blackwell
  • Whether there is a known fix/workaround
  • Whether Linux/WSL2 is currently the expected path for local generation on RTX 5090
