
Conversation


Copilot AI commented Jan 30, 2026

Analyzed and ported 3 upstream improvements from fspecii's fork. All CUDA-specific changes are isolated to unreachable code paths on MPS systems.

Changes

VRAM Detection (CUDA only)

  • Use free VRAM instead of total for auto-config decisions
  • Accounts for memory already consumed by desktop, browser, etc.
  • Gives 8 GB GPU users more accurate threshold selection
```python
# Before: auto-config decisions use total VRAM
vram_gb = props.total_memory / (1024 ** 3)

# After: decisions use free VRAM
free_vram_bytes = torch.cuda.mem_get_info()[0]  # returns (free, total)
free_vram_gb = free_vram_bytes / (1024 ** 3)
```
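To illustrate why free VRAM matters for the decision, here is a hypothetical sketch of threshold selection driven by the free figure. The tier names and cutoffs are illustrative only, not the repo's actual thresholds:

```python
def select_profile(free_vram_gb: float) -> str:
    """Pick an auto-config tier from *free* VRAM rather than total.

    Tier names and cutoffs here are placeholders, not the repo's
    real values.
    """
    if free_vram_gb >= 10:
        return "high"    # full weights, large KV cache
    if free_vram_gb >= 6:
        return "medium"  # reduced KV cache or quantized weights
    return "low"         # aggressive offloading

# An "8 GB" card with ~1.5 GB taken by the desktop and browser
# reports roughly 6.5 GB free and lands in the medium tier:
print(select_profile(6.5))  # medium
```

Selecting on total VRAM would have put the same card in the high tier and risked out-of-memory errors mid-generation.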

VRAM Diagnostics (CUDA only)

  • Log memory usage after KV cache setup for debugging
  • Double-gated: `torch.cuda.is_available()` and `pipeline.mula_device.type == 'cuda'`
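The double gate can be sketched as pure functions so the logic is testable without a GPU; in the PR the two operands would be `torch.cuda.is_available()` and `pipeline.mula_device.type`, and the byte counts would come from `torch.cuda.memory_allocated()` / `torch.cuda.memory_reserved()`:

```python
GIB = 1024 ** 3

def should_log_vram(cuda_available: bool, pipeline_device_type: str) -> bool:
    """Both conditions must hold: host has CUDA AND the pipeline
    itself was placed on a CUDA device."""
    return cuda_available and pipeline_device_type == "cuda"

def format_vram_report(allocated_bytes: int, reserved_bytes: int) -> str:
    """Render the diagnostic line from raw byte counts."""
    return (f"VRAM after KV cache: {allocated_bytes / GIB:.2f} GiB allocated, "
            f"{reserved_bytes / GIB:.2f} GiB reserved")

# CUDA host, but the pipeline was moved to CPU: no log line emitted.
print(should_log_vram(True, "cpu"))  # False
```

The second condition matters because `torch.cuda.is_available()` alone would still be true on a CUDA host even when the pipeline runs on CPU.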

Lyrics Safety Fix (all platforms)

  • Prevent None values from reaching heartlib: `request.lyrics or ""`
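The fix relies on Python's `or` short-circuit: any falsy value (including `None`) is replaced by the empty string. A minimal sketch, with the request field pulled out as a parameter for illustration:

```python
from typing import Optional

def sanitize_lyrics(lyrics: Optional[str]) -> str:
    """Coerce a missing lyrics field to "" before it reaches heartlib.

    `x or ""` maps both None and "" to "", so downstream code
    always receives a str.
    """
    return lyrics or ""

print(repr(sanitize_lyrics(None)))     # ''
print(repr(sanitize_lyrics("la la")))  # 'la la'
```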

MPS Safety

The MPS code path returns early (line 178) before any CUDA code executes (line 188+), so the CUDA changes are unreachable on Apple Silicon systems, which lack CUDA.
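The isolation pattern above can be sketched as an early-return dispatch. Here `device_type` stands in for `pipeline.mula_device.type`, and the returned labels are placeholders rather than the repo's actual configuration:

```python
def configure_kv_cache(device_type: str) -> str:
    """Early-return dispatch that keeps CUDA-only code isolated."""
    if device_type == "mps":
        # Apple Silicon exits here (the PR's "line 178"), so the
        # CUDA-only block below ("line 188+") never executes on MPS.
        return "mps-config"
    if device_type == "cuda":
        # Ported upstream CUDA changes would live past this point.
        return "cuda-config"
    return "cpu-config"

print(configure_kv_cache("mps"))  # mps-config
```

Because the MPS branch returns before the CUDA branch is even evaluated, new CUDA-side changes cannot regress Apple Silicon behavior.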

Original prompt

This section details the original issue you should resolve

<issue_title>Upstream Changes</issue_title>
<issue_description>The original CUDA/Win version has some patches. Let's analyze and determine if we need to port them into our fork:

main...fspecii:HeartMuLa-Studio:main

Let's make sure the changes do NOT cause issues or break our code, since we applied so many changes for MPS support!
Do not take risks, we'd rather not merge the changes and keep our app stable, possibly reimplementing them for our architecture.</issue_description>

Comments on the Issue (you are @copilot in this section)



Copilot AI and others added 2 commits January 30, 2026 08:58
Co-authored-by: lmangani <1423657+lmangani@users.noreply.github.com>
Co-authored-by: lmangani <1423657+lmangani@users.noreply.github.com>
Copilot AI changed the title [WIP] Analyze and assess upstream changes for stability Port upstream VRAM detection and lyrics safety fixes with MPS isolation Jan 30, 2026
Copilot AI requested a review from lmangani January 30, 2026 09:05
