
Request: Add torch.compile Support to Muon Optimizer #40

@wertyuilife2

Description


Hello, thank you for your great work on the Muon optimizer :)

I’m wondering if it would be possible to add support for torch.compile in the current Muon implementation, to enable compiler-based training acceleration. This feature would be highly beneficial for research workflows that rely on Muon for optimization.

In PyTorch, the Adam optimizer can be made compatible with torch.compile by setting the capturable=True flag, as shown below:

torch.optim.Adam(params_list, lr=3e-4, capturable=True)

As far as I can tell, the current Muon implementation does not yet support this flag.
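For context, here is a rough sketch of the usage pattern I have in mind. The toy model and the "compile only the optimizer step" structure are just for illustration; the only piece specific to this request is the capturable=True flag, and the sketch assumes a CUDA device is available:

```python
import torch

# Hypothetical toy setup, just to illustrate the pattern (not Muon code).
# capturable=True keeps the step counter and hyperparameters as device tensors,
# which is what allows the optimizer step to be compiled without falling back
# to eager mode for those scalar updates.
device = "cuda"
model = torch.nn.Linear(32, 32, device=device)
opt = torch.optim.Adam(model.parameters(), lr=3e-4, capturable=True)

# Compile only the optimizer step.
@torch.compile(fullgraph=False)
def opt_step():
    opt.step()

x = torch.randn(8, 32, device=device)
loss = model(x).pow(2).mean()
loss.backward()
opt_step()
opt.zero_grad()
```

Having an equivalent pattern work with Muon (ideally with the same capturable-style flag) is what this request is about.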

If implementing this for the distributed MuonWithAuxAdam is too complex, having support at least in the single-device version (SingleDeviceMuonWithAuxAdam) would already be very helpful.

Thanks again for the excellent library, and I appreciate your consideration!
