Conversation

@rolfmorel (Contributor) commented Dec 9, 2025

Also loosens the dependency upon `uv` for running tests -- all that is required is that `lit` runs in a suitable environment (e.g. through `uv run lit .`, or by first entering a venv and then running `lit .`).

Also splits `lighthouse.utils` in two so that the examples/tests that don't actually depend on torch don't accidentally end up with that dependency (there was already one such instance).

@rolfmorel (Author) commented Dec 9, 2025

E.g. if you didn't install the torch dependencies (say you ran `uv sync` without the `--extra ingress-torch-cpu` flag), then we get:

$ lit .
-- Testing: 8 tests, 8 workers --
UNSUPPORTED: Lighthouse test suite :: examples/ingress/torch/mlp_from_model.py (1 of 8)
UNSUPPORTED: Lighthouse test suite :: examples/ingress/torch/mlp_from_file.py (2 of 8)
UNSUPPORTED: Lighthouse test suite :: examples/xegpu_matmul/matmul.py (3 of 8)
UNSUPPORTED: Lighthouse test suite :: examples/mlir/compile_and_run.py (4 of 8)
UNSUPPORTED: Lighthouse test suite :: examples/ingress/convert-kernel-bench-to-mlir.py (5 of 8)
UNSUPPORTED: Lighthouse test suite :: examples/llama/test_llama3.py (6 of 8)
PASS: Lighthouse test suite :: examples/schedule/transform_a_payload_according_to_a_schedule.py (7 of 8)
PASS: Lighthouse test suite :: examples/ingress/mlir_gen/generate-linalg-3layer-mlp.sh (8 of 8)

And if you then install torch but haven't fetched the KernelBench `third_party` repo:

$ lit .
-- Testing: 8 tests, 8 workers --
UNSUPPORTED: Lighthouse test suite :: examples/ingress/convert-kernel-bench-to-mlir.py (1 of 8)
PASS: Lighthouse test suite :: examples/schedule/transform_a_payload_according_to_a_schedule.py (2 of 8)
PASS: Lighthouse test suite :: examples/ingress/mlir_gen/generate-linalg-3layer-mlp.sh (3 of 8)
PASS: Lighthouse test suite :: examples/mlir/compile_and_run.py (4 of 8)
PASS: Lighthouse test suite :: examples/llama/test_llama3.py (5 of 8)
PASS: Lighthouse test suite :: examples/xegpu_matmul/matmul.py (6 of 8)
PASS: Lighthouse test suite :: examples/ingress/torch/mlp_from_file.py (7 of 8)
PASS: Lighthouse test suite :: examples/ingress/torch/mlp_from_model.py (8 of 8)

Without this change the above UNSUPPORTED tests would show FAILED instead.
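
For context, lit's `REQUIRES:` directive is what turns a missing optional dependency into UNSUPPORTED rather than FAILED; the corresponding feature is typically advertised from the suite's `lit.cfg`. A minimal sketch of such feature detection (illustrative, not necessarily this PR's exact config):

```python
# lit.cfg sketch (illustrative): advertise a "torch" feature so tests
# marked `# REQUIRES: torch` are reported UNSUPPORTED, not FAILED, when
# torch is absent. `config` is the object lit injects into this file.
import importlib.util

if importlib.util.find_spec("torch") is not None:
    config.available_features.add("torch")
```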

@tkarna (Contributor) left a comment

LGTM. At the moment we are only testing the "everything is installed" case in CI, right?

@@ -1,4 +1,5 @@
-# RUN: %PYTHON %s --dump-kernel=xegpu-wg | FileCheck %s
+# RUN: python %s --dump-kernel=xegpu-wg | FileCheck %s
+# REQUIRES: torch
A contributor commented:

At the moment torch is indeed required because `lighthouse/utils/runtime_args.py` requires it. This test does not actually use torch, though.

@rolfmorel (Author):

Have now split out the torch-dependent code into its own module. This file no longer unnecessarily `REQUIRES` torch.

@rolfmorel (Author):

@adam-smnk could you have a look at the split in `lighthouse.utils`? I don't mind doing it differently, e.g. shallower namespacing.

@adam-smnk (Contributor) commented Dec 9, 2025

Split itself looks good 👍

Naming-wise, I don't think these functions fully fit the runtime category. I'd stick with the shallower `utils` or `tools`.

Also, having our own `torch.py` module shadowing a large library feels a bit wrong. How about `torch_utils` or another suffix, like `torch_ffi`? Maybe an `ffi` dir if you prefer more structure. Just to help with differentiation and to avoid the need for extra aliasing when importing.

@rolfmorel (Author) commented Dec 15, 2025

Fixed it in e13be22.

The common way of using it should now look like `lh_utils.torch.dtype_from_mlir_type`, `lh_utils.memref.to_packed_args`, etc. (Which took some time to get right ... though now we get to avoid a `torch_utils` module in `lighthouse.utils`/`lighthouse.tools` 🥳.)
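
One way such a layout can work without the submodule shadowing the real library (a hedged sketch under assumed names -- the lazy-import mechanism and helper signatures are illustrative, not the PR's actual code): under Python 3's absolute-import semantics, `import torch` inside `lighthouse/utils/torch.py` resolves to the real PyTorch, and a PEP 562 module `__getattr__` keeps `import lighthouse.utils` itself torch-free.

```python
# lighthouse/utils/__init__.py -- illustrative sketch, not the PR's code.
import importlib

from . import memref  # torch-free helpers: always safe to import eagerly


def __getattr__(name):
    # Lazily import the torch-dependent submodule (PEP 562) so that
    # `import lighthouse.utils` alone never pulls in torch.
    if name == "torch":
        return importlib.import_module("." + name, __name__)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")


# lighthouse/utils/torch.py -- absolute imports mean `import torch` here
# resolves to the real PyTorch, not this module itself:
#
#     import torch
#
#     def dtype_from_mlir_type(mlir_type):  # signature is an assumption
#         ...
#
# Usage:
#     import lighthouse.utils as lh_utils
#     dt = lh_utils.torch.dtype_from_mlir_type(ty)  # triggers the lazy import
```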

@rengolin (Member) commented Dec 9, 2025

why is the kernelbench still unsupported after installing torch?

@rolfmorel (Author):

> why is the kernelbench still unsupported after installing torch?

The example speaks for itself 😉

$ uv run examples/ingress/convert-kernel-bench-to-mlir.py
ERROR: KernelBench repo not found.
NOTE: Pull in dependency with: git submodule update --init third_party/KernelBench
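
For illustration, the guard producing that error could look roughly like the following sketch (the paths and the exact check are assumptions, not necessarily the script's actual code):

```python
# Hypothetical early-exit check for the KernelBench submodule: an
# unfetched git submodule typically exists as an empty directory.
import sys
from pathlib import Path

repo_root = Path(__file__).resolve().parents[2]  # assumes examples/ingress/<script>
kernel_bench = repo_root / "third_party" / "KernelBench"

if not kernel_bench.is_dir() or not any(kernel_bench.iterdir()):
    print("ERROR: KernelBench repo not found.")
    print("NOTE: Pull in dependency with: "
          "git submodule update --init third_party/KernelBench")
    sys.exit(1)
```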

@rolfmorel force-pushed the users/rolfmorel/lit-cleanup branch from e5dd205 to efff1b1 on December 9, 2025 at 18:53
@rolfmorel (Author):

> At the moment we are only testing the "everything is installed" case in CI, right?

Indeed. If people want this UNSUPPORTED mechanism tested in CI as well, do let me know.

@rolfmorel force-pushed the users/rolfmorel/lit-cleanup branch 2 times, most recently from b58358f to 84fd5fb on December 15, 2025 at 21:33
Also loosens dependency upon `uv` for running tests -- all that is
required is that `lit` is running in a suitable environment (e.g.
through `uv run lit .` or by first entering a venv and then `lit .`).
Means at least one more example can run without torch being installed.
@rolfmorel force-pushed the users/rolfmorel/lit-cleanup branch from 84fd5fb to 64bf0ab on December 15, 2025 at 21:41
@rolfmorel force-pushed the users/rolfmorel/lit-cleanup branch from 64bf0ab to e13be22 on December 15, 2025 at 21:53