feat(codegen): implement PTOCodegen for PTO-ISA MLIR generation #113
Conversation
Summary of Changes

Hello @lyfne123, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly upgrades the PyPTO compiler's backend by replacing the custom PTO assembly generator with a new implementation that produces the PTO-ISA MLIR dialect. This change modernizes the intermediate representation, enabling more robust type handling, structured code generation, and better integration with MLIR's extensive optimization framework. The refactoring improves the compiler's foundation for future enhancements and broader hardware support.
Code Review
This PR is a major and valuable refactoring of the codegen component, moving from a custom PTO assembly format to the more standard and extensible MLIR PTO-ISA dialect. The implementation is well-structured, using a visitor pattern for IR traversal and separating concerns like allocation collection and code generation. The new documentation and tests are comprehensive and align well with the new functionality.
I've identified a few areas for improvement, primarily concerning hardcoded attributes in the MLIR generation, which limit its flexibility, and a critical inconsistency between the Python frontend and C++ backend for the block.add operation. Addressing these will make the new codegen more robust and more generally applicable.
```cpp
// block.add(a, b, c) -> a + b + c
auto a = As<ir::Var>(op->args_[0]);
auto b = As<ir::Var>(op->args_[1]);
auto c = As<ir::Var>(op->args_[2]);
```
This line accesses op->args_[2], assuming block.add is always a 3-operand operation. However, the Python frontend in pypto/language/op/block_ops.py defines add as a 2-operand operation. This discrepancy will lead to an out-of-bounds access and a crash at runtime. The implementation should be made robust to handle the arity defined in the Python frontend, or the frontend should be updated to match the backend's expectation.
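A minimal sketch of one way to guard the backend, assuming the surrounding codegen translation unit's includes; the 2-operand fallback and the elided emission bodies are illustrative assumptions, not the project's current API:

```cpp
// Hedged sketch: check the operand count before indexing op->args_.
// The 2-operand branch and the diagnostics are assumptions for illustration.
void PTOMLIRCodegen::EmitBlockAdd(const ir::Call* op) {
  const auto& args = op->args_;
  if (args.size() == 3) {
    auto a = As<ir::Var>(args[0]);
    auto b = As<ir::Var>(args[1]);
    auto c = As<ir::Var>(args[2]);
    // ... emit pto.taddc as in the current implementation ...
  } else if (args.size() == 2) {
    auto a = As<ir::Var>(args[0]);
    auto b = As<ir::Var>(args[1]);
    // ... emit a plain 2-operand tile add, matching the Python frontend ...
  } else {
    // ... report a clear diagnostic instead of crashing on an out-of-bounds access ...
  }
}
```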
| stream_ << " : !pto.tile_buf<loc=ub, dtype=f32, rows=32, cols=32, v_row=32, v_col=32, "; | ||
| stream_ << "blayout=row_major, slayout=none_box, fractal=512, pad=0>, " << rhs_buf; | ||
| stream_ << " : !pto.tile_buf<loc=ub, dtype=f32, rows=32, cols=32, v_row=32, v_col=32, "; | ||
| stream_ << "blayout=row_major, slayout=none_box, fractal=512, pad=0>) outs("; | ||
| stream_ << result_buf; | ||
| stream_ << " : !pto.tile_buf<loc=ub, dtype=f32, rows=32, cols=32, v_row=32, v_col=32, "; | ||
| stream_ << "blayout=row_major, slayout=none_box, fractal=512, pad=0>)\n"; |
The attributes for !pto.tile_buf (like dtype, rows, cols) are hardcoded here and in other Emit* functions. While the PR description mentions this as a limitation, it severely restricts the codegen's utility. The implementation should be updated to infer these attributes from the TileType of the IR variables involved in the operation. This would make the codegen more general and robust.
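One possible direction, sketched below under stated assumptions: build the !pto.tile_buf string from the operand's TileType. The type_ accessor, the TileType field names, and the DTypeToMLIR/MemSpaceToMLIR helpers are assumptions for illustration (the PR description only mentions type-aware DataType and MemorySpace conversions), and <sstream>/<string> are assumed to be included in the codegen translation unit:

```cpp
// Hedged sketch: derive the !pto.tile_buf attributes from the operand's
// TileType instead of hardcoding them. Accessor and helper names are assumptions.
std::string PTOMLIRCodegen::TileBufTypeOf(const ir::Var& var) {
  const auto tile = As<ir::TileType>(var->type_);
  std::ostringstream os;
  os << "!pto.tile_buf<loc=" << MemSpaceToMLIR(tile->mem_space)
     << ", dtype=" << DTypeToMLIR(tile->dtype)
     << ", rows=" << tile->rows << ", cols=" << tile->cols
     << ", v_row=" << tile->rows << ", v_col=" << tile->cols
     << ", blayout=row_major, slayout=none_box, fractal=512, pad=0>";
  return os.str();
}

// The Emit* functions could then write, for example:
//   stream_ << lhs_buf << " : " << TileBufTypeOf(lhs) << ", "
//           << rhs_buf << " : " << TileBufTypeOf(rhs) << ") outs("
//           << result_buf << " : " << TileBufTypeOf(result) << ")\n";
```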
| Block operation | Generated PTO-ISA ops |
| --- | --- |
| `block.load(tensor, row, col, h, w)` | `pto.subview` + `pto.tload` |
| `block.store(tile, row, col, h, w, tensor)` | `pto.subview` + `pto.tstore` |
| `block.mul(lhs, rhs)` | `pto.tmul` |
| `block.add(a, b, c)` | `pto.taddc` (3-operand add) |
There's an inconsistency in the block.add operation's definition. The documentation here describes it as a 3-operand add (block.add(a, b, c)), and the C++ implementation EmitBlockAdd also expects three arguments. However, the Python frontend wrapper in pypto/language/op/block_ops.py defines add as a 2-operand operation. This discrepancy should be resolved. The documentation should reflect the currently available Python API.
```mlir
%0 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, ...>
%1 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, ...>
%2 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, ...>
```
The example MLIR output for pto.alloc_tile is truncated with .... To make the example more illustrative and complete, consider showing the full attributes, similar to the example in the "Allocation Generation" section. This would provide a clearer, copy-paste-able example for users.
Suggested change:

```diff
-%0 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, ...>
-%1 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, ...>
-%2 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, ...>
+%0 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, v_row=32, v_col=32, blayout=row_major, slayout=none_box, fractal=512, pad=0>
+%1 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, v_row=32, v_col=32, blayout=row_major, slayout=none_box, fractal=512, pad=0>
+%2 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, v_row=32, v_col=32, blayout=row_major, slayout=none_box, fractal=512, pad=0>
```
| "Code generator that transforms PyPTO IR to PTO assembly (.pto files). " | ||
| "Generates PTO ISA instructions in SSA form with tile operations, control flow, and type annotations.") | ||
| .def(nb::init<>(), "Create a new PTO assembly code generator") | ||
| .def("generate", &PTOCodegen::Generate, nb::arg("program"), | ||
| .def("generate", &codegen::PTOCodegen::Generate, nb::arg("program"), | ||
| "Generate PTO assembly from PyPTO IR Program. Returns PTO assembly code string (.pto format) with " | ||
| "instructions like tmul, tadd, FOR/ENDFOR, etc."); |
The docstrings for PTOCodegen and its generate method are outdated. They still refer to generating "PTO assembly" and old instructions like FOR/ENDFOR. These should be updated to reflect that the new implementation generates MLIR in the PTO-ISA dialect.
"Code generator that transforms PyPTO IR to PTO-ISA MLIR dialect. "
"Generates MLIR with PTO-ISA operations like pto.tload, pto.tmul, etc.")
.def(nb::init<>(), "Create a new PTO MLIR code generator")
.def("generate", &codegen::PTOCodegen::Generate, nb::arg("program"),
"Generate PTO-ISA MLIR from a PyPTO IR Program. "
"Returns the MLIR code as a string.");Implement complete MLIR code generator that converts PyPTO IR to PTO-ISA dialect. Key features: - Structured code generation with ordered output (constants, tensor views, allocations, operations) - Automatic lowering of block operations to PTO instructions (tload, tstore, tmul, tadds, taddc) - Implicit subview generation from block.load/store operations - MemRef-based allocation tracking and tile buffer management - Support for TensorType parameters with automatic make_tensor_view generation - Type-aware conversions for DataType and MemorySpace to MLIR equivalents Implementation: - Add PTOCodegen class with Generate() method - Implement PTOMLIRCodegen visitor for IR traversal - Add MemRefCollectorVisitor for allocation discovery - Generate proper SSA form with variable name mappings - Handle constant deduplication and ordering Operator mappings: - block.load → pto.subview + pto.tload - block.store → pto.subview + pto.tstore - block.mul → pto.tmul - block.add → pto.taddc - block.adds → pto.tadds
Implement complete MLIR code generator that converts PyPTO IR to PTO-ISA dialect.
Key features:
Implementation:
Operator mappings:
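As a rough illustration of the two-pass, ordered-output structure described above, here is a hedged sketch of what the top-level Generate() flow could look like. Apart from PTOCodegen::Generate, PTOMLIRCodegen, and MemRefCollectorVisitor, which the description names, every method below (Visit, memrefs, the Emit* steps, str) is an assumption for illustration, not the project's actual API:

```cpp
// Hedged sketch of the flow implied by the description: collect allocations
// first, then emit in the documented order. Helper names are assumptions.
std::string PTOCodegen::Generate(const ir::Program& program) {
  MemRefCollectorVisitor collector;   // pass 1: discover allocations
  collector.Visit(program);

  PTOMLIRCodegen emitter(collector.memrefs());
  emitter.EmitConstants(program);     // deduplicated constants first
  emitter.EmitTensorViews(program);   // make_tensor_view for TensorType params
  emitter.EmitAllocations();          // pto.alloc_tile for collected memrefs
  emitter.Visit(program);             // lowered block ops (tload, tmul, taddc, ...)
  return emitter.str();               // final PTO-ISA MLIR as a string
}
```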