
Conversation

@lyfne123
Contributor

Implement complete MLIR code generator that converts PyPTO IR to PTO-ISA dialect.

Key features:

  • Structured code generation with ordered output (constants, tensor views, allocations, operations)
  • Automatic lowering of block operations to PTO instructions (tload, tstore, tmul, tadds, taddc)
  • Implicit subview generation from block.load/store operations
  • MemRef-based allocation tracking and tile buffer management
  • Support for TensorType parameters with automatic make_tensor_view generation
  • Type-aware conversions for DataType and MemorySpace to MLIR equivalents

Implementation:

  • Add PTOCodegen class with Generate() method
  • Implement PTOMLIRCodegen visitor for IR traversal
  • Add MemRefCollectorVisitor for allocation discovery
  • Generate proper SSA form with variable name mappings
  • Handle constant deduplication and ordering
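
The constant-handling step above can be illustrated with a minimal Python sketch (the actual implementation is C++; the function name and SSA naming scheme here are hypothetical):

```python
# Minimal sketch of constant deduplication with ordered SSA emission.
# The real codegen is C++; `emit_constants` and the `%cN` naming are
# illustrative assumptions, not the PR's actual API.

def emit_constants(values):
    """Deduplicate constants, keeping first-seen order, and assign SSA names."""
    ssa_names = {}  # constant value -> SSA name
    lines = []
    for v in values:
        if v in ssa_names:
            continue  # already emitted: reuse the existing SSA name
        name = f"%c{len(ssa_names)}"
        ssa_names[v] = name
        lines.append(f"{name} = arith.constant {v} : index")
    return ssa_names, lines

names, lines = emit_constants([0, 32, 0, 64, 32])
# Each distinct constant is emitted once, in first-occurrence order.
```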

Operator mappings:

  • block.load → pto.subview + pto.tload
  • block.store → pto.subview + pto.tstore
  • block.mul → pto.tmul
  • block.add → pto.taddc
  • block.adds → pto.tadds
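
The mapping table above can be sketched as a simple lookup (a hypothetical Python rendition for illustration; the codegen implements this in C++):

```python
# Sketch of the operator mapping table listed above. Purely illustrative;
# the real lowering lives in the C++ visitor.
BLOCK_OP_LOWERING = {
    "block.load":  ["pto.subview", "pto.tload"],   # implicit subview + load
    "block.store": ["pto.subview", "pto.tstore"],  # implicit subview + store
    "block.mul":   ["pto.tmul"],
    "block.add":   ["pto.taddc"],                  # 3-operand add
    "block.adds":  ["pto.tadds"],                  # add-scalar
}

def lower_op(name):
    """Return the PTO-ISA instruction sequence for a PyPTO block op."""
    return BLOCK_OP_LOWERING[name]
```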

@gemini-code-assist
Copy link
Contributor

Summary of Changes

Hello @lyfne123, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly upgrades the PyPTO compiler's backend by replacing the custom PTO assembly generator with a new implementation that produces PTO-ISA MLIR dialect. This change modernizes the intermediate representation, enabling more robust type handling, structured code generation, and better integration with MLIR's extensive optimization framework. The refactoring improves the compiler's foundation for future enhancements and broader hardware support.

Highlights

  • New MLIR Code Generation: The PTOCodegen has been completely reimplemented to generate PTO-ISA MLIR dialect instead of the previous PTO assembly format, providing a more structured and extensible intermediate representation.
  • Automatic Lowering of Block Operations: PyPTO IR block operations are now automatically lowered to their corresponding PTO-ISA MLIR instructions, including block.load to pto.subview + pto.tload, block.store to pto.subview + pto.tstore, block.mul to pto.tmul, block.add to pto.taddc, and block.adds to pto.tadds.
  • Structured MLIR Output: The generated MLIR code follows a fixed order: constants, tensor views (pto.make_tensor_view), tile buffer allocations (pto.alloc_tile), and then the main function operations, ensuring predictable and readable output.
  • Enhanced Type and Memory Handling: The codegen now includes type-aware conversions for DataType and MemorySpace to MLIR equivalents, and supports MemRef-based allocation tracking for tile buffers.
  • Updated Documentation and Testing: New comprehensive documentation (docs/dev/11-pto_codegen.md) detailing the MLIR codegen's features, architecture, and usage has been added, alongside new unit tests (examples/codegen/test_pto_codegen.py) specifically for MLIR generation.
  • Removal of Old PTO Assembly Logic: Legacy PTO assembly generation code, examples, and reference files have been removed, streamlining the codebase to focus on the new MLIR-based approach.
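
The fixed output ordering described in "Structured MLIR Output" can be sketched as follows (a minimal assumption-laden illustration; `assemble_module` and the section contents are hypothetical):

```python
# Sketch of the codegen's fixed section ordering: constants, then tensor
# views, then tile allocations, then the main operations. Hypothetical
# helper; section contents are placeholders.
SECTION_ORDER = ["constants", "tensor_views", "allocations", "operations"]

def assemble_module(sections):
    """Concatenate generated sections in the codegen's fixed order."""
    out = []
    for key in SECTION_ORDER:
        out.extend(sections.get(key, []))
    return "\n".join(out)

mlir = assemble_module({
    "operations":  ["pto.tload ..."],
    "constants":   ["%c0 = arith.constant 0 : index"],
    "allocations": ["%0 = pto.alloc_tile : <...>"],
})
```

Regardless of the order in which sections are produced during IR traversal, emission always follows `SECTION_ORDER`, which is what makes the output predictable.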


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This PR is a major and valuable refactoring of the codegen component, moving from a custom PTO assembly format to the more standard and extensible MLIR PTO-ISA dialect. The implementation is well-structured, using a visitor pattern for IR traversal and separating concerns like allocation collection and code generation. The new documentation and tests are comprehensive and align well with the new functionality.

I've identified a few areas for improvement, primarily concerning hardcoded attributes in the MLIR generation which limit its flexibility, and a critical inconsistency between the Python frontend and C++ backend for the block.add operation. Addressing these will make the new codegen more robust and generally applicable.

```cpp
// block.add(a, b, c) -> a + b + c
auto a = As<ir::Var>(op->args_[0]);
auto b = As<ir::Var>(op->args_[1]);
auto c = As<ir::Var>(op->args_[2]);
```

critical

This line accesses op->args_[2], assuming block.add is always a 3-operand operation. However, the Python frontend in pypto/language/op/block_ops.py defines add as a 2-operand operation. This discrepancy will lead to an out-of-bounds access and a crash at runtime. The implementation should be made robust to handle the arity defined in the Python frontend, or the frontend should be updated to match the backend's expectation.
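
One way to make the lowering robust to arity, sketched in Python for illustration (the actual backend is C++, and the choice of a 2-operand `pto.tadd` fallback is an assumption, not the PR's fix):

```python
# Hypothetical arity-aware lowering for block.add: dispatch on operand
# count instead of unconditionally reading args_[2]. Instruction names
# for the 2-operand case are assumptions.
def lower_block_add(args):
    if len(args) == 2:
        a, b = args
        return f"pto.tadd {a}, {b}"        # plain 2-operand add (assumed op)
    if len(args) == 3:
        a, b, c = args
        return f"pto.taddc {a}, {b}, {c}"  # 3-operand add
    raise ValueError(f"block.add expects 2 or 3 operands, got {len(args)}")
```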

Comment on lines +620 to +626
```cpp
stream_ << " : !pto.tile_buf<loc=ub, dtype=f32, rows=32, cols=32, v_row=32, v_col=32, ";
stream_ << "blayout=row_major, slayout=none_box, fractal=512, pad=0>, " << rhs_buf;
stream_ << " : !pto.tile_buf<loc=ub, dtype=f32, rows=32, cols=32, v_row=32, v_col=32, ";
stream_ << "blayout=row_major, slayout=none_box, fractal=512, pad=0>) outs(";
stream_ << result_buf;
stream_ << " : !pto.tile_buf<loc=ub, dtype=f32, rows=32, cols=32, v_row=32, v_col=32, ";
stream_ << "blayout=row_major, slayout=none_box, fractal=512, pad=0>)\n";
```

high

The attributes for !pto.tile_buf (like dtype, rows, cols) are hardcoded here and in other Emit* functions. While the PR description mentions this as a limitation, it severely restricts the codegen's utility. The implementation should be updated to infer these attributes from the TileType of the IR variables involved in the operation. This would make the codegen more general and robust.
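
The reviewer's suggestion can be sketched as deriving the type string from the tile's own attributes (illustrative Python; `TileDesc` and its fields are hypothetical stand-ins for the IR's TileType):

```python
# Sketch of inferring !pto.tile_buf attributes from a tile descriptor
# instead of hardcoding them. `TileDesc` is a hypothetical stand-in for
# the IR's TileType; field names and the v_row/v_col mapping are assumptions.
from dataclasses import dataclass

@dataclass
class TileDesc:
    loc: str = "ub"
    dtype: str = "f32"
    rows: int = 32
    cols: int = 32

def tile_buf_type(t: TileDesc) -> str:
    """Render the tile_buf type string from the tile's own attributes."""
    return (f"!pto.tile_buf<loc={t.loc}, dtype={t.dtype}, "
            f"rows={t.rows}, cols={t.cols}, v_row={t.rows}, v_col={t.cols}, "
            f"blayout=row_major, slayout=none_box, fractal=512, pad=0>")
```

With this shape, an `f16` tile of a different size renders its own attributes rather than inheriting the hardcoded `f32`/32x32 values.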

| `block.load(tensor, row, col, h, w)` | `pto.subview` + `pto.tload` |
| `block.store(tile, row, col, h, w, tensor)` | `pto.subview` + `pto.tstore` |
| `block.mul(lhs, rhs)` | `pto.tmul` |
| `block.add(a, b, c)` | `pto.taddc` (3-operand add) |

medium

There's an inconsistency in the block.add operation's definition. The documentation here describes it as a 3-operand add (block.add(a, b, c)), and the C++ implementation EmitBlockAdd also expects three arguments. However, the Python frontend wrapper in pypto/language/op/block_ops.py defines add as a 2-operand operation. This discrepancy should be resolved. The documentation should reflect the currently available Python API.

Comment on lines +253 to +255
```mlir
%0 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, ...>
%1 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, ...>
%2 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, ...>
```

medium

The example MLIR output for pto.alloc_tile is truncated with .... To make the example more illustrative and complete, consider showing the full attributes, similar to the example in the "Allocation Generation" section. This would provide a clearer, copy-paste-able example for users.

Suggested change:

```diff
-%0 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, ...>
-%1 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, ...>
-%2 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, ...>
+%0 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, v_row=32, v_col=32, blayout=row_major, slayout=none_box, fractal=512, pad=0>
+%1 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, v_row=32, v_col=32, blayout=row_major, slayout=none_box, fractal=512, pad=0>
+%2 = pto.alloc_tile : <loc=ub, dtype=f32, rows=32, cols=32, v_row=32, v_col=32, blayout=row_major, slayout=none_box, fractal=512, pad=0>
```

Comment on lines 740 to 745
"Code generator that transforms PyPTO IR to PTO assembly (.pto files). "
"Generates PTO ISA instructions in SSA form with tile operations, control flow, and type annotations.")
.def(nb::init<>(), "Create a new PTO assembly code generator")
.def("generate", &PTOCodegen::Generate, nb::arg("program"),
.def("generate", &codegen::PTOCodegen::Generate, nb::arg("program"),
"Generate PTO assembly from PyPTO IR Program. Returns PTO assembly code string (.pto format) with "
"instructions like tmul, tadd, FOR/ENDFOR, etc.");

medium

The docstrings for PTOCodegen and its generate method are outdated. They still refer to generating "PTO assembly" and old instructions like FOR/ENDFOR. These should be updated to reflect that the new implementation generates MLIR in the PTO-ISA dialect.

      "Code generator that transforms PyPTO IR to PTO-ISA MLIR dialect. "
      "Generates MLIR with PTO-ISA operations like pto.tload, pto.tmul, etc.")
      .def(nb::init<>(), "Create a new PTO MLIR code generator")
      .def("generate", &codegen::PTOCodegen::Generate, nb::arg("program"),
           "Generate PTO-ISA MLIR from a PyPTO IR Program. "
           "Returns the MLIR code as a string.");

@lyfne123 lyfne123 merged commit 4783158 into hw-native-sys:main Jan 31, 2026
2 checks passed