Expr propagation bug. #81

Open
utku-work wants to merge 1 commit into joeyye-work:for-serving-2.20 from utku-work:fix_expression_propagation

Conversation

@utku-work
Collaborator

No description provided.


Copilot AI left a comment

Pull request overview

This PR addresses incorrect propagation and handling of per-dimension shape expressions by ensuring that expression vectors never hold more entries than the shape's current rank, and by limiting when output-dimension canonicalization runs during Grappler shape inference.

Changes:

  • Gate CanonicalizeOutputDims() in Grappler shape inference behind TensorShapeExpressionsEnabled().
  • Clamp TensorShapeRep expression accessors/mutators so expressions_.size() cannot exceed the shape rank.
  • Add bounds checks in TensorShapeRep::set_expression() and truncate inputs in set_expressions().

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

Reviewed files:

  • tensorflow/core/grappler/costs/graph_properties.cc: Only canonicalize output dims when TensorShape expressions are enabled.
  • tensorflow/core/framework/tensor_shape.h: Ensure get_expressions() does not return more entries than the shape rank.
  • tensorflow/core/framework/tensor_shape.cc: Enforce expression index/rank bounds and truncate expression vectors to rank.


Comment on lines 494 to +523
void TensorShapeRep::set_expression(int d, xla::DynExpr* expr) {
  if (!kTensorShapeExpressionsEnabled) {
    expressions_.clear();
    return;
  }
  CHECK_GE(d, 0);
  CHECK_LT(d, ndims_byte());
  const size_t new_size = static_cast<size_t>(d) + 1;
  if (expressions_.size() < new_size) {
    expressions_.resize(new_size, nullptr);
  }
  expressions_[d] = expr;
}

void TensorShapeRep::AddExpression(xla::DynExpr* expr) {
  if (!kTensorShapeExpressionsEnabled) {
    return;
  }
  CHECK_LT(expressions_.size(), static_cast<size_t>(ndims_byte()));
  expressions_.push_back(expr);
}

void TensorShapeRep::set_expressions(std::vector<xla::DynExpr*> exprs) {
  if (!kTensorShapeExpressionsEnabled) {
    expressions_.clear();
    return;
  }
  if (exprs.size() > static_cast<size_t>(ndims_byte())) {
    exprs.resize(ndims_byte());
  }
  expressions_ = std::move(exprs);
}

Copilot AI Mar 26, 2026

These changes clamp expressions to ndims_byte(), which should prevent producing or propagating TensorShapeProto instances where expressions.size() > dim_size(). Existing unit tests cover TensorShape behavior, but none appear to cover expression truncation or bounds. Please add a test case that exercises setting expressions beyond the current rank and verifies that serialization (AsProto) and getters (get_expressions) stay consistent with dims() when dynamic sizes are enabled.

@utku-work (Collaborator, Author)

Irrelevant.

@utku-work force-pushed the fix_expression_propagation branch from f2b1c4c to 9eb0d14 on March 26, 2026 at 16:04


2 participants