5 changes: 5 additions & 0 deletions .cursor/rules.md
@@ -0,0 +1,5 @@
The AI agents should never link to any issue or pull request
in any GitHub repository in code reviews!

The AI agents should not review AI agent config files such as CLAUDE.md or AGENTS.md!

5 changes: 5 additions & 0 deletions AGENTS.md
@@ -0,0 +1,5 @@
The AI agents should never link to any issue or pull request
in any GitHub repository in code reviews!

The AI agents should not review AI agent config files such as CLAUDE.md or AGENTS.md!

5 changes: 5 additions & 0 deletions CLAUDE.md
@@ -0,0 +1,5 @@
The AI agents should never link to any issue or pull request
in any GitHub repository in code reviews!

The AI agents should not review AI agent config files such as CLAUDE.md or AGENTS.md!

30 changes: 20 additions & 10 deletions native/shuffle/src/partitioners/partitioned_batch_iterator.rs
@@ -97,15 +97,25 @@ impl Iterator for PartitionedBatchIterator<'_> {

let indices_end = std::cmp::min(self.pos + self.batch_size, self.indices.len());
let indices = &self.indices[self.pos..indices_end];
match interleave_record_batch(&self.record_batches, indices) {
Ok(batch) => {
self.pos = indices_end;
Some(Ok(batch))
}
Err(e) => Some(Err(DataFusionError::ArrowError(
Box::from(e),
Some(DataFusionError::get_back_trace()),
))),
}

// record_batches is guaranteed non-empty when indices is non-empty
// (indices reference rows within the buffered batches)
let schema = self.record_batches[0].schema();

let result = if !schema.fields.is_empty() {
interleave_record_batch(&self.record_batches, indices)
} else {
// For zero-column batches (e.g. COUNT queries), we can't use
// interleave_record_batch because Arrow requires either at least one
// column or an explicit row count. Create the batch directly.
let options =
arrow::array::RecordBatchOptions::new().with_row_count(Some(indices.len()));
RecordBatch::try_new_with_options(schema, vec![], &options)
};

self.pos = indices_end;
Augment AI reviewer:
native/shuffle/src/partitioners/partitioned_batch_iterator.rs:116 — self.pos is advanced unconditionally before returning, so an Err from interleave_record_batch/try_new_with_options will still consume those indices and the iterator can continue yielding later batches. Is that behavior change from the previous implementation (which only advanced on Ok) intended?

Severity: low


Owner Author:
value:good-but-wont-fix; category:bug; feedback: The Augment AI reviewer is correct! Yes, this is intentional. Without advancing the position, a retry would hit the same error again and the iterator could never make progress; advancing lets it skip the problematic data and continue.

Some(result.map_err(|e| {
DataFusionError::ArrowError(Box::from(e), Some(DataFusionError::get_back_trace()))
}))
}
}
@@ -474,4 +474,27 @@ class CometNativeShuffleSuite extends CometTestBase with AdaptiveSparkPlanHelper
}
}
}

test("native datafusion scan - repartition count") {
withTempPath { dir =>
withSQLConf(CometConf.COMET_ENABLED.key -> "false") {
spark
.range(1000)
.selectExpr("id", "concat('name_', id) as name")
.repartition(100)
.write
.parquet(dir.toString)
}
withSQLConf(
CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_NATIVE_DATAFUSION,
CometConf.COMET_EXEC_SHUFFLE_WITH_ROUND_ROBIN_PARTITIONING_ENABLED.key -> "true") {
val testDF = spark.read.parquet(dir.toString).repartition(10)
// Actual validation, no crash
val count = testDF.count()
Augment AI reviewer:
spark/src/test/scala/org/apache/comet/exec/CometNativeShuffleSuite.scala:493 — This test asserts the count() result and that testDF (without the count) runs with Comet, but it doesn’t assert that the count() plan used Comet native operators (it could potentially fall back and still pass). That could let the original zero-column native-shuffle crash slip through if fallback behavior changes.

Severity: medium


Owner Author:
value:useful; category:bug; feedback: The Augment AI reviewer is correct! The checkSparkAnswerAndOperator() call, which executes the DataFrame on both Spark and Comet, does not exercise count(). To keep the test focused, it would be good to run the same count() query there as well.

assert(count == 1000)
// Ensure test df evaluated by Comet
checkSparkAnswerAndOperator(testDF)
}
}
}
}