Conversation

@k82cn (Contributor) commented on Jan 23, 2026

No description provided.

Signed-off-by: Klaus Ma <klausm@nvidia.com>
@gemini-code-assist commented
Summary of Changes

Hello @k82cn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily addresses code quality and consistency by resolving various Clippy linter warnings and standardizing error handling mechanisms across the codebase. It refines error type conversions, suppresses linter warnings in generated code, and improves Arrow IPC encoding for robustness, ensuring a cleaner and more maintainable codebase.

Highlights

  • Error Handling Standardization: Refactored error handling in the object_cache module to consistently use the custom FlameError type instead of tonic::Status, providing more granular error information within the application logic.
  • Clippy Warning Suppression: Introduced #[allow(clippy::...)] attributes in cri/build.rs, sdk/rust/build.rs, and session_manager/src/main.rs to suppress specific Clippy linter warnings, particularly for generated protobuf code and intentionally unawaited futures (see the lint-suppression sketch after this list).
  • Improved Error Mapping: Expanded the conversion logic from FlameError to tonic::Status in common/src/lib.rs to map a wider range of FlameError variants to appropriate gRPC Status codes, enhancing error visibility for clients (see the mapping sketch after this list).
  • Arrow IPC Encoding Refinement: Updated Arrow IPC encoding and decoding methods in object_cache/src/cache.rs to use IpcDataGenerator and DictionaryTracker directly, removing the reliance on arrow_flight::utils::flight_data_from_arrow_batch for more explicit control and potentially better compatibility.
  • Code Idiomaticity: Adjusted the PartialOrd implementation for Priority in stdng/src/collections/bin_heap.rs to defer to Ord::cmp, aligning with idiomatic Rust practice for trait implementations (see the PartialOrd sketch after this list).
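
To illustrate the suppression pattern for generated protobuf code and intentionally unawaited futures, a minimal sketch; the module name, proto package, and the exact lint names here are assumptions for illustration, not necessarily the ones used in this PR:

```rust
// Wrapping tonic/prost-generated code in a module lets a single `allow`
// attribute cover the lints code generators commonly trip.
// The package name "flame" is an assumption.
#[allow(clippy::all)]
pub mod rpc {
    tonic::include_proto!("flame");
}

// For an intentionally unawaited future (e.g. a fire-and-forget task),
// clippy's `let_underscore_future` lint can be silenced at the binding site:
fn spawn_background() {
    #[allow(clippy::let_underscore_future)]
    let _ = tokio::spawn(async {
        // background work; the JoinHandle is deliberately dropped
    });
}
```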
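To illustrate the expanded FlameError-to-Status mapping, a minimal sketch; only the Internal variant is visible in this PR's diff, so the other variants and their gRPC code choices are assumptions:

```rust
use tonic::Status;

// Sketch only: `Internal` appears in this PR's diff; the other variants
// are hypothetical stand-ins for the crate's real FlameError enum.
pub enum FlameError {
    NotFound(String),
    InvalidConfig(String),
    Internal(String),
}

impl From<FlameError> for Status {
    fn from(e: FlameError) -> Self {
        match e {
            FlameError::NotFound(msg) => Status::not_found(msg),
            FlameError::InvalidConfig(msg) => Status::invalid_argument(msg),
            FlameError::Internal(msg) => Status::internal(msg),
        }
    }
}
```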
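And the canonical PartialOrd-defers-to-Ord shape that clippy steers toward; the u32 payload of Priority is an assumption about the type's layout:

```rust
use std::cmp::Ordering;

// Sketch: the inner u32 is an assumed stand-in for Priority's real fields.
#[derive(PartialEq, Eq)]
pub struct Priority(u32);

impl Ord for Priority {
    fn cmp(&self, other: &Self) -> Ordering {
        self.0.cmp(&other.0)
    }
}

// Canonical form: PartialOrd defers to Ord::cmp, so the two orderings
// can never disagree.
impl PartialOrd for Priority {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
```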

@gemini-code-assist (bot) left a comment

Code Review

This pull request primarily focuses on addressing clippy warnings and improving error handling consistency across the codebase. The FlameError to Status conversion has been enhanced to map more specific error types to appropriate gRPC status codes, which is a good improvement for API clarity. Additionally, several #[allow(clippy::...)] attributes have been added to generated code to suppress linter warnings, which is a common and acceptable practice. The most significant change involves refactoring the Arrow Flight data serialization in object_cache/src/cache.rs to use a more direct IpcDataGenerator approach.

Comment on lines +598 to +630
```diff
 let mut flight_data_vec = Vec::new();

-// If empty, manually encode the batch
-if flight_data_vec.is_empty() {
-    tracing::warn!("flight_data_from_arrow_batch returned empty, using manual encoding");
-
-    // First, encode and send schema
-    let mut data_gen = IpcDataGenerator::default();
-    let encoded_schema = data_gen.schema_to_bytes(batch.schema().as_ref(), &options);
-
-    let schema_flight_data = FlightData {
-        flight_descriptor: None,
-        app_metadata: vec![].into(),
-        data_header: encoded_schema.ipc_message.into(),
-        data_body: vec![].into(),
-    };
-    flight_data_vec.push(schema_flight_data);
-
-    // Then, send the batch data
-    let mut dictionary_tracker = arrow::ipc::writer::DictionaryTracker::new(false);
-
-    let (encoded_dictionaries, encoded_batch) = data_gen
-        .encoded_batch(batch, &mut dictionary_tracker, &options)
-        .map_err(|e| Status::internal(format!("Failed to encode batch: {}", e)))?;
-
-    // Add dictionary batches if any
-    for dict_batch in encoded_dictionaries {
-        flight_data_vec.push(dict_batch.into());
-    }
-
-    // Add the data batch
-    flight_data_vec.push(encoded_batch.into());
+// Encode using IpcDataGenerator directly with DictionaryTracker
+let data_gen = IpcDataGenerator::default();
+let mut dict_tracker = DictionaryTracker::new(false);
+
+// First, encode and send schema
+let encoded_schema = data_gen.schema_to_bytes_with_dictionary_tracker(
+    batch.schema().as_ref(),
+    &mut dict_tracker,
+    &options,
+);
+
+let schema_flight_data = FlightData {
+    flight_descriptor: None,
+    app_metadata: vec![].into(),
+    data_header: encoded_schema.ipc_message.into(),
+    data_body: vec![].into(),
+};
+flight_data_vec.push(schema_flight_data);
+
+// Then, send the batch data
+let (encoded_dictionaries, encoded_batch) = data_gen
+    .encoded_batch(batch, &mut dict_tracker, &options)
+    .map_err(|e| FlameError::Internal(format!("Failed to encode batch: {}", e)))?;
+
+// Add dictionary batches if any
+for dict_batch in encoded_dictionaries {
+    flight_data_vec.push(dict_batch.into());
+}
+
+// Add the data batch
+flight_data_vec.push(encoded_batch.into());
```
@gemini-code-assist (bot) commented (severity: medium)

The batch_to_flight_data_vec function has been significantly refactored to remove the use of arrow_flight::utils::flight_data_from_arrow_batch and implement the encoding manually. While this might resolve specific issues or provide more control, it's a substantial change to the data serialization logic. Could you please provide more context on why this change was necessary? Was flight_data_from_arrow_batch causing problems, or is this a performance/control optimization?
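
For background on that question: flight_data_from_arrow_batch was deprecated upstream in favor of using IpcDataGenerator directly, so the refactor follows the direction the arrow-flight crate itself took. The removed call site would have looked roughly like the following; the signature is recalled from older arrow-flight releases, so treat it as an assumption:

```rust
// Old call site (deprecated arrow-flight helper), approximately:
// it returned (dictionary FlightData messages, the batch's FlightData).
let (dictionaries, batch_data) =
    arrow_flight::utils::flight_data_from_arrow_batch(&batch, &options);
flight_data_vec.extend(dictionaries);
flight_data_vec.push(batch_data);
```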

Signed-off-by: Klaus Ma <klausm@nvidia.com>
@k82cn merged commit aa83585 into xflops:main on Jan 23, 2026. 4 checks passed.