25 changes: 19 additions & 6 deletions store_handler/data_store_service_client_closure.cpp
@@ -1276,11 +1276,17 @@ void LoadRangeSliceCallback(void *data,

std::string key_str, value_str;
uint64_t ts, ttl;
uint64_t snapshot_ts = fill_store_slice_req->SnapshotTs();
for (uint32_t i = 0; i < items_size; i++)
{
scan_next_closure->GetItem(i, key_str, value_str, ts, ttl);
txservice::TxKey key =
catalog_factory->CreateTxKey(key_str.data(), key_str.size());
if (i == items_size - 1)
{
fill_store_slice_req->kv_start_key_ =
std::string_view(key.Data(), key.Size());
}
std::unique_ptr<txservice::TxRecord> record =
catalog_factory->CreateTxRecord();
bool is_deleted = false;
@@ -1308,18 +1314,25 @@ void LoadRangeSliceCallback(void *data,
std::abort();
}

if ((snapshot_ts == 0 && is_deleted) ||
(snapshot_ts > 0 && snapshot_ts > ts))
{
// if it is not a snapshot read, there is no need to return
// deleted keys.
// If it is a snapshot read and the latest version is newer than
// snapshot read ts, we need to backfill deleted key since there
// might be visible archive version of this key for snapshot ts.
// The caller will decide if reading archive table is necessary
// based on the deleted key version.
continue;
}
Comment on lines +1317 to +1328

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# Get file size to understand how to read it safely
wc -l store_handler/data_store_service_client_closure.cpp

Repository: eloqdata/tx_service



🏁 Script executed:

# View the disputed lines 1317-1328 with context
sed -n '1300,1350p' store_handler/data_store_service_client_closure.cpp | cat -n



🏁 Script executed:

# Also view the similar pattern at lines 219-220 mentioned in the review
sed -n '210,230p' store_handler/data_store_service_client_closure.cpp | cat -n



🏁 Script executed:

# Search for function names containing the target lines to understand context
rg -nP 'LoadRangeSliceCallback|FetchRecordCallback' store_handler/data_store_service_client_closure.cpp | head -20



Critical: Snapshot filtering condition is inverted.

The condition snapshot_ts > ts at line 1319 contradicts both the comment and MVCC semantics:

  • Comment states: "the latest version is newer than snapshot read ts" (implies ts > snapshot_ts)
  • Code checks: snapshot_ts > ts (opposite condition)
  • MVCC rule: A snapshot at time S should skip records with ts > S (records newer than snapshot)
  • Confirming pattern (lines 219-220): Uses snapshot_read_ts_ < rec_ts_ to detect when a record is too new and requires archive fetch

The condition should be ts > snapshot_ts instead of snapshot_ts > ts to correctly skip records newer than the snapshot timestamp.
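The corrected skip rule can be sketched as a standalone predicate. This is a hypothetical `ShouldSkipItem` helper, not code from the PR; it applies the reviewer's suggested `ts > snapshot_ts` comparison:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch of the skip decision in the scan callback.
// snapshot_ts == 0 means a non-snapshot read.
bool ShouldSkipItem(uint64_t snapshot_ts, uint64_t ts, bool is_deleted)
{
    if (snapshot_ts == 0)
    {
        // Non-snapshot read: deleted keys carry no information for the caller.
        return is_deleted;
    }
    // Snapshot read: skip versions newer than the snapshot timestamp; the
    // caller can then decide whether to fetch an older archive version.
    return ts > snapshot_ts;
}
```

With this orientation, an old (possibly deleted) version at or below the snapshot timestamp is kept, matching the comment's backfill intent.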

🤖 Prompt for AI Agents
In @store_handler/data_store_service_client_closure.cpp around lines 1317-1328,
the snapshot filtering condition is inverted: inside the if-block that checks
(snapshot_ts == 0 && is_deleted) || (snapshot_ts > 0 && snapshot_ts > ts), flip
the comparison to (ts > snapshot_ts) so it reads (snapshot_ts == 0 &&
is_deleted) || (snapshot_ts > 0 && ts > snapshot_ts). Update that condition in
the same if used for skipping deleted/newer records (variables snapshot_ts, ts,
is_deleted) so the code matches the comment and MVCC semantics (skip records
newer than the snapshot) and retains the existing continue/backfill behavior.


if (!is_deleted)
{
record->Deserialize(value_str.data(), offset);
}
}

if (i == items_size - 1)
{
fill_store_slice_req->kv_start_key_ =
std::string_view(key.Data(), key.Size());
}

fill_store_slice_req->AddDataItem(
std::move(key), std::move(record), ts, is_deleted);
}
6 changes: 5 additions & 1 deletion tx_service/include/cc/template_cc_map.h
@@ -5421,7 +5421,11 @@ class TemplateCcMap : public CcMap
range_ptr->FindSlice(slice_key);

assert(slice->PostCkptSize() != UINT64_MAX);
return slice->PostCkptSize() > StoreSlice::slice_upper_bound;
// Only need to split the slice when the post ckpt size of the slice
// is greater than the current size and greater than the slice upper
// bound.
return (slice->PostCkptSize() > StoreSlice::slice_upper_bound &&
slice->PostCkptSize() > slice->Size());
};
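The new split predicate can be sketched in isolation. The `Slice` struct and `kSliceUpperBound` constant below are illustrative stand-ins; the real code uses `StoreSlice::slice_upper_bound` and member accessors:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-in for StoreSlice: only the two sizes the predicate needs.
struct Slice
{
    uint64_t post_ckpt_size;  // size after the pending checkpoint
    uint64_t size;            // current size
};

constexpr uint64_t kSliceUpperBound = 1024;

// Split only when the post-checkpoint size exceeds both the upper bound and
// the current size, so a slice that shrinks or stays level is never split.
bool NeedSplit(const Slice &slice)
{
    return slice.post_ckpt_size > kSliceUpperBound &&
           slice.post_ckpt_size > slice.size;
}
```

The second conjunct is the PR's addition: without it, a large slice whose checkpoint only removes data could still be flagged for splitting.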

const KeyT *const req_start_key = req.start_key_ != nullptr
5 changes: 4 additions & 1 deletion tx_service/src/cc/local_cc_shards.cpp
@@ -5462,8 +5462,11 @@ void LocalCcShards::UpdateSlices(const TableName &table_name,
: false;

uint64_t slice_post_ckpt_size = curr_slice->PostCkptSize();
// If post ckpt size of the slice is less than or equal to the current
// size, there is no need to split the slice.
if (slice_post_ckpt_size == UINT64_MAX ||
slice_post_ckpt_size <= StoreSlice::slice_upper_bound)
slice_post_ckpt_size <= StoreSlice::slice_upper_bound ||
slice_post_ckpt_size <= curr_slice->Size())
{
// Case 1: There is no unpersisted data in the current slice, so no
// need to split the slice. Only to migrate this data from the old
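The mirrored early-out in `UpdateSlices` can be sketched the same way (illustrative names; `kUnknownSize` stands in for the `UINT64_MAX` sentinel meaning no unpersisted checkpoint data):

```cpp
#include <cassert>
#include <cstdint>

constexpr uint64_t kSliceUpperBound = 1024;
constexpr uint64_t kUnknownSize = UINT64_MAX;  // no unpersisted data

// Sketch of the skip condition: splitting is skipped when there is no new
// checkpoint data, the slice stays within the upper bound, or the
// post-checkpoint size did not grow past the current size.
bool SkipSplit(uint64_t post_ckpt_size, uint64_t curr_size)
{
    return post_ckpt_size == kUnknownSize ||
           post_ckpt_size <= kSliceUpperBound ||
           post_ckpt_size <= curr_size;
}
```

Note that `SkipSplit` is exactly the negation of the `NeedSplit` check added in template_cc_map.h, keeping the two call sites consistent.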