Conversation

@michaelsembwever
Member

https://github.com/riptano/cndb/issues/16174

Port of commit 97eaa52 into main-5.0.

CNDB-16174: CNDB-15469: Implement NVQ for vector graphs built by compaction

Fixes: https://github.com/riptano/cndb/issues/15469
CNDB PR: https://github.com/riptano/cndb/pull/15813

This PR integrates the jvector NVQ feature into SAI vector indexes built via compaction. The feature is disabled by default (gated by the `cassandra.sai.vector.enable_nvq` property) so that recall remains the priority when saving storage is not an explicit concern. The jvector library describes NVQ as follows:

> Support for Non-uniform Vector Quantization (NVQ, pronounced as "new vec"). This new technique quantizes the values in each vector with high accuracy by first applying a nonlinear transformation that is individually fit to each vector. These nonlinearities are designed to be lightweight and have a negligible impact on distance computation performance.

This feature is only available in SAI on-disk version `EC` and later. It can be enabled by setting `cassandra.sai.vector.enable_nvq` to `true` and selecting `cassandra.sai.latest.version=ec` or greater.
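
For illustration, assuming both settings are supplied as ordinary JVM system properties (how they are passed depends on the deployment, e.g. a jvm options file or the startup command line), enabling the feature could look like:

```
-Dcassandra.sai.vector.enable_nvq=true
-Dcassandra.sai.latest.version=ec
```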

When enabled, we can expect NVQ to reduce the storage footprint of the graph (stored in the `TERMS` file), because quantized vectors are stored inline in place of the full-precision vectors. Storing these smaller vectors can also reduce IOPS, since a graph node is more likely to fit within a single 4 KiB page.
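
As a rough, illustrative sizing (assuming 4-byte float32 components and roughly one byte per dimension after quantization; the exact per-vector NVQ overhead depends on the jvector version and parameters):

```
1024-dim full-precision vector: 1024 × 4 bytes = 4096 bytes  (a full 4 KiB page on its own)
1024-dim quantized vector:      1024 × 1 byte  ≈ 1024 bytes  (+ small per-vector NVQ parameters)
```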

We do not expose any new metrics for this feature; its effect is visible only through disk utilization.

When troubleshooting, this log line will help determine which features an on-disk graph is using:

```java
logger.debug("Opened graph for {} for sstable row id offset {} with {} features", source, segmentMetadata.segmentRowIdOffset, features);
```

NVQ will appear in the feature list when it is in use.

tl;dr: NVQ works for earlier versions of CC because the on-disk format hasn't changed and jvector knows how to read it. If you enable NVQ on CC without this PR and with `ann_use_synthetic_score = true`, you might see out-of-order results.

One side effect of NVQ is that the NVQ vector similarity score differs slightly from the full-precision score. This is primarily a problem when the synthetic score is in use (`cassandra.sai.ann_use_synthetic_score`), because the synthetic score was derived from the score stored in the index. Now that this score no longer necessarily equals the full-precision similarity score, we must compute the full-precision score before sending the synthetic score to the coordinator; otherwise we end up with out-of-order vectors.
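
As a minimal sketch of the re-ranking idea (hypothetical names and types, not the code in this PR): candidates coming out of an NVQ graph carry approximate scores, so each one is re-scored against its full-precision vector before the synthetic score is derived and sent to the coordinator.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch only; names are hypothetical and not the PR's actual classes.
public class SyntheticScoreSketch
{
    record Candidate(int rowId, float score) {}

    /** Stand-in for loading the full-precision vector and computing exact similarity. */
    interface FullPrecisionScorer
    {
        float similarity(int rowId);
    }

    static List<Candidate> rerank(List<Candidate> approximateResults, FullPrecisionScorer scorer)
    {
        List<Candidate> exact = new ArrayList<>(approximateResults.size());
        for (Candidate c : approximateResults)
            exact.add(new Candidate(c.rowId(), scorer.similarity(c.rowId())));

        // Order by the exact score; a synthetic score derived from this ordering is
        // monotone in the true similarity, so the coordinator never sees out-of-order vectors.
        exact.sort(Comparator.comparingDouble(Candidate::score).reversed());
        return exact;
    }
}
```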

Because older versions of CC do not correct for this, it is possible to send the wrong score to the coordinator. However, because the feature is disabled by default, the practical risk of doing so is low.

@github-actions

github-actions bot commented Dec 8, 2025

Checklist before you submit for review

  • This PR adheres to the Definition of Done
  • Make sure there is a PR in the CNDB project updating the Converged Cassandra version
  • Use NoSpamLogger for log lines that may appear frequently in the logs
  • Verify test results on Butler
  • Test coverage for new/modified code is > 80%
  • Proper code formatting
  • Proper title for each commit starting with the project-issue number, like CNDB-1234
  • Each commit has a meaningful description
  • Each commit is not very long and contains related changes
  • Renames, moves and reformatting are in distinct commits
  • All new files should contain the DataStax copyright header instead of the Apache License one

@michaelsembwever
Member Author

@djatnieks djatnieks left a comment


There are failures in VectorCompaction100dTest and VectorSiftSmallTest that may be relevant to check out further.

…action (#2030)

+ CNDB-16055: Use tmpFileFor in CompactionGraph to prevent CNDB from remote loading (#2133)


CNDB-16055: Use tmpFileFor in CompactionGraph to prevent CNDB from remote loading (#2133)

Fixes: riptano/cndb#16055

#2030 introduced a bug for CNDB due to the way we inject the file reader. The solution is actually quite simple; I just needed to follow a convention I wasn't familiar with. By calling `tmpFileFor`, I get the proper file extension (which ensures the file is cleaned up in case of restart) and a local-only file.

@michaelsembwever
Member Author

> There are failures in VectorCompaction100dTest and VectorSiftSmallTest that may be relevant to check out further.

They work locally for me. In Jenkins they are timing out…

@michaelsembwever
Member Author

michaelsembwever commented Dec 12, 2025

>> There are failures in VectorCompaction100dTest and VectorSiftSmallTest that may be relevant to check out further.
>
> They work locally for me. In Jenkins they are timing out…

They remain flaky with timeouts in Jenkins. What should we do about this, @djatnieks? Merge the PR and raise them as a separate issue to address later?

I've looked into the logs and don't see anything that stands out; it's busy the whole time (though I wonder if it's thrashing repeatedly on something it shouldn't be…)

Log from Jenkins: TEST-org.apache.cassandra.index.sai.cql.VectorSiftSmallTest.log

@cassci-bot
Copy link

@djatnieks

> What should we do about this, @djatnieks? Merge the PR and raise them as a separate issue to address later?

Yes, I think that is a good plan since they pass locally, and it lets us keep moving forward. Maybe someone from the query team could take a look too if we have a ticket.

@michaelsembwever michaelsembwever merged commit d345a8b into main-5.0 Dec 12, 2025
579 of 594 checks passed
@michaelsembwever michaelsembwever deleted the mck-cndb-16174-main-5.0 branch December 12, 2025 21:59