GroupVarInt Encoding Implementation for HNSW Graphs #14932

Open · wants to merge 3 commits into main

Conversation


@aylonsk aylonsk commented Jul 10, 2025

Description

For HNSW graphs, the alternate encoding I implemented is GroupVarInt encoding, which in theory should be cheaper in both space and runtime. Its advantages are that it allocates the space for a group of four integers in advance, and that it encodes using all 8 bits per byte instead of the 7 bits per byte of VarInt. Its drawbacks are that it can only encode integers of at most 32 bits, and that it spends the first byte of each group encoding the size of each number. However, since we use delta encoding to condense our integers, they will never be larger than 32 bits, so the first limitation is irrelevant here.
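To make the layout concrete, here is a minimal standalone sketch of the group-varint technique described above: each group of four ints gets one tag byte holding four 2-bit length codes, followed by 1-4 little-endian payload bytes per value. This is an illustration of the general technique only, not Lucene's actual implementation; the class and method names are invented.

```java
import java.io.ByteArrayOutputStream;

// Hypothetical sketch of group-varint encoding (not Lucene's real code).
class GroupVarInt {
  // Number of bytes needed to store a non-negative int v (1..4).
  static int byteLen(int v) {
    if ((v >>> 8) == 0) return 1;
    if ((v >>> 16) == 0) return 2;
    if ((v >>> 24) == 0) return 3;
    return 4;
  }

  // Encode exactly four values: one tag byte (2 bits per value,
  // storing byteLen - 1), then the payload bytes, little-endian.
  static void encodeGroup(int[] values, ByteArrayOutputStream out) {
    int tag = 0;
    for (int i = 0; i < 4; i++) {
      tag |= (byteLen(values[i]) - 1) << (i * 2);
    }
    out.write(tag);
    for (int i = 0; i < 4; i++) {
      int n = byteLen(values[i]);
      for (int b = 0; b < n; b++) {
        out.write((values[i] >>> (8 * b)) & 0xFF);
      }
    }
  }

  // Decode one group of four values starting at the given offset.
  static int[] decodeGroup(byte[] in, int offset) {
    int tag = in[offset++] & 0xFF;
    int[] values = new int[4];
    for (int i = 0; i < 4; i++) {
      int n = ((tag >>> (i * 2)) & 0x3) + 1;
      int v = 0;
      for (int b = 0; b < n; b++) {
        v |= (in[offset++] & 0xFF) << (8 * b);
      }
      values[i] = v;
    }
    return values;
  }
}
```

Note how, unlike VarInt, the per-value sizes are known up front from the tag byte, so decoding has no per-byte continuation-bit branch.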

Closes #12871


This PR does not have an entry in lucene/CHANGES.txt. Consider adding one. If the PR doesn't need a changelog entry, then add the skip-changelog label to it and you will stop receiving this reminder on future updates to the PR.

@benwtrent
Member

Hi @aylonsk ! Thank you for digging into this issue. I am sure you are still working on it, but I had some feedback:

  • It would be interesting to get statistics around the resulting index size changes and performance changes (index & search). Lucene util is the preferred tool for this.

  • As with most Lucene formats, changes like this need to be backwards compatible. Readers are loaded via their names, so users might have indices with the Lucene99Hnsw format name that do not have group-varint applied and therefore cannot be read by your change here. There are a couple of options to handle this:

    • Add versioning to the format
    • Create a new format (Lucene103Hnsw...) and move Lucene99Hnsw... to the bwc formats package for readers (there are many example PRs in the past doing this).

Handling the format change can be complicated. So, my first step would be to justify the change with performance metrics. Then do all the complicated format stuff.

Good luck!

@aylonsk
Author

aylonsk commented Jul 10, 2025

Thanks for your response! My apologies, I forgot to post my results from LuceneUtil.

Because I noticed variance between each run, I decided to test each set of hyperparameters 10 times and take the median for latency, netCPU, and AvgCpuCount. Therefore, my results aren't in the standard table format.

I ran 12 comparison tests in total, each a different combination of HPs. These parameters were held constant across all tests: topK=100, fanout=50, beamWidth=250, numSegments=1.

Here are some specific tests:

BENCHMARKS (10 runs per test):

  1. Base HPs: nDocs=500,000, maxConn=64, quantized=no, numSegments=1

Baseline:
Recall: 0.832
Latency (Median): 0.73 ms
NetCPU (Median): 0.708 ms
AvgCpuCount (Median): 0.973
Index Size: 220.55MB
Vec Disk/Vec RAM: 190.735MB

Candidate:
Recall: 0.835
Latency (Median): 0.7 ms
NetCPU (Median) 0.677 ms
AvgCpuCount (Median): 0.966
Index Size: 220.12MB
Vec Disk/Vec RAM: 190.735MB

Latency Improvement: ~4.11% speedup

  2. nDocs=500,000, maxConn=32, quantized=no, numSegments=1

Baseline:
Recall: 0.834
Latency (Median): 0.722 ms
NetCPU (Median): 0.701 ms
AvgCpuCount (Median): 0.966
Index Size: 220.19MB
Vec Disk/Vec RAM: 190.735MB

Candidate:
Recall: 0.83
Latency (Median): 0.691 ms
NetCPU (Median): 0.665 ms
AvgCpuCount (Median): 0.96
Index Size: 219.67MB
Vec Disk/Vec RAM: 190.735MB

Latency Improvement: ~4.3% speedup

  3. nDocs=500,000, maxConn=32, quantized=7bits, numSegments=1

Baseline:
Recall: 0.671
Latency (Median): 1.2935 ms
NetCPU (Median): 1.2635 ms
AvgCpuCount (Median): 0.976
Index Size: 255.74MB
Vec Disk: 240.326MB
Vec RAM: 49.591MB

Candidate:
Recall: 0.696
Latency (Median): 1.2525 ms
NetCPU (Median): 1.192 ms
AvgCpuCount (Median): 0.974
Index Size: 259.34MB
Vec Disk: 240.326MB
Vec RAM: 49.591MB

Latency Improvement: ~3.17% speedup

  4. nDocs=2,000,000, maxConn=32, quantized=7bits, numSegments=1

Baseline:
Recall: 0.74
Latency (Median): 2.6675 ms
NetCPU (Median): 2.545 ms
AvgCpuCount (Median): 0.969
Index Size: 1049.52MB
Vec Disk: 961.30MB
Vec RAM: 198.364MB

Candidate:
Recall: 0.717
Latency (Median): 2.521 ms
NetCPU (Median): 2.398 ms
AvgCpuCount (Median): 0.98
Index Size: 1043.27MB
Vec Disk: 961.304MB
Vec RAM: 198.364MB

Latency Improvement: 5.49% speedup

  5. nDocs=100,000, maxConn=64, quantized=7bits, numSegments=1

Baseline:
Recall: 0.848
Latency (Median): 2.305 ms
NetCPU (Median): 2.2575 ms
AvgCpuCount (Median): 0.976
Index Size: 51.52MB
Vec Disk: 48.07MB
Vec RAM: 9.918MB

Candidate:
Recall: 0.848
Latency (Median): 1.85 ms
NetCPU (Median): 1.80 ms
AvgCpuCount (Median): 0.974
Index Size: 51.52MB
Vec Disk: 48.07MB
Vec RAM: 9.918MB

Latency Improvement: ~18.1% speedup

While the degree of improvement varied between tests, all but one showed a latency improvement over the baseline. Considering how simple and non-intrusive this implementation is, I think it is an easy net benefit.

Thank you for letting me know about the backwards compatibility requirement. I will look into fixing that tomorrow.

@benwtrent
Member

@aylonsk great looking numbers! I expect for cheaper vector ops (e.g. single bit quantization), the impact is even higher.

@jpountz
Contributor

jpountz commented Jul 18, 2025

@aylonsk To handle backward compatibility, I'd recommend doing the following:

  • Add a new version constant to the format class, something like VERSION_GROUP_VARINT = 1; VERSION_CURRENT = VERSION_GROUP_VARINT.
  • Add an int version parameter to a pkg-private constructor of this format.
  • Pass this version to the writer, write it in the codec header, and update the writer to use group varint when version >= 1, and vint otherwise.
  • Make the reader read the version from the codec header, use group varint when version >= 1, and vint otherwise.
  • Copy TestLucene99HnswVectorsFormat into a new test case that exercises version=VERSION_START.
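The version-gating scheme sketched in the bullets above could look roughly like this. This is a toy illustration of the dispatch logic with invented class and method names, not the actual Lucene format classes:

```java
// Hypothetical sketch of version-gated decoding; all names here are
// invented for illustration and are not Lucene's real identifiers.
class HnswVersioning {
  static final int VERSION_START = 0;         // original vint encoding
  static final int VERSION_GROUP_VARINT = 1;  // new group-varint encoding
  static final int VERSION_CURRENT = VERSION_GROUP_VARINT;

  final int version; // written to, and later read from, the codec header

  // Pkg-private: tests can force the old version to exercise bwc reads.
  HnswVersioning(int version) {
    this.version = version;
  }

  // Reader and writer choose the neighbor-list codec from the version.
  String neighborEncoding() {
    return version >= VERSION_GROUP_VARINT ? "group-varint" : "vint";
  }
}
```

The key point is that the reader never dispatches on the format name alone: it reads the version back out of the codec header and falls back to plain vint decoding for older segments.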

@aylonsk
Author

aylonsk commented Jul 29, 2025

Hello, and thank you for all of your suggestions. I have updated the reader and format files accordingly to allow for backwards compatibility, using a VERSION_GROUPVARINT parameter in the format class and an interface near the top level of the reader class to keep the runtime impact minimal.

The testing part was trickier, as I needed to create a new class (TestLucene99HnswVectorsFormatV2) that extends the same class (BaseKnnVectorsFormatTestCase) as the original TestLucene99HnswVectorsFormat, but with a getCodec() method that returns a format with the old writer and the new reader. At first I thought I would have to write my own test to read, write, and check docIDs in HNSW graphs, but then I realized that BaseKnnVectorsFormatTestCase already has tests that do this (such as testRecall).

To make this possible, I created two new classes in the lucene99 backward_codecs directory: a VarInt-only writer (Lucene99HnswVectorsWriterV0) and a format that returns the current backwards-compatible reader from its fieldsReader method and the VarInt-only writer from its fieldsWriter method. To confirm the validity of the test, a VarInt-only reader was also created but not committed (Lucene99HnswVectorsReaderV0); when I flipped the format class to use the new writer and the old reader, the testRecall test failed.

Any questions/comments/suggestions are appreciated. Thank you!


@@ -212,7 +213,7 @@ public KnnVectorsReader fieldsReader(SegmentReadState state) throws IOException
 @Override
 public int getMaxDimensions(String fieldName) {
-  return 1024;
+  return 4096;
Contributor


we probably don't want to make this change? At least not as part of this PR :)

@@ -76,6 +79,7 @@ public final class Lucene99HnswVectorsReader extends KnnVectorsReader
 private final FieldInfos fieldInfos;
 private final IntObjectHashMap<FieldEntry> fields;
 private final IndexInput vectorIndex;
+private final Populator dataReader;
Contributor


I think dataReader will confuse people since that is the name of a class (DataReader). Maybe call it decoder? Or neighborDecoder?

@@ -0,0 +1,233 @@
/*
Contributor


So I think we are introducing these backward_codecs in order to be able to write graphs in the old VInt (v0) format in order to test that we are able to read them back with the existing codec?

Given that, I think any additional classes we add here could live in the tests rather than in backward_codecs, since we won't need these for reading old indexes (which is what backward_codecs are for).

Having said that, I wonder if we could add a test-only constructor to the format that would enable it to continue writing the old format?

Member


Why don't we have a package private constructor that allows setting the version for the writer? That way tests can write with the old version if necessary?

Member


Ah, I think that is what you mean by "test only ctor".

I agree, a new package private ctor is likely best and easiest


Successfully merging this pull request may close these issues.

Cut over HNSW's neighbor lists to group-varint?
4 participants