Partial loading implementation for FAISS HNSW #2405
base: main
Conversation
package org.opensearch.knn.partialloading;

public class KdyPerfCheck {
This is a temporary class for tracking performance.
It will be removed before merging to main.
@@ -106,7 +106,7 @@ public void flush(int maxDoc, final Sorter.DocMap sortMap) throws IOException {
     final QuantizationState quantizationState = train(field.getFieldInfo(), knnVectorValuesSupplier, totalLiveDocs);
     // Check only after quantization state writer finish writing its state, since it is required
     // even if there are no graph files in segment, which will be later used by exact search
-    if (shouldSkipBuildingVectorDataStructure(totalLiveDocs)) {
+    if (false /*TMP*/ && shouldSkipBuildingVectorDataStructure(totalLiveDocs)) {
This is temporary code. I will revert it before merging.
Partial Loading Code Review Break-Down

1. Goal
This document provides a comprehensive overview of a large PR on partial loading, to minimize the time reviewers need to complete a review.

2. Scope
Design document: RFC
1. Supported Vector Types
2. Supported Metrics
3. Filtered Query
4. Nested Vectors
5. Sparse Vector Documents

3. Break-Down
The PR can be divided into two main parts, with the search part further split into five subparts:
[Part 1] Index partial loading
[Part 2] Search
2.1. Partial Loading Basic Framework
2.2. Normal Case — Happy Path
This is the straightforward case: no filtering IDs, no parent IDs, and all documents have indexed vectors.
2.3. Having a Filter
With filtering:
No integer list conversion:
2.4. Having Parent IDs
Parent ID handling:
Conversion to BitSet (see the sketch after this outline):
Grouper creation:
Parent-level BFS in HNSW:
2.5. Sparse Vector Documents
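To make the "Conversion to BitSet" step concrete, here is a minimal sketch (my own illustration, not the PR's code) of turning the matches of a filter or parent-ID iterator into a Lucene FixedBitSet that the partial-loading search path can consume; `FilterBitSets` and `toBitSet` are hypothetical names.

```java
import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.FixedBitSet;

// Hypothetical helper illustrating the "Conversion to BitSet" step above.
final class FilterBitSets {
    // Collect every matching doc id from the iterator into a fixed-size bit set.
    static FixedBitSet toBitSet(DocIdSetIterator matches, int maxDoc) throws IOException {
        final FixedBitSet bits = new FixedBitSet(maxDoc);
        for (int doc = matches.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = matches.nextDoc()) {
            bits.set(doc);
        }
        return bits;
    }
}
```

A parent-ID bit set built this way is then what the grouper and the parent-level BFS would operate on.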
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Still working on this PR
- Should the cache key change here for partial loading? This depends on how we are planning to launch it, but it might be a good idea to change the cache key to avoid any unpredictable behavior.
@@ -87,6 +91,15 @@ public NativeMemoryAllocation.IndexAllocation load(NativeMemoryEntryContext.Inde
     final Directory directory = indexEntryContext.getDirectory();
     final int indexSizeKb = Math.toIntExact(directory.fileLength(vectorFileName) / 1024);

+    // TMP
+    final PartialLoadingMode partialLoadingMode = PartialLoadingMode.DISABLED;
I am a little confused between mapping and setting. What was decided on whether to use a mapping or a setting?
Ideally, if the performance and recall are equal, we should eventually have an option of deprecating whichever approach is not memory-efficient. Will having a mapping make it a one-way door?
- Mapping is preferred, as we want it to be configured at the field level.
- You mean MEMORY_EFFICIENT mode's performance and recall being equal to the baseline that loads everything into memory, right? Its performance can never be equal to the baseline, as it involves load costs. Even when MMapDirectory was configured, the FAISS baseline was shown to have the best performance.
The whole point of MEMORY_EFFICIENT is to give users an option to operate a big vector index in a memory-constrained environment.
    );
}

private void validatePartialLoadingSupported(NativeMemoryEntryContext.IndexEntryContext indexEntryContext, KNNEngine knnEngine)
This validation seems too late; are there any validations like this while creating the mapping?
Yes, I agree! My strategy is to have two separate PRs: 1. the core logic for partial loading, and 2. extending the mapping.
This PR has the core logic, and I will make sure the early validation happens during mapping creation, as you suggested.
Will add a TODO.
@RequiredArgsConstructor
@Getter
public class PartialLoadingContext implements Closeable {
    private final FaissIndex faissIndex;
Do we need faissIndex here? Should we decouple?
The partial loading context holds the FAISS components required for searching.
FaissIndex is the top-level index that recursively delegates search and collects the search results.
The partial loading context will be set in IndexAllocation after a 'partial load' in the native load strategy.
Then, in KNNWeight, it retrieves the context to get the entry point for searching. I think we need to have it here. A minimal sketch of the delegation follows.
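For readers unfamiliar with the FAISS file layout, here is a stand-in sketch (assumed structure, not the PR's code) of the recursive delegation described above, where a top-level id-map index forwards the search to the nested index it wraps:

```java
// All names in this sketch are illustrative stand-ins.
interface FaissIndexSketch {
    void search(float[] query, int k) throws java.io.IOException;
}

final class IdMapIndexSketch implements FaissIndexSketch {
    private final FaissIndexSketch nested; // e.g. the HNSW index it wraps

    IdMapIndexSketch(FaissIndexSketch nested) {
        this.nested = nested;
    }

    @Override
    public void search(float[] query, int k) throws java.io.IOException {
        // Delegate to the nested index; a real id-map would then remap
        // internal vector ids to document ids in the collected results.
        nested.search(query, k);
    }
}
```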
public class PartialLoadingContext implements Closeable {
    private final FaissIndex faissIndex;
    private final String vectorFileName;
    private final PartialLoadingMode partialLoadingMode;
Why is this mode needed here? Just curious.
When loading in the native memory load strategy, it retrieves the mode from the mapping and puts it here.
Then, in KNNWeight, after acquiring the context, it creates a partial-load search strategy based on the mode without having to look up the mapping. A stand-in sketch of this flow is below.
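A reduced sketch of that flow, with the real PR types collapsed into stand-ins (the bodies here are placeholders, not the actual implementation):

```java
// Stand-in for the PR's strategy interface; the search entry point is omitted.
interface PartialLoadingSearchStrategy {
}

enum PartialLoadingMode {
    DISABLED,
    MEMORY_EFFICIENT;

    // Each mode maps to its own strategy; this sketch returns a no-op placeholder.
    PartialLoadingSearchStrategy createSearchStrategy() {
        return new PartialLoadingSearchStrategy() {
        };
    }
}

// The context captures the mode once at load time, so KNNWeight never has to
// consult the field mapping again at search time.
final class PartialLoadingContextSketch {
    private final PartialLoadingMode mode;

    PartialLoadingContextSketch(PartialLoadingMode mode) {
        this.mode = mode;
    }

    PartialLoadingSearchStrategy searchStrategy() {
        return mode.createSearchStrategy();
    }
}
```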
if (indexInput != null) {
    return indexInput.clone();
}
indexInput = directory.openInput(vectorFileName, IOContext.RANDOM);
- Let the client worry about the IOContext/ReadAdvice here?
- Can we initialize the open in the constructor? I am not really comfortable with the functionality of get and open being combined.
- Sure, I think we should let the partial-load search strategy handle the low-level IOContext/ReadAdvice.
- Sure, will update in the next revision to initialize it in the constructor.
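A minimal sketch of the agreed change, assuming the constructor-open approach and a caller-supplied IOContext (class and method names here are hypothetical):

```java
import java.io.Closeable;
import java.io.IOException;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;

// Hypothetical holder: opens the backing IndexInput once at construction,
// so "get" no longer doubles as "open".
final class VectorFileInputHolder implements Closeable {
    private final IndexInput indexInput;

    VectorFileInputHolder(Directory directory, String vectorFileName, IOContext ioContext) throws IOException {
        // The caller decides the IOContext/ReadAdvice, per the review comment.
        this.indexInput = directory.openInput(vectorFileName, ioContext);
    }

    // Clones share the underlying file handle; each search gets its own position.
    IndexInput cloneForSearch() {
        return indexInput.clone();
    }

    @Override
    public void close() throws IOException {
        indexInput.close();
    }
}
```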
-        Map<Integer, Float> result = doExactSearch(context, docs, cardinality, k);
         if (isExactSearchRequire(context, matchDocsCardinality, docIdsToScoreMap.size())) {
+            final BitSetIterator docs = filterWeight != null ? new BitSetIterator(filterBitSet, matchDocsCardinality) : null;
+            Map<Integer, Float> result = doExactSearch(context, docs, matchDocsCardinality, k);
             return new PerLeafResult(filterWeight == null ? null : filterBitSet, result);
         }
         return new PerLeafResult(filterWeight == null ? null : filterBitSet, docIdsToScoreMap);
     }

     private BitSet getFilteredDocsBitSet(final LeafReaderContext ctx) throws IOException {
For partial loading, you can always pass in liveDocs (irrespective of filter weight) and take care of deleted docs while doing the search instead of post-filtering. Currently that's not done because the liveDocs need to be converted to an array to pass to the JNI layer, which might add latency.
Yes, we can pass liveDocs to partial loading and let the components directly filter documents based on it.
But I intentionally tried to make it return output equivalent to the baseline for the same input, to minimize possible side effects.
I had a hunch that a subtle difference introduced at the beginning would gradually bring more subtle differences and make things hard to debug at some point, so I tried to make sure the same input returns the same output.
The current baseline only passes liveDocs (technically, an int array) when a filter is provided.
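For context, this is roughly the conversion the latency concern refers to (an illustration under my own naming, not the plugin's actual code): flattening Lucene's liveDocs Bits into an int array before handing it to the JNI layer.

```java
import java.util.Arrays;

import org.apache.lucene.util.Bits;

// Hypothetical helper showing the O(maxDoc) flattening cost paid per query.
final class LiveDocsArrays {
    static int[] toDocIdArray(Bits liveDocs, int maxDoc) {
        final int[] buffer = new int[maxDoc];
        int count = 0;
        for (int doc = 0; doc < maxDoc; doc++) {
            // A null liveDocs means no deletions: every doc is live.
            if (liveDocs == null || liveDocs.get(doc)) {
                buffer[count++] = doc;
            }
        }
        return Arrays.copyOf(buffer, count);
    }
}
```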
@@ -96,6 +109,45 @@ public NativeMemoryAllocation.IndexAllocation load(NativeMemoryEntryContext.Inde
     }
 }

+private NativeMemoryAllocation.IndexAllocation createPartialLoadedIndexAllocation(
Should we create a separate class that extends NativeMemoryLoadStrategy for this, and possibly a PartialIndexAllocation POJO to hold the context?
This would simplify the code and isolate partial-loading-related code under one fork, rather than an if-fork in each class. Let me know how it turns out.
The decision whether to partial load or not is made within NativeMemoryLoadStrategy::load by fetching the mode from the mapping. I think it would be good to have PartialIndexAllocation to separate the loading logic, but it seems hard to have a subclass of NativeMemoryLoadStrategy.
Will factor out the partial-loading-related logic into PartialIndexAllocation as you suggested in the next revision.
    throw new UnsupportedOperationException("Partial loading does not support radius query with k=0");
}

final PartialLoadingSearchStrategy searchStrategy = partialLoadingContext.getPartialLoadingMode().createSearchStrategy();
Rather than having an interface specifically for PartialLoadingSearchStrategy, have you considered a NativeIndexSearchStrategy? Its implementations would be MemoryEfficientSearchStrategy, which uses partial loading, and DefaultSearchStrategy, which just loads the entire index and calls the JNI layer. You would need a common set of parameters to pass to these strategies.
I like the idea; the only thing bugging me is that it would be cumbersome to define a common set of parameters, as the required parameters vary between partial loading and the baseline.
That is natural, since we're trying to abstract two different mechanisms -- partial loading vs. calling C++ FAISS.
- Partial loading is fine with a bitset, but the baseline needs a long array.
- Generally, partial loading needs more parameters that the baseline does not need at all.
I'm open to it, but I have the impression that we're forcing two different mechanisms under one interface.
Please share your thoughts.
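To make the trade-off above tangible, here is a hedged sketch of the reviewer's suggestion: one interface plus a parameter object that is the union of both mechanisms' needs, with each implementation reading only the fields relevant to it. The type names come from the discussion; the bodies are placeholders, not the PR's code.

```java
import java.util.Map;

import org.apache.lucene.util.BitSet;

interface NativeIndexSearchStrategy {
    Map<Integer, Float> search(SearchParams params);
}

// Union of the parameters both mechanisms need; fields a given strategy does
// not use simply stay null (this is the "forcefully fitting" concern above).
final class SearchParams {
    float[] queryVector;
    int k;
    BitSet filterBitSet;    // consumed by the partial-loading path
    long[] filterIdsForJni; // consumed by the baseline JNI path
}

final class MemoryEfficientSearchStrategy implements NativeIndexSearchStrategy {
    @Override
    public Map<Integer, Float> search(SearchParams params) {
        // Would walk the FAISS HNSW graph through an IndexInput (partial loading).
        throw new UnsupportedOperationException("sketch only");
    }
}

final class DefaultSearchStrategy implements NativeIndexSearchStrategy {
    @Override
    public Map<Integer, Float> search(SearchParams params) {
        // Would call the existing JNI layer over the fully loaded native index.
        throw new UnsupportedOperationException("sketch only");
    }
}
```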
Signed-off-by: Dooyong Kim <[email protected]>
Force-pushed from c42a6bf to 1539547.
Will clean up commit messages before the squash merge.
By passing the cache manager [...]: the partial loading design was inspired by Lucene [...]. Since we rely on the NRT reader, which periodically triggers [...], if we could fully delegate this responsibility to the NRT reader, maintenance would become much easier. I'll assess the required effort and, if feasible, will refactor in the next revision. Otherwise, will take an iterative approach in subsequent PRs. cc @shatejas
I am not really comfortable with the change we are doing to enable partial loading; from the change it feels like we are rewriting the whole FAISS search path again in Java. I know we must have discussed this during design review, but I did not anticipate the magnitude of the change.
After looking at the FAISS library, FAISS has a built-in capability to partially load the file, the way IndexInput does for us, using MMAP.
FAISS already supports mmap-based loading:
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
// I/O code for indexes
#ifndef FAISS_INDEX_IO_H
#define FAISS_INDEX_IO_H
#include <cstdio>
#include <string>
#include <typeinfo>
#include <vector>
/** I/O functions can read/write to a filename, a file handle or to an
* object that abstracts the medium.
*
* The read functions return objects that should be deallocated with
* delete. All references within these objectes are owned by the
* object.
*/
namespace faiss {
struct Index;
struct IndexBinary;
struct VectorTransform;
struct ProductQuantizer;
struct IOReader;
struct IOWriter;
struct InvertedLists;
/// skip the storage for graph-based indexes
const int IO_FLAG_SKIP_STORAGE = 1;
void write_index(const Index* idx, const char* fname, int io_flags = 0);
void write_index(const Index* idx, FILE* f, int io_flags = 0);
void write_index(const Index* idx, IOWriter* writer, int io_flags = 0);
void write_index_binary(const IndexBinary* idx, const char* fname);
void write_index_binary(const IndexBinary* idx, FILE* f);
void write_index_binary(const IndexBinary* idx, IOWriter* writer);
// The read_index flags are implemented only for a subset of index types.
const int IO_FLAG_READ_ONLY = 2;
// strip directory component from ondisk filename, and assume it's in
// the same directory as the index file
const int IO_FLAG_ONDISK_SAME_DIR = 4;
// don't load IVF data to RAM, only list sizes
const int IO_FLAG_SKIP_IVF_DATA = 8;
// don't initialize precomputed table after loading
const int IO_FLAG_SKIP_PRECOMPUTE_TABLE = 16;
// don't compute the sdc table for PQ-based indices
// this will prevent distances from being computed
// between elements in the index. For indices like HNSWPQ,
// this will prevent graph building because sdc
// computations are required to construct the graph
const int IO_FLAG_PQ_SKIP_SDC_TABLE = 32;
// try to memmap data (useful to load an ArrayInvertedLists as an
// OnDiskInvertedLists)
const int IO_FLAG_MMAP = IO_FLAG_SKIP_IVF_DATA | 0x646f0000;
Index* read_index(const char* fname, int io_flags = 0);
Index* read_index(FILE* f, int io_flags = 0);
Index* read_index(IOReader* reader, int io_flags = 0);
IndexBinary* read_index_binary(const char* fname, int io_flags = 0);
IndexBinary* read_index_binary(FILE* f, int io_flags = 0);
IndexBinary* read_index_binary(IOReader* reader, int io_flags = 0);
void write_VectorTransform(const VectorTransform* vt, const char* fname);
void write_VectorTransform(const VectorTransform* vt, IOWriter* f);
VectorTransform* read_VectorTransform(const char* fname);
VectorTransform* read_VectorTransform(IOReader* f);
ProductQuantizer* read_ProductQuantizer(const char* fname);
ProductQuantizer* read_ProductQuantizer(IOReader* reader);
void write_ProductQuantizer(const ProductQuantizer* pq, const char* fname);
void write_ProductQuantizer(const ProductQuantizer* pq, IOWriter* f);
void write_InvertedLists(const InvertedLists* ils, IOWriter* f);
InvertedLists* read_InvertedLists(IOReader* reader, int io_flags = 0);
} // namespace faiss
#endif
If we need to make additional changes to make FAISS work, they would be very minimal. But with this change, we are going to face real maintenance challenges.
I would like to have one more, broader discussion on this, just to revisit things once again.
Thank you @Vikasht34 for taking a look.
I agree about the maintenance costs. But on the flip side, with this, users could run FAISS in a memory-constrained environment, which addresses the current FAISS limitation of requiring memory-heavy, expensive hardware.
Let's have one more discussion among us.
Let's discuss this in detail; I think we need to revisit some parts of it.
Description
RFC : #2401
OpenSearch KNN plugin supports three engines: NMSLIB, FAISS, and Lucene.
The first two, the native engines NMSLIB and FAISS, require all vector-related data structures (such as HNSW graphs) to be loaded into memory for search operations.
For large workloads, this memory cost can quickly become substantial if quantization techniques are not applied.
Therefore, 'Partial Loading' must be enabled as an option in native engines to control the available memory for KNN search. The objective of partial loading is twofold:
To allow users to control the maximum memory available for KNN searching.
To enable native engines to partially load only the necessary data within the constraint.
If we look closely, an HNSW graph mainly consists of the following:
Full-precision 32-bit vectors.
Graph representations.
Metadata such as dimensions, number of vectors, space type, headers, etc.
Of the above, most memory is used by the full-precision vectors: 4 bytes * the number of vectors * the number of dimensions.
FAISS stores these vectors in a flat index; during serialization and deserialization, the vectors are written to and read from the file and placed in main memory, which increases memory consumption.
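As a rough, illustrative calculation (the figures are mine, not from the PR): an index of 10 million 768-dimensional float32 vectors needs about 10,000,000 * 768 * 4 bytes ≈ 30.7 GB for the vectors alone, before any graph or metadata overhead, which is why memory quickly becomes the limiting resource without quantization or partial loading.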
Related Issues
Resolves #[Issue number to be closed when this PR is merged]
#2401
Check List
Commits are signed per the Developer Certificate of Origin (DCO), using --signoff.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following the Developer Certificate of Origin and signing off your commits, please check here.