Summary:
Updated the changelog with the features added since the last release; see 1.7.3_release...main for details. Please comment if you want to highlight anything that I've missed.
Pull Request resolved: #2820
Reviewed By: mdouze
Differential Revision: D44922916
Pulled By: mlomeli1
fbshipit-source-id: db16754698af4dd0fb8dddff7ec9885170a3d5c4
CHANGELOG.md (+36 −1)
@@ -9,6 +9,40 @@ We try to indicate most contributions here with the contributor names who are not of
 the Facebook Faiss team. Feel free to add entries here if you submit a PR.
 
 ## [Unreleased]
+
+## [1.7.4] - 2023-04-12
+### Added
+- Added big batch IVF search for conducting efficient search with large batches of queries
+- Checkpointing support in big batch search
+- Precomputed centroids support
+- Support for iterable inverted lists, e.g. for key-value stores
+- 64-bit indexing arithmetic support in Faiss GPU
+- IndexIVFShards now handles IVF indexes with a common quantizer
+- Jaccard distance support
+- CodePacker for non-contiguous code layouts
+- Approximate evaluation of top-k distances for ResidualQuantizer and IndexBinaryFlat
+- Added support for 12-bit PQ / IVFPQ fine quantizer decoders for standalone vector codecs (faiss/cppcontrib)
+- Conda packages for osx-arm64 (Apple M1) and linux-aarch64 (ARM64) architectures
+- Support for Python 3.10
+
+### Removed
+- CUDA 10 is no longer supported in precompiled packages
+- Removed Python 3.7 support for precompiled packages
+- Removed the constraint that the IVFPQ fine quantizer use at most 8 bits; for example, it is now possible to use IVF256,PQ10x12 for a CPU index
+
+### Changed
+- Various performance optimizations for PQ / IVFPQ on AVX2 and ARM, covering training (fused distance+nearest kernel), search (faster kernels for distance_to_code() and scan_list_*()) and vector encoding
+- An order of magnitude faster CPU code for LSQ/PLSQ training and vector encoding (reworked code)
+- Performance improvements for Hamming code computations on AVX2 and ARM (reworked code)
+- Improved auto-vectorization support for IP and L2 distance computations (better handling of pragmas)
+- Improved ResidualQuantizer vector encoding (pooled memory allocations, avoiding reads/writes to a temporary buffer)
+
+### Fixed
+- Fixed an HNSW bug that improves the recall rate! Special thanks to zh Wang @hhy3 for this.
+- Fixed Faiss GPU IVF search with large query batches
+- Faiss + Torch fixes; re-enabled k = 2048
+- Fixed the number of distance computations to match the max_codes parameter
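To illustrate the relaxed IVFPQ fine-quantizer constraint mentioned under "Removed", here is a minimal sketch (not part of the PR) that builds a CPU index from the IVF256,PQ10x12 factory string quoted in the changelog; the dimensionality, dataset sizes and nprobe value are illustrative assumptions, and it assumes a faiss build at 1.7.4 or later.

```python
import numpy as np
import faiss  # assumes faiss >= 1.7.4

d = 80                                              # illustrative dimensionality, divisible by the 10 PQ sub-quantizers
xb = np.random.rand(50_000, d).astype("float32")    # toy database vectors
xq = np.random.rand(10, d).astype("float32")        # toy query vectors

# PQ10x12 = 10 sub-quantizers with 12-bit codebooks; factory strings with a
# fine quantizer above 8 bits were rejected by CPU IVFPQ before this release.
index = faiss.index_factory(d, "IVF256,PQ10x12")
index.train(xb)      # trains the IVF coarse quantizer and the 12-bit PQ
index.add(xb)

index.nprobe = 16    # number of inverted lists probed at search time
D, I = index.search(xq, 5)
print(I.shape)       # (10, 5) nearest-neighbor ids
```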
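And a hedged sketch of the new Jaccard distance support from the "Added" list, assuming the metric is exposed as faiss.METRIC_Jaccard and accepted by a flat (brute-force) index; the data shapes are again illustrative.

```python
import numpy as np
import faiss  # assumes faiss >= 1.7.4

# Jaccard distance is intended for non-negative vectors (e.g. set- or histogram-like data).
d = 32
xb = np.random.rand(1_000, d).astype("float32")
xq = np.random.rand(5, d).astype("float32")

# Brute-force index using the Jaccard metric instead of L2 / inner product.
index = faiss.IndexFlat(d, faiss.METRIC_Jaccard)
index.add(xb)

D, I = index.search(xq, 3)   # 3 nearest neighbors per query under Jaccard distance
print(D[0], I[0])
```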