Vector Search & LLM Integration #5552
This is a proof of concept for a degree of natively-integrated LLM functionality.
This manages vector indexing, searching and LLM querying. Vectors are stored and queried in the local BookStack database, while an external LLM service (OpenAI, Ollama etc.) is queried for embeddings and for query execution, with the locally found vector-based results used as context.
The result is an LLM-generated response to the user's query, with a list of the relevant BookStack content that was used for the query provided as reference.
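For illustration, here's a minimal standalone sketch of that flow, assuming OpenAI as the external service. The helper function, the chosen models, and the `$pages` rows are hypothetical stand-ins, not code from this PR; an Ollama-backed setup would follow the same shape with its own endpoints and models.

```php
<?php
// Sketch of: embed query -> rank local vectors -> answer with context.
// Endpoint paths/payloads follow OpenAI's public API; all other names
// here are illustrative placeholders.

function openAiRequest(string $path, array $payload): array
{
    $ch = curl_init('https://api.openai.com/v1/' . $path);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_HTTPHEADER     => [
            'Content-Type: application/json',
            'Authorization: Bearer ' . getenv('OPENAI_API_KEY'),
        ],
        CURLOPT_POSTFIELDS     => json_encode($payload),
    ]);
    $response = (string) curl_exec($ch);
    curl_close($ch);
    return json_decode($response, true);
}

// 1. Get an embedding for the user's query (text-embedding-3-small => 1536 floats).
$query = 'How do I reset a user password?';
$queryVector = openAiRequest('embeddings', [
    'model' => 'text-embedding-3-small',
    'input' => $query,
])['data'][0]['embedding'];

// 2. Rank locally stored content vectors against the query vector.
//    In the real feature these rows would come from the BookStack database.
$pages = []; // each row: ['text' => '...', 'embedding' => [/* 1536 floats */]]
$distance = function (array $a, array $b): float {
    $dot = $na = $nb = 0.0;
    foreach ($a as $i => $v) {
        $dot += $v * $b[$i];
        $na  += $v * $v;
        $nb  += $b[$i] * $b[$i];
    }
    return 1 - ($dot / (sqrt($na) * sqrt($nb))); // cosine distance
};
usort($pages, fn (array $x, array $y) =>
    $distance($queryVector, $x['embedding']) <=> $distance($queryVector, $y['embedding']));
$context = implode("\n\n", array_column(array_slice($pages, 0, 3), 'text'));

// 3. Ask the LLM to answer, providing the matched local content as context.
$answer = openAiRequest('chat/completions', [
    'model'    => 'gpt-4o-mini',
    'messages' => [
        ['role' => 'system', 'content' => "Answer using this BookStack content:\n" . $context],
        ['role' => 'user',   'content' => $query],
    ],
])['choices'][0]['message']['content'];

echo $answer . PHP_EOL;
```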
Issues & Questionables

- Vector sizes differ per embedding model: all-minilm = 384, openai:text-embedding-3-small = 1536, openai:text-embedding-3-large = 3072.
- A vector(1536) MariaDB column seemed to result in 0 hex values. It seems like the column size has to match the embedding size exactly; vectors smaller than the declared size can't be inserted? (See the sketch below.)
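For reference, a minimal sketch of how that size mismatch could be reproduced against MariaDB 11.7+, which introduced the VECTOR column type and the Vec_FromText() function. The table name and connection details are placeholders:

```php
<?php
// Assumes MariaDB 11.7+ with native VECTOR support.
$pdo = new PDO('mysql:host=127.0.0.1;dbname=bookstack', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$pdo->exec('CREATE TABLE IF NOT EXISTS page_vectors (
    page_id INT UNSIGNED NOT NULL,
    embedding VECTOR(1536) NOT NULL
)');

$insert = $pdo->prepare(
    'INSERT INTO page_vectors (page_id, embedding) VALUES (?, Vec_FromText(?))'
);

// A 1536-float vector matches the declared column size and stores cleanly.
$insert->execute([1, json_encode(array_fill(0, 1536, 0.1))]);

// A 384-float vector (e.g. from all-minilm) does not match the declared
// size; this is where the zeroed/invalid stored values were observed.
$insert->execute([2, json_encode(array_fill(0, 384, 0.1))]);
```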
Considerations

Todo