Add Ollama as a Provider, allowing some Features to work with locally hosted LLMs #845
Description of the Change
Ollama allows you to easily run various LLMs on your own computer. This provides a couple of key benefits: there are no API costs, and your data never leaves your machine.
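To make "locally hosted" concrete: Ollama exposes an HTTP API on the machine it runs on (port 11434 by default), so prompts and completions never leave your computer. Below is a minimal sketch of a single completion request; the `generate` helper and the prompt are illustrative, not part of this PR.

```python
import json
import urllib.request

def generate(prompt: str, model: str = "llama3.1") -> str:
    """Send one non-streaming completion request to a local Ollama server."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With "stream": False, Ollama returns a single JSON object whose
        # "response" field holds the full completion text.
        return json.load(resp)["response"]

print(generate("Summarize this post in one sentence: ..."))
```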
This PR integrates Ollama with a number of our existing Features; which Features you can use depends on whether you run a standard model, a vision model, or an embedding model (a rough sketch of each call shape follows).
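The standard-model call looks like the sketch above; vision and embedding models use slightly different request shapes. The following is a rough sketch against Ollama's documented endpoints, with illustrative file names and prompts (the model names match the ones used for testing below):

```python
import base64
import json
import urllib.request

def post(path: str, payload: dict) -> dict:
    """POST a JSON payload to the local Ollama server and decode the reply."""
    req = urllib.request.Request(
        f"http://localhost:11434{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Vision model: pass base64-encoded images alongside the prompt.
with open("photo.jpg", "rb") as f:  # illustrative file name
    image_b64 = base64.b64encode(f.read()).decode()
caption = post("/api/generate", {
    "model": "llava",
    "prompt": "Describe this image in one sentence.",
    "images": [image_b64],
    "stream": False,
})["response"]

# Embedding model: returns a vector instead of text.
vector = post("/api/embeddings", {
    "model": "nomic-embed-text",
    "prompt": "Post content to embed...",
})["embedding"]
print(caption, len(vector))
```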
This allows you to test quite a few of the Features ClassifAI provides without any cost or data concerns. Ollama could also be used on production sites, though it's worth noting there are a few downsides to running these models locally.
That said, there really isn't any reason you couldn't use Ollama as a Provider on an actual production site. You'd just need to ensure that any user who wants to use those Features has Ollama installed and configured on their individual computer.
Closes #772
How to test the Change
Pull the models you want to test:

- `ollama pull llama3.1` as a base LLM
- `ollama pull nomic-embed-text` as the embedding LLM
- `ollama pull llava` as the vision LLM (you can see all available models here)
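Before trying the Features, you can confirm the pulled models are actually available. A minimal sketch, assuming the server is on Ollama's default port:

```python
import json
import urllib.request

# Ollama's /api/tags endpoint lists every locally installed model.
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    installed = {m["name"] for m in json.load(resp)["models"]}

# Pulled models are tagged ":latest" unless you requested a specific tag.
for name in ("llama3.1:latest", "nomic-embed-text:latest", "llava:latest"):
    print(name, "ok" if name in installed else "MISSING")
```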
Changelog Entry
Credits
Props @dkotter
Checklist: