Adds Ollama support #4251

Merged

eamodio merged 1 commit into main on Apr 29, 2025

Conversation

axosoft-ramint
Contributor

Adds support for Ollama as a provider (requires an Ollama server to be configured and running).

Prompts for the server URL when Ollama is chosen or used as a provider and no URL is configured.

Shows messaging when no models are installed.

To use:

  1. Download Ollama: https://ollama.com/download
  2. Install a model from the library: https://ollama.com/library (note: once Ollama is installed and configured, you can run, for example, ollama run llama3.3 from any terminal)
  3. Make sure Ollama is running, then use the "switch AI model" flow, choose Ollama, and enter your server URL (the default is http://localhost:11434 if you're running it locally).
  4. Your installed models should show up in a list (see the sketch below for how they can be queried). Choose one and you're good to go.

Closes #3311

Co-authored-by: Ramin Tadayon <[email protected]>
@eamodio merged commit 3782aad into main on Apr 29, 2025

@eamodio (Member) left a comment

:shipit:

Successfully merging this pull request may close these issues: Local AI providers.