OpenWebUI hangs when used with Ramalama #8802
Comments
In RamaLama we use llama-server from llama.cpp and/or the vLLM server, so being compatible with RamaLama means being compatible with those.
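For reference, llama-server (and vLLM) speak the OpenAI-style API rather than the Ollama one. Below is a minimal sketch of querying such a backend, assuming the port 8080 that RamaLama uses later in this report and the requests package; the model name is illustrative and nothing here is Open WebUI code:

```python
import requests

BASE = "http://localhost:8080"  # port published by `ramalama serve` in this report

# OpenAI-compatible model listing exposed by llama-server
print(requests.get(f"{BASE}/v1/models", timeout=5).json())

# OpenAI-compatible chat completion; the model name is illustrative
resp = requests.post(
    f"{BASE}/v1/chat/completions",
    json={
        "model": "qwen2.5-coder:1.5b",
        "messages": [{"role": "user", "content": "Say hello"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```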
Yeah, this is not so much about it not working with RamaLama, but more about how trying to use a non-Ollama API completely breaks the web UI, and it cannot be recovered without restarting the service. Making the RamaLama/llama-server API actually work would be a separate endeavour :)
One feature that would help this is: in RamaLama, but Open WebUI should not require this; it should also work with popular OpenAI API-compatible servers like llama-server from llama.cpp and vLLM. Unless of course the Open WebUI group is happy to be an Ollama-specific tool. 😄
They do not offer Ollama-compatible API endpoints.
Correct, but then your software should report an error, not break completely ;)
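To illustrate the behaviour being asked for here, a minimal sketch (not Open WebUI's actual code) of probing a configured backend with a timeout and turning failures into an error message instead of an unrecoverable hang:

```python
import requests

def probe_ollama_backend(base_url: str, timeout: float = 5.0) -> tuple[bool, str]:
    """Return (ok, message) instead of raising, so a broken or
    non-Ollama endpoint degrades to an error in the UI rather than a hang."""
    try:
        # Ollama lists local models at /api/tags; a llama-server backend
        # will typically answer 404 here.
        r = requests.get(f"{base_url}/api/tags", timeout=timeout)
        if r.status_code != 200:
            return False, f"Backend answered HTTP {r.status_code}; not an Ollama API?"
        return True, "OK"
    except requests.RequestException as exc:
        return False, f"Backend unreachable: {exc}"

print(probe_ollama_backend("http://localhost:8080"))
```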
Bug Report
Installation Method
Installed via Docker (ghcr.io/open-webui/open-webui:main)
Environment
Open WebUI Version: v0.5.4
Ollama (if applicable):
Operating System: Debian 12
Browser (if applicable):
Confirmation:
I have read and followed all the instructions provided in the README.md.
I am on the latest version of both Open WebUI and Ollama.
I have included the browser console logs.
I have included the Docker container logs.
I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.
Expected Behavior:
When there is an error reaching a backend, Open WebUI should still be loadable.
Actual Behavior:
After an error is observed in the Docker logs, the UI never loads again.
Description
Bug Summary:
I've been playing around with https://github.com/containers/ramalama (cc @ericcurtin) and thought I'd try to use ramalama serve with Open WebUI (although probably not a supported combo :) ). After configuring RamaLama as a backend (IIUC ramalama serve uses the llama.cpp HTTP server, which should be compatible with the Ollama API?) and hitting the Manage button, Open WebUI freezes and never loads again. I have to restart the container to get it working.
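On the compatibility question above: llama-server implements the OpenAI-style API, not the Ollama one, so the two expose different paths. A small sketch that probes both API shapes against the port used in the steps below (assuming the requests package; the paths are the standard ones for each server, not verified against this exact setup):

```python
import requests

BASE = "http://localhost:8080"  # ramalama serve from the steps below

# Ollama-native endpoints (what an Ollama connection expects) vs.
# OpenAI-compatible endpoints (what llama-server actually provides).
for path in ("/api/tags", "/api/version", "/v1/models"):
    try:
        r = requests.get(BASE + path, timeout=5)
        print(f"{path}: HTTP {r.status_code}")
    except requests.RequestException as exc:
        print(f"{path}: failed ({exc})")
```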
Reproduction Details
Steps to Reproduce:
1. Run ramalama serve -d qwen2.5-coder:1.5b; this will start a container with the API on port 8080
2. Configure it as a backend in Open WebUI and hit the Manage button
Logs and Screenshots
Browser Console Logs:
Empty
Docker Container Logs:
Screenshots/Screen Recordings (if applicable):
[Attach any relevant screenshots to help illustrate the issue]
Additional Information
[Include any additional details that may help in understanding and reproducing the issue. This could include specific configurations, error messages, or anything else relevant to the bug.]
Note
If the bug report is incomplete or does not follow the provided instructions, it may not be addressed. Please ensure that you have followed the steps outlined in the README.md and troubleshooting.md documents, and provide all necessary information for us to reproduce and address the issue. Thank you!