
OpenWebUI hangs when used with Ramalama #8802

Closed · vpavlin opened this issue Jan 23, 2025 · 5 comments

vpavlin commented Jan 23, 2025

Bug Report


Installation Method

Installed via Docker (ghcr.io/open-webui/open-webui:main)

Environment

  • Open WebUI Version: v0.5.4

  • Ollama (if applicable):

  • Operating System: Debian 12

  • Browser (if applicable):

Confirmation:

  • I have read and followed all the instructions provided in the README.md.

  • I am on the latest version of both Open WebUI and Ollama.

  • I have included the browser console logs.

  • I have included the Docker container logs.

  • I have provided the exact steps to reproduce the bug in the "Steps to Reproduce" section below.

Expected Behavior:

When a backend cannot be reached, Open WebUI should still load

Actual Behavior:

After the error is observed in the Docker logs, the UI never loads again

Description

Bug Summary:
I've been playing around with https://github.com/containers/ramalama (cc @ericcurtin) and thought I'd try to use ramalama serve with Open WebUI (although it's probably not a supported combo :))

After configuring Ramalama as a backend (IIUC ramalama serve uses the llama.cpp HTTP server, which should be compatible with the Ollama API?) and hitting the Manage button, Open WebUI freezes and never loads again. I have to restart the container to get it working.
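
As a quick sanity check of that compatibility question, a minimal probe sketch (assuming the ramalama server is listening on localhost:8080, as in the steps below) can ask for the Ollama-style /api/tags listing and the OpenAI-style /v1/models listing that llama-server exposes:

```python
# Quick probe of which API flavour the ramalama endpoint actually speaks.
# Assumes the server started by `ramalama serve` is listening on localhost:8080.
import urllib.error
import urllib.request

BASE = "http://localhost:8080"

def probe(path: str) -> None:
    try:
        with urllib.request.urlopen(BASE + path, timeout=5) as resp:
            ctype = resp.headers.get("Content-Type", "")
            print(f"{path}: HTTP {resp.status}, Content-Type: {ctype}")
            if "json" not in ctype:
                print("  -> not JSON; an Ollama client would choke on this")
    except urllib.error.URLError as e:
        print(f"{path}: request failed ({e})")

probe("/api/tags")   # Ollama-style model listing
probe("/v1/models")  # OpenAI-style model listing (what llama-server serves)
```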

Reproduction Details

Steps to Reproduce:

  1. Run open-webui
  2. Run ramalama serve -d qwen2.5-coder:1.5b; this starts a container serving the API on port 8080
  3. Add the ramalama endpoint to Open WebUI as an Ollama instance
  4. Hit the Manage button
  5. No error is shown, but the UI is no longer loadable

Logs and Screenshots

Browser Console Logs:
Empty

Docker Container Logs:

...
Fetching 30 files: 100%|██████████| 30/30 [00:00<00:00, 52582.16it/s]
INFO:     Started server process [1]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
INFO:     192.168.0.122:44760 - "GET /admin/settings HTTP/1.1" 200 OK
INFO:     192.168.0.122:44760 - "GET /api/config HTTP/1.1" 200 OK
INFO:     ('192.168.0.122', 44770) - "WebSocket /ws/socket.io/?EIO=4&transport=websocket" [accepted]
INFO:     connection open
INFO:     192.168.0.122:44760 - "GET /api/v1/auths/ HTTP/1.1" 200 OK
INFO:     192.168.0.122:44760 - "GET /api/config HTTP/1.1" 200 OK
INFO:     192.168.0.122:44760 - "GET /api/changelog HTTP/1.1" 200 OK
INFO:     192.168.0.122:44760 - "GET /api/v1/users/user/settings HTTP/1.1" 200 OK
INFO  [open_webui.routers.openai] get_all_models()
INFO  [open_webui.routers.ollama] get_all_models()
INFO:     127.0.0.1:54776 - "GET /api/tags HTTP/1.1" 200 OK
ERROR [open_webui.routers.ollama] Connection error: 200, message='Attempt to decode JSON with unexpected mimetype: text/html; charset=utf-8', url='http://localhost:8080/api/tags'
INFO:     192.168.0.122:44760 - "GET /api/models HTTP/1.1" 200 OK
INFO:     192.168.0.122:44760 - "GET /api/v1/configs/banners HTTP/1.1" 200 OK
INFO:     192.168.0.122:44760 - "GET /api/v1/tools/ HTTP/1.1" 200 OK
INFO:     192.168.0.122:44752 - "GET /api/v1/channels/ HTTP/1.1" 200 OK
INFO:     192.168.0.122:44752 - "GET /api/v1/auths/admin/config HTTP/1.1" 200 OK
INFO:     192.168.0.122:44784 - "GET /api/webhook HTTP/1.1" 200 OK
INFO:     192.168.0.122:44796 - "GET /api/v1/auths/admin/config/ldap/server HTTP/1.1" 200 OK
INFO:     192.168.0.122:44774 - "GET /api/v1/chats/all/tags HTTP/1.1" 200 OK
INFO:     192.168.0.122:44774 - "GET /api/v1/auths/admin/config/ldap HTTP/1.1" 200 OK
INFO:     192.168.0.122:44796 - "GET /api/v1/chats/pinned HTTP/1.1" 200 OK
INFO:     192.168.0.122:44796 - "GET /api/v1/folders/ HTTP/1.1" 200 OK
INFO:     192.168.0.122:44774 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
INFO:     192.168.0.122:44760 - "GET /api/version/updates HTTP/1.1" 200 OK
INFO:     192.168.0.122:44760 - "GET /api/v1/chats/?page=2 HTTP/1.1" 200 OK
INFO:     192.168.0.122:44760 - "GET /ollama/config HTTP/1.1" 200 OK
INFO:     192.168.0.122:44774 - "GET /openai/config HTTP/1.1" 200 OK

Screenshots/Screen Recordings (if applicable):
[Attach any relevant screenshots to help illustrate the issue]

Additional Information

[Include any additional details that may help in understanding and reproducing the issue. This could include specific configurations, error messages, or anything else relevant to the bug.]

Note

If the bug report is incomplete or does not follow the provided instructions, it may not be addressed. Please ensure that you have followed the steps outlined in the README.md and troubleshooting.md documents, and provide all necessary information for us to reproduce and address the issue. Thank you!

@ericcurtin

In RamaLama we use llama-server from llama.cpp and/or the vLLM server, so being compatible with RamaLama means being compatible with those.
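
For reference, llama-server (and vLLM) expose OpenAI-style /v1 routes, so the same ramalama endpoint can be exercised with any OpenAI client. A minimal sketch, assuming the server from the reproduction steps is still on localhost:8080 and serving qwen2.5-coder:1.5b:

```python
# Sketch: talking to the ramalama/llama-server endpoint as an OpenAI-compatible
# API (the /v1 routes) rather than as an Ollama API. Assumes localhost:8080.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

print([m.id for m in client.models.list()])  # should list the served model

reply = client.chat.completions.create(
    model="qwen2.5-coder:1.5b",  # model name as served; may differ per setup
    messages=[{"role": "user", "content": "Say hello"}],
)
print(reply.choices[0].message.content)
```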


vpavlin commented Jan 23, 2025

Yeah, this is not so much about it not working with Ramalama; it's more that trying to use a non-Ollama API completely breaks the web UI, and it cannot be recovered without restarting the service.

Making the Ramalama/llama-server API actually work would be a separate endeavour :)
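
For context, the ERROR in the container logs above is aiohttp's ContentTypeError: the configured URL answered /api/tags with text/html instead of JSON. Below is a minimal sketch of the kind of probe that could surface this as an error while letting the UI keep running; this is not Open WebUI's actual code, and the endpoint is assumed to be localhost:8080:

```python
# Sketch (not Open WebUI's actual code) of a backend probe that degrades
# gracefully when the endpoint answers with HTML instead of JSON.
import asyncio
import aiohttp

async def list_ollama_models(base_url: str):
    try:
        async with aiohttp.ClientSession() as session:
            async with session.get(
                f"{base_url}/api/tags", timeout=aiohttp.ClientTimeout(total=5)
            ) as resp:
                # resp.json() raises ContentTypeError on e.g. text/html, which is
                # the "unexpected mimetype" error seen in the container logs.
                data = await resp.json()
                return data.get("models", [])
    except (aiohttp.ClientError, asyncio.TimeoutError) as e:
        # Report and keep going; the UI should still render without this backend.
        print(f"{base_url} does not look like an Ollama API: {e}")
        return None

if __name__ == "__main__":
    print(asyncio.run(list_ollama_models("http://localhost:8080")))
```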

@ericcurtin

One feature that would help with this is:

containers/ramalama#598

in RamaLama, but Open WebUI should not require it; Open WebUI should also work with popular OpenAI-compatible servers such as llama-server from llama.cpp and vLLM.

Unless, of course, the Open WebUI group is happy for it to be an Ollama-specific tool. 😄


tjbck commented Jan 23, 2025

They do not offer Ollama-compatible API endpoints.

tjbck closed this as completed Jan 23, 2025

vpavlin commented Jan 27, 2025

Correct, but then your software should report an error, not break completely ;)
