
[Bug] Ollama integration not working. #5610

Closed
ZaEvab55555 opened this issue Jan 28, 2025 · 7 comments · Fixed by #5665
Labels
🐛 Bug Something isn't working | 缺陷 · ollama Relative to Ollama Provider and ollama models · released

Comments

@ZaEvab55555

📦 Environment

Docker

📌 Version

Latest Docker, Windows 11

💻 Operating System

Windows

🌐 Browser

Other

🐛 Bug Description

When I try to use Lobe Chat with Ollama, it tells me:

```json
{
  "error": {
    "message": "fetch failed",
    "name": "TypeError"
  },
  "provider": "ollama"
}
```
Why does it do that?

📷 Recurrence Steps

I start Ollama and the Lobe Chat server, then navigate to the local server and try to chat.

🚦 Expected Behavior

I expected it to respond to my prompt of "What is lobechat?".

📝 Additional Information

No response

@ZaEvab55555 ZaEvab55555 added the 🐛 Bug Something isn't working | 缺陷 label Jan 28, 2025
@github-project-automation github-project-automation bot moved this to Roadmap - Chat 1.x in Lobe Chat Routine Jan 28, 2025
@lobehubbot
Member

👀 @ZaEvab55555

Thank you for raising an issue. We will investigate into the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.


dosubot bot commented Jan 28, 2025

Hey @ZaEvab55555! I'm here to help you with any bugs, questions, or contributions. Let's tackle this issue together!

The "fetch failed" TypeError you're encountering with the Ollama integration in Lobe Chat on Windows 11 using Docker can be resolved by enabling the client request mode in LobeChat. When this mode is disabled, the Ollama model configuration will fail with a 'fetch failed' TypeError. Make sure that client request mode is enabled in your LobeChat configuration to fix the issue [1].


@dosubot dosubot bot added the ollama Relative to Ollama Provider and ollama models label Jan 28, 2025

@ZaEvab55555
Author

@dosu what about this error?

```json
{
  "error": {
    "message": "[GoogleGenerativeAI Error]: Error fetching from https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-thinking-exp-01-21:streamGenerateContent?alt=sse: [400 ] Function calling is not enabled for models/gemini-2.0-flash-thinking-exp-01-21"
  },
  "provider": "google"
}
```


dosubot bot commented Jan 28, 2025

The error message indicates that function calling is not enabled for the specified Google Generative AI model. To enable function calling for Google Generative AI models in Lobe Chat, ensure that the model's description includes 'function calling' or 'tools'. This is checked in the transformModel method of the LobeHigressAI class, which enables function calling based on the model's description [1][2].
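
In code terms, the check described above amounts to something like the sketch below (identifiers are illustrative, not the actual LobeChat source):

```ts
// Illustrative version of a description-based capability check: a model is
// treated as supporting function calling only when its description mentions
// it, so tool definitions are never sent to models that reject them.
interface ModelCard {
  id: string;
  description?: string;
}

function supportsFunctionCall(model: ModelCard): boolean {
  const desc = (model.description ?? '').toLowerCase();
  return desc.includes('function calling') || desc.includes('tools');
}

// gemini-2.0-flash-thinking-exp-01-21 returns the 400 above when tools are
// included, so a request builder would strip them first:
const model: ModelCard = {
  id: 'gemini-2.0-flash-thinking-exp-01-21',
  description: 'Experimental reasoning model', // no mention of tools
};
console.log(supportsFunctionCall(model)); // false → omit tools from the request
```

In practice, disabling plugins/tool use for this model (or switching to a Gemini model that supports function calling) should avoid the 400.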


@lobehubbot
Member

@ZaEvab55555

This issue is closed. If you have any questions, you can comment and reply.

@lobehubbot
Member

🎉 This issue has been resolved in version 1.49.11 🎉

The release is available on:

Your semantic-release bot 📦🚀
