I have searched the existing issues and this bug is not already filed.
I believe this is a legitimate bug, not just a question or feature request.
Describe the bug
I deployed LightRAG following the official steps, but with Python 3.10 and local Ollama models, the generated knowledge graph comes out incomplete. What should I do?
My operating system is macOS, and the local models are gemma2:2b and nomic-embed-text:latest.
Steps to reproduce
Follow the setup shown in the official video.
Expected Behavior
No response
LightRAG Config Used
Paste your config here
Logs and screenshots
No response
Additional Information
LightRAG Version:
Operating System:
Python Version:
Related Issues:
I had a similar issue. I was able to reduce 'orphan' entities by using a better LLM and by reducing the chunk size to a value between 200 and 400 and the overlap to a value between 20 and 100.
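To illustrate what the chunk-size/overlap tuning above does to the text before extraction, here is a minimal sliding-window chunker sketch. This is a generic illustration, not LightRAG's internal implementation; the function name and the token representation (a plain list) are assumptions for demonstration only.

```python
def chunk_tokens(tokens, chunk_size=300, overlap=50):
    """Split a token sequence into overlapping chunks (sliding window).

    chunk_size: tokens per chunk (e.g. 200-400 as suggested above)
    overlap:    tokens shared between consecutive chunks (e.g. 20-100)
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break  # final window already covers the tail
    return chunks

# Example: 1000 tokens, 300-token chunks, 50-token overlap -> 4 chunks,
# each consecutive pair sharing its last/first 50 tokens.
chunks = chunk_tokens(list(range(1000)), chunk_size=300, overlap=50)
print(len(chunks))  # → 4
```

Smaller chunks with some overlap give a weak local model less text to process per extraction call, which is why it tends to miss fewer entities and relations; the overlap keeps entities that straddle a chunk boundary visible in both neighboring chunks.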
I am facing a similar issue. Which better LLM are you referring to, @BireleyX? I have this set up on a Mac mini M4 (2024 model) with 16 GB RAM. I am also using the local models gemma2:2b and nomic-embed-text:latest.