added possible issues and solutions #184
Conversation
Walkthrough

Adds an "Issue You may face" troubleshooting section to rag-with-dockling/README.md documenting two issues (a pickle error with Streamlit caching, and app slowness due to memory pressure) with explanations and code snippets. No code or API changes.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes
Actionable comments posted: 0
🧹 Nitpick comments (5)
rag-with-dockling/README.md (5)
22-22: Pluralize and normalize the heading capitalization

Improves readability and polish:

```diff
-## 🐞 Issue You may face
+## 🐞 Issues you may face
```
23-30: Make the Streamlit workaround snippet copy-pasteable

Add explicit imports so readers can drop this into their app without guessing dependencies.

````diff
-> ```python
-> if "llm_client" not in st.session_state:
->     st.session_state.llm_client = Ollama(model="llama3.2")
-> ```
+> ```python
+> import streamlit as st
+> from llama_index.llms.ollama import Ollama
+>
+> if "llm_client" not in st.session_state:
+>     st.session_state.llm_client = Ollama(model="llama3.2")
+> ```
````

Optional: you could also add a one-liner that clarifies the root cause, e.g., "Some client objects aren't picklable; avoid caching them with `@st.cache_resource` and prefer session state."
32-33: Fix grammar and casing in the section title and text

Minor grammar improvements and brand casing:

```diff
-### 2. App Extremely Slow (incase Used another Large Model)
-This often happens due to high memory usage. `ollama` was observed using 10+ GB RAM on an 8 GB Mac, leading to heavy swapping. Large embedding models like `bge-large-en-v1.5` also consume significant memory.
+### 2. App is extremely slow (in case you used a large model)
+This often happens due to high memory usage. `Ollama` was observed using 10+ GB RAM on an 8 GB Mac, leading to heavy swapping. Large embedding models like `bge-large-en-v1.5` also consume significant memory.
```
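To make the sizing advice concrete: a rough lower bound on model RAM is parameters × bits-per-weight ÷ 8, ignoring KV cache and runtime overhead. A back-of-envelope sketch (the figures are estimates, not measurements):

```python
def approx_model_ram_gib(params_billion: float, bits_per_weight: float) -> float:
    """Rough lower bound on RAM for model weights alone (ignores KV cache/overhead)."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

# A 7B model at ~4.5 bits/weight (q4_K_M-style quantization) needs ~3.7 GiB
# just for weights; the same model at fp16 needs ~13 GiB, which alone
# exceeds what an 8 GB machine can comfortably spare.
for name, params, bits in [("7B q4_K_M", 7, 4.5), ("7B fp16", 7, 16), ("1.5B q4", 1.5, 4.5)]:
    print(f"{name}: ~{approx_model_ram_gib(params, bits):.1f} GiB")
```

This is why the quantized small models suggested below fit on 8 GB Macs while larger or unquantized ones push the system into swap.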
37-39: Fix nested list indentation to satisfy markdownlint (MD007)

Indent nested bullets by two spaces (not four):

```diff
->     * `qwen2:1.5b`
->     * `llama3.2:1b`
->     * `mistral:7b-instruct-q4_K_M`
+>   * `qwen2:1.5b`
+>   * `llama3.2:1b`
+>   * `mistral:7b-instruct-q4_K_M`
```
35-44: Optional additions to help reduce memory pressure further

Consider adding one more tip that often helps users:

````diff
 > ✅ **Solution**:
 > * **Use smaller Ollama models**:
 >   * `qwen2:1.5b`
 >   * `llama3.2:1b`
 >   * `mistral:7b-instruct-q4_K_M`
 > * **Use smaller embeddings**:
 >   ```python
 >   HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
 >   ```
+> * **Prefer quantized variants and smaller context windows** (e.g., `q4_K_M`; reduce `context_window` where supported) to cut peak RAM usage.
````
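The smaller-embeddings suggestion can be applied app-wide in a LlamaIndex project via the global `Settings` object. A sketch, assuming the `llama-index` HuggingFace embedding integration is installed:

```python
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# bge-small-en-v1.5 (~33M params) is roughly a tenth the size of
# bge-large-en-v1.5 (~335M params), at a modest cost in retrieval quality.
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
```

Any index built after this point picks up the smaller embedding model without further per-call configuration.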
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- rag-with-dockling/README.md (1 hunks)
🧰 Additional context used
🪛 LanguageTool
rag-with-dockling/README.md
- [grammar] ~23-~23: There might be a mistake here. Context: "...e You may face ### 1. Pickle Error with @st.cache_resource The app might fail with: `An error occur..." (QB_NEW_EN)
- [grammar] ~32-~32: There might be a mistake here. Context: "...y Slow (incase Used another Large Model) This often happens due to high memory us..." (QB_NEW_EN)
- [grammar] ~35-~35: There might be a mistake here. Context: "...e significant memory. > ✅ Solution: > * Use smaller Ollama models: > * `qw..." (QB_NEW_EN)
🪛 markdownlint-cli2 (0.17.2)
rag-with-dockling/README.md
- 37-37: Unordered list indentation. Expected: 2; Actual: 4 (MD007, ul-indent)
- 38-38: Unordered list indentation. Expected: 2; Actual: 4 (MD007, ul-indent)
- 39-39: Unordered list indentation. Expected: 2; Actual: 4 (MD007, ul-indent)
🔇 Additional comments (1)
rag-with-dockling/README.md (1)
22-44: Nice, pragmatic troubleshooting section

The two issues you documented are common and your remedies are practical. These will save users time.