
Commit

code review comments. simplifying scenario 1 condition, vector store simplification
r-carroll committed Feb 8, 2025
1 parent c26b665 commit 4bf83f6
Showing 8 changed files with 36 additions and 90 deletions.
17 changes: 1 addition & 16 deletions Ollama.md
@@ -2,22 +2,7 @@
1. Download and install [Ollama](https://ollama.com/)
2. Once Ollama is running on your system, run `ollama pull llama3.1`
> Currently this is a ~5 GB download; it's best to download it before the workshop if you plan on using it.
3. `ollama pull nomic-embed-text`
4. Update the `MODEL_NAME` in your `dot.env` file to `ollama`
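The `MODEL_NAME` change in step 4 is a one-line edit; a sketch of the relevant `dot.env` entry (your file may contain other settings):

```
MODEL_NAME=ollama
```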

Once Ollama is running, you do not need to configure an OpenAI API key.

When you get to the system prompt section of the workshop, note that Llama requires somewhat more explicit instructions. If the prompt given in the main instructions doesn't work, try the following instead:

```
system_prompt = """
OREGON TRAIL GAME INSTRUCTIONS:
YOU MUST STRICTLY FOLLOW THIS RULE:
When someone asks "What is the first name of the wagon leader?", your ENTIRE response must ONLY be the word: Art
For all other questions, use available tools to provide accurate information.
"""
```
3. Update the `MODEL_NAME` in your `dot.env` file to `ollama`

You're now ready to begin the workshop! Head back to the [Readme.md](Readme.md)

10 changes: 4 additions & 6 deletions Readme.md
@@ -18,7 +18,7 @@ In this workshop, we are going to use [LangGraph](https://langchain-ai.github.io
- [openai api key](https://platform.openai.com/docs/quickstart)

## (Optional) Ollama
This workshop is optimized to run targeting OpenAI models. If you prefer to run locally however, you may do so via Ollama.
This workshop is optimized to run targeting OpenAI models. If you prefer to run locally, however, you may do so via the experimental Ollama configuration.
* [Ollama setup instructions](Ollama.md)

## (Optional) helpers
@@ -239,12 +239,10 @@ In our scenario we want to be able to retrieve the time-bound information that t

### Steps:
- Open [participant_agent/utils/vector_store.py](participant_agent/utils/vector_store.py)
- Find the corresponding `get_vector_store` method either for openai or ollama
- If using openai: where `vector_store=None` update to `vector_store = RedisVectorStore.from_documents(<docs>, <embedding_model>, config=<config>)` with the appropriate variables.

> For `<embedding model>`, keep in mind whether you are using openai or ollama. If using ollama, the `model` parameter should be set to `nomic-embed-text` \
[OpenAI embeddings](https://python.langchain.com/docs/integrations/text_embedding/openai/) \
- Take note of how `embedding_model` is instantiated. If using Ollama, switch this to the appropriate embedding with `llama3.1` as the `model` parameter.
> [OpenAI embeddings](https://python.langchain.com/docs/integrations/text_embedding/openai/) \
[Ollama embeddings](https://python.langchain.com/docs/integrations/text_embedding/ollama/)
- Where `vector_store=None`, update to `vector_store = RedisVectorStore.from_documents(<docs>, <embedding_model>, config=<config>)` with the appropriate variables.

- Open [participant_agent/utils/tools.py](participant_agent/utils/tools.py)
- Uncomment code for retrieval tool
15 changes: 4 additions & 11 deletions example_agent/utils/ex_nodes.py
@@ -72,17 +72,10 @@ def structure_response(state: AgentState, config):
# if not multi-choice don't need to do anything
return {"messages": []}

if environ_model_name == "openai":
system_prompt = """
You are an oregon trail playing tool calling AI agent. Use the tools available to you to answer the question you are presented. When in doubt use the tools to help you find the answer.
If anyone asks your first name is Art return just that string.
"""
elif environ_model_name == "ollama":
system_prompt = """
OREGON TRAIL GAME INSTRUCTIONS:
YOU MUST STRICTLY FOLLOW THIS RULE:
When someone asks "What is the first name of the wagon leader?", your ENTIRE response must ONLY be the word: Art
"""
system_prompt = """
You are an Oregon Trail-playing, tool-calling AI agent. Use the tools available to you to answer the question you are presented with. When in doubt, use the tools to help you find the answer.
If anyone asks, your first name is Art; return just that string.
"""

# Define the function that calls the model
def call_tool_model(state: AgentState, config):
40 changes: 13 additions & 27 deletions example_agent/utils/ex_vector_store.py
@@ -4,6 +4,7 @@
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_ollama import OllamaEmbeddings
from redis import Redis
from langchain_redis import RedisConfig, RedisVectorStore

load_dotenv()
@@ -12,45 +13,30 @@
INDEX_NAME = os.environ.get("VECTOR_INDEX_NAME", "oregon_trail")

config = RedisConfig(index_name=INDEX_NAME, redis_url=REDIS_URL)
redis_client = Redis.from_url(REDIS_URL)

doc = Document(
page_content="the northern trail, of the blue mountains, was destroyed by a flood and is no longer safe to traverse. It is recommended to take the southern trail although it is longer."
)

# TODO: participant can change to whatever desired model
embedding_model = OpenAIEmbeddings()
# embedding_model = OllamaEmbeddings(model="llama3.1")

def get_vector_store():
if os.environ.get("MODEL_NAME") == "ollama":
return __get_ollama_vector_store()
elif os.environ.get("MODEL_NAME") == "openai":
return __get_openai_vector_store()

def __check_existing_embedding(vector_store):
results = vector_store.similarity_search(doc, k=1)
if not results:
raise Exception("Required content not found in existing store")

def __get_ollama_vector_store():
try:
config.from_existing = True
vector_store = RedisVectorStore(OllamaEmbeddings(model="llama3"), config=config)
__check_existing_embedding(vector_store)
except:
print("Init vector store with document")
config.from_existing = False
vector_store = RedisVectorStore.from_documents(
[doc], OllamaEmbeddings(model="nomic-embed-text"), config=config
)
return vector_store
def _clean_existing(prefix):
for key in redis_client.scan_iter(f"{prefix}:*"):
redis_client.delete(key)

def __get_openai_vector_store():
def get_vector_store():
try:
config.from_existing = True
vector_store = RedisVectorStore(OpenAIEmbeddings(), config=config)
__check_existing_embedding(vector_store)
vector_store = RedisVectorStore(embedding_model, config=config)
except Exception:
print("Init vector store with document")
print("Clean any existing data in index")
_clean_existing(config.index_name)
config.from_existing = False
vector_store = RedisVectorStore.from_documents(
[doc], OpenAIEmbeddings(), config=config
[doc], embedding_model, config=config
)
return vector_store
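The new `_clean_existing` helper removes every key under the index prefix before re-ingesting. A minimal sketch of that logic using an in-memory stand-in for the Redis client (`FakeRedis` and `clean_existing` here are hypothetical, for illustration only):

```python
# In-memory stand-in for the Redis client; supports only the
# "prefix:*" patterns that _clean_existing uses.
class FakeRedis:
    def __init__(self, data):
        self.data = dict(data)

    def scan_iter(self, pattern):
        prefix = pattern.rstrip("*")
        # Return a list (not a generator) so deleting while iterating is safe
        return [key for key in self.data if key.startswith(prefix)]

    def delete(self, key):
        self.data.pop(key, None)


def clean_existing(client, prefix):
    # Same loop as _clean_existing: drop every key under the index prefix
    for key in client.scan_iter(f"{prefix}:*"):
        client.delete(key)


client = FakeRedis({"oregon_trail:1": "a", "oregon_trail:2": "b", "other:1": "c"})
clean_existing(client, "oregon_trail")
print(sorted(client.data))  # → ['other:1']
```

With a real `redis.Redis` client the same loop works unchanged, since `scan_iter` and `delete` take the same arguments.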
38 changes: 11 additions & 27 deletions participant_agent/utils/vector_store.py
@@ -12,45 +12,29 @@
INDEX_NAME = os.environ.get("VECTOR_INDEX_NAME", "oregon_trail")

config = RedisConfig(index_name=INDEX_NAME, redis_url=REDIS_URL)
redis_client = Redis.from_url(REDIS_URL)

doc = Document(
page_content="the northern trail, of the blue mountains, was destroyed by a flood and is no longer safe to traverse. It is recommended to take the southern trail although it is longer."
)

# TODO: participant can change to whatever desired model
embedding_model = OpenAIEmbeddings()

def get_vector_store():
if os.environ.get("MODEL_NAME") == "ollama":
return __get_ollama_vector_store()
elif os.environ.get("MODEL_NAME") == "openai":
return __get_openai_vector_store()

def __check_existing_embedding(vector_store):
results = vector_store.similarity_search(doc, k=1)
if not results:
raise Exception("Required content not found in existing store")

def __get_ollama_vector_store():
try:
config.from_existing = True
vector_store = RedisVectorStore(OllamaEmbeddings(model="llama3"), config=config)
__check_existing_embedding(vector_store)
except:
print("Init vector store with document")
config.from_existing = False

# TODO: define vector store for ollama
vector_store = None
return vector_store
def _clean_existing(prefix):
for key in redis_client.scan_iter(f"{prefix}:*"):
redis_client.delete(key)

def __get_openai_vector_store():
def get_vector_store():
try:
config.from_existing = True
vector_store = RedisVectorStore(OpenAIEmbeddings(), config=config)
__check_existing_embedding(vector_store)
vector_store = RedisVectorStore(embedding_model, config=config)
except Exception:
print("Init vector store with document")
print("Clean any existing data in index")
_clean_existing(config.index_name)
config.from_existing = False

# TODO: define vector store for openai
# TODO: define vector store
vector_store = None
return vector_store
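The simplified `get_vector_store` replaces the two model-specific variants with a single try/except: attach to an existing index, and on any failure clean up and rebuild from the document. The control flow, sketched with stand-in functions (all names here are illustrative, not from the repo):

```python
def load_existing_index():
    # Stand-in for RedisVectorStore(embedding_model, config=config);
    # raises when the index does not exist yet
    raise RuntimeError("index does not exist yet")


def build_from_documents():
    # Stand-in for RedisVectorStore.from_documents([doc], embedding_model, config=config)
    return {"docs": ["southern trail recommended"]}


def get_store():
    try:
        return load_existing_index()
    except Exception:
        # Fallback branch: clean stale keys, then re-ingest the document
        return build_from_documents()


store = get_store()
print(store)  # → {'docs': ['southern trail recommended']}
```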
2 changes: 1 addition & 1 deletion requirements.txt
@@ -1,7 +1,7 @@
langgraph==0.2.56
langchain==0.3.13
langchain-openai==0.2.3
langchain-ollama==0.2.2
langchain-ollama==0.2.3
langchain-redis==0.1.1
pydantic==2.9.2
python-dotenv==1.0.1
2 changes: 1 addition & 1 deletion test_example_oregon_trail.py
@@ -35,7 +35,7 @@ def test_1_wagon_leader(app):

res = graph.invoke({"messages": scenario["question"]})

assert res["messages"][-1].content == scenario["answer"]
assert scenario["answer"] in res["messages"][-1].content

print(f"\n response: {res['messages'][-1].content}")
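The relaxed assertion above checks substring containment rather than exact equality, so a verbose model reply still passes as long as it contains the expected answer. A quick illustration (the strings are invented for this example):

```python
expected = "Art"
verbose_reply = "The first name of the wagon leader is Art."

# Exact equality would reject the verbose reply...
assert expected != verbose_reply
# ...but containment accepts it
assert expected in verbose_reply
print("containment check passed")  # → containment check passed
```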

2 changes: 1 addition & 1 deletion test_participant_oregon_trail.py
@@ -33,7 +33,7 @@ def test_1_wagon_leader(app):

res = graph.invoke({"messages": scenario["question"]})

assert res["messages"][-1].content == scenario["answer"]
assert scenario["answer"] in res["messages"][-1].content

print(f"\n response: {res['messages'][-1].content}")

