Commit

Merge pull request #5 from redis-developer/feat/improve-conditional-logic

Feat/improve conditional logic
rbs333 authored Jan 24, 2025
2 parents edeed3f + 87da881 commit ccd0508
Showing 8 changed files with 64 additions and 48 deletions.
Binary file added .DS_Store
Binary file not shown.
27 changes: 14 additions & 13 deletions Readme.md
@@ -150,15 +150,15 @@ Open [participant_agent/graph.py](./participant_agent/graph.py)

> To see an example of creating a graph and adding a node, see the [LangGraph docs](https://langchain-ai.github.io/langgraph/tutorials/introduction/#part-1-build-a-basic-chatbot)
- Uncomment lines 26-47
- Delete line 48 (graph = None) - this is just a placeholder.
- Uncomment boilerplate (below the first TODO)
- Delete `graph = None` at the bottom of the file - this is just a placeholder.
- Define node 1, the agent, by passing a label `"agent"` and the code to execute at that node `call_tool_model`
- Define node 2, the tool node, by passing the label `"tools"` and the code to be executed at that node `tool_node`
- Set the entrypoint for your graph at `"agent"`
- Add a **conditional edge** with label `"agent"` and function `tools_condition`
- Add a normal edge between `"tools"` and `"agent"`
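The wiring above boils down to a loop between the agent and its tools. As a rough pure-Python sketch of that control flow (no LangGraph required; `call_tool_model`, `tool_node`, and the state dict here are hypothetical stand-ins, not the project's real implementations):

```python
def call_tool_model(state):
    # Pretend the model asks for a tool on the first pass, then answers.
    if not state["tool_results"]:
        return {"next": "tools"}
    return {"next": "end", "answer": "B"}

def tool_node(state):
    # A real tool node would execute the requested tool call here.
    state["tool_results"].append("restock formula result")
    return state

def run_graph(state):
    # Entrypoint is "agent"; a conditional edge routes to "tools" or END,
    # and a normal edge loops "tools" back to "agent".
    while True:
        decision = call_tool_model(state)
        if decision["next"] == "tools":
            state = tool_node(state)
        else:
            return decision["answer"]
```

In the toy setup above, `run_graph({"tool_results": []})` routes agent → tools → agent before returning the final answer, which is exactly the cycle the edges describe.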

Run `test_trail_agent` to see if you pass the first scenario.
Run `test_trail_agent` (if you saved the alias) or `pytest --disable-warnings -vv -rP test_participant_oregon_trail.py` to see if you pass the first scenario.

If you didn't pass the first test, **ask for help!**

@@ -187,27 +187,28 @@ Ex: `restock formula tool used specifically for calculating the amount of food a
At this stage, you may notice that your agent returns a "correct" answer to the question but not in the **format** the test script expects. The test script expects answers to multiple-choice questions to be the single character "A", "B", "C", or "D". This may seem contrived, but in production scenarios agents are often expected to work with existing deterministic systems that require specific schemas. For this reason, LangChain supports calling an LLM `with_structured_output` so that the response conforms to a predictable structure.
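To illustrate why a constrained schema matters, here is a toy sketch in plain Python (no LangChain or pydantic; `MultiChoiceResponse` is a hypothetical stand-in for the model defined in `state.py`):

```python
from dataclasses import dataclass

@dataclass
class MultiChoiceResponse:
    # Mirrors the idea behind with_structured_output: the answer field
    # must be exactly one of the allowed letters, or construction fails.
    answer: str

    def __post_init__(self):
        if self.answer not in {"A", "B", "C", "D"}:
            raise ValueError(f"expected A/B/C/D, got {self.answer!r}")

# A free-form LLM answer like this would fail the test harness...
raw = "The best choice is B, because you should caulk the wagon."
# ...but a structured response is constrained to the expected schema.
structured = MultiChoiceResponse(answer="B")
print(structured.answer)  # B
```

Downstream deterministic code can then rely on `structured.answer` always being a single valid letter instead of parsing free-form prose.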

### Steps:
- Open [participant_agent/utils/state.py](participant_agent/utils/state.py) and uncomment the multi_choice_response attribute on the state parameter. To this point our state has only had one attribute called `messages` but we are adding a specific field that we will add structured outputs to.
- also observe the defined pydantic model in this file for our output
- Open [participant_agent/utils/nodes.py](participant_agent/utils/nodes.py) and pass the pydantic class defined in state to the `with_structured_output` function
- Open [participant_agent/utils/state.py](participant_agent/utils/state.py) and uncomment the multi_choice_response attribute on the state parameter and delete the pass statement. Up to this point our state had only one attribute called `messages` but we are adding a specific field for our structured multi-choice response.
- Also observe the defined `pydantic` model in this file for our output
- Open [participant_agent/utils/nodes.py](participant_agent/utils/nodes.py) and pass the pydantic class defined in state to the `with_structured_output` function.
- Update the graph to support a more advanced flow (see image below)
- Add a node for our `multi_choice_structured`; this takes the messages after our tool calls and uses an LLM to format them as we expect.
- Add a conditional edge after the agent that determines if a multi-choice formatting is appropriate (see example)
- Update the `is_multi_choice` function in the nodes file to return the appropriate strings
- Add an edge that goes from `multi_choice_structured` to `END`
- Add a node called `structure_response` and pass it the `structure_response` function.
- This function determines if the question is multiple choice. If yes, it uses the `with_structured_output` model you updated. If not, it returns directly to END.
- Add a conditional edge utilizing the `should_continue` function defined for you in the file (See example below).
- Finally, add an edge that goes from `structure_response` to `END`
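The routing logic described in these steps can be sketched in plain Python (a simplification that assumes dict-shaped messages; the real functions operate on LangChain message objects):

```python
def should_continue(state):
    # If the last model message requested no tools, go format the final
    # response; otherwise keep looping through the tool node.
    last_message = state["messages"][-1]
    if not last_message.get("tool_calls"):
        return "structure_response"
    return "continue"

def is_multi_choice(state):
    # The first message holds the question; multiple-choice questions
    # include an "options:" section.
    return "options:" in state["messages"][0]["content"].lower()

state = {"messages": [
    {"content": "Which river do you cross? options: A) ... B) ..."},
    {"content": "done", "tool_calls": []},
]}
print(should_continue(state))  # structure_response
print(is_multi_choice(state))  # True
```

The string returned by `should_continue` is what the conditional edge's mapping uses to pick the next node.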

### Conditional edge example:

```python
workflow.add_conditional_edges(
    "agent",
    is_multi_choice,  # function in nodes that returns a string ["multi-choice", "not-multi-choice"]
    {"multi-choice": "multi_choice_structured", "not-multi-choice": END},  # based on the string returned from the function, instructs the graph to route to a given node
    should_continue,
    {"continue": "tools", "structure_response": "structure_response"},
```

### Visual of your updated graph:
![multi_choice](images/multi_choice_graph.png)<br>

![multi_choice](images/multi_graph.png)<br>

Run `test_trail_agent` to see if you pass

38 changes: 18 additions & 20 deletions example_agent/ex_graph.py
@@ -2,16 +2,8 @@

from dotenv import load_dotenv
from langgraph.graph import END, StateGraph
from langgraph.prebuilt import (
    tools_condition,  # checks whether the model requested a tool call
)

from example_agent.utils.ex_nodes import (
    call_tool_model,
    is_multi_choice,
    multi_choice_structured,
    tool_node,
)
from example_agent.utils.ex_nodes import call_tool_model, structure_response, tool_node
from example_agent.utils.ex_state import AgentState

load_dotenv()
@@ -22,35 +14,41 @@ class GraphConfig(TypedDict):
    model_name: Literal["anthropic", "openai"]


# Define the function that determines whether to continue or not
def should_continue(state: AgentState):
    messages = state["messages"]
    last_message = messages[-1]
    # If there is no function call, then we respond to the user
    if not last_message.tool_calls:
        return "structure_response"
    # Otherwise if there is, we continue
    else:
        return "continue"


# Define a new graph
workflow = StateGraph(AgentState, config_schema=GraphConfig)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_tool_model)
# workflow.add_node("respond", respond)
workflow.add_node("tools", tool_node)
workflow.add_node("multi_choice_structured", multi_choice_structured)
workflow.add_node("structure_response", structure_response)

# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.set_entry_point("agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    "agent",
    tools_condition,
)

# We now add a conditional edge between `agent` and `tools`.
workflow.add_conditional_edges(
    "agent",
    is_multi_choice,
    {"multi-choice": "multi_choice_structured", "not-multi-choice": END},
    should_continue,
    {"continue": "tools", "structure_response": "structure_response"},
)

# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge("tools", "agent")
workflow.add_edge("multi_choice_structured", END)
workflow.add_edge("structure_response", END)


# Finally, we compile it!
15 changes: 10 additions & 5 deletions example_agent/utils/ex_nodes.py
@@ -50,17 +50,22 @@ def multi_choice_structured(state: AgentState, config):
}


# Logical function for next step in graph execution
# determine how to structure final response
def is_multi_choice(state: AgentState):
    if "options:" in state["messages"][0].content.lower():
        return "multi-choice"
    return "options:" in state["messages"][0].content.lower()


def structure_response(state: AgentState, config):
    if is_multi_choice(state):
        return multi_choice_structured(state, config)
    else:
        return "not-multi-choice"
    # if not multi-choice we don't need to do anything
    return {"messages": []}


system_prompt = """
You are an Oregon Trail-playing, tool-calling AI agent. Use the tools available to you to answer the question you are presented. When in doubt, use the tools to help you find the answer.
If anyone asks, your first name is Artificial; return just that string.
If anyone asks, your first name is Art; return just that string.
"""


Binary file removed images/multi_choice_graph.png
Binary file not shown.
Binary file added images/multi_graph.png
19 changes: 13 additions & 6 deletions participant_agent/graph.py
@@ -6,12 +6,7 @@
    tools_condition,  # checks whether the model requested a tool call
)

from participant_agent.utils.nodes import (
    call_tool_model,
    is_multi_choice,
    multi_choice_structured,
    tool_node,
)
from participant_agent.utils.nodes import call_tool_model, structure_response, tool_node
from participant_agent.utils.state import AgentState

load_dotenv()
@@ -22,6 +17,18 @@ class GraphConfig(TypedDict):
    model_name: Literal["openai"]  # could add more LLM providers here


# Define the function that determines whether to continue or not
def should_continue(state: AgentState):
    messages = state["messages"]
    last_message = messages[-1]
    # If there is no function call, then we respond to the user
    if not last_message.tool_calls:
        return "structure_response"
    # Otherwise if there is, we continue
    else:
        return "continue"


# TODO: define the graph to be used in testing
# workflow = StateGraph(AgentState, config_schema=GraphConfig)

13 changes: 9 additions & 4 deletions participant_agent/utils/nodes.py
@@ -59,12 +59,17 @@ def multi_choice_structured(state: AgentState, config):
}


# Logical function for next step in graph execution
# determine how to structure final response
def is_multi_choice(state: AgentState):
    if "options:" in state["messages"][0].content.lower():
        return "multi-choice"
    return "options:" in state["messages"][0].content.lower()


def structure_response(state: AgentState, config):
    if is_multi_choice(state):
        return multi_choice_structured(state, config)
    else:
        return "not-multi-choice"
    # if not multi-choice we don't need to do anything
    return {"messages": []}


###
