docs/tutorials/qa_chat_history/ #27956
Replies: 16 comments 12 replies
-
It may be a silly question, but please help me:
-
The import code `vector_store = InMemoryVectorStore(embeddings)` appears twice; I think a "_" was wrongly added, which will cause an import error.
-
Hello!
-
Hi there,
-
Hi
-
The memory saver resends all the previous messages (human and AI) to the LLM so that the LLM has context. Wouldn't it be easier, and consume fewer resources, if we could use the multi-turn chat session from the LLM provider itself, like this: https://ai.google.dev/gemini-api/docs/text-generation?lang=node#chat ?
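The replay behavior described above can be sketched in plain Python. Everything here (`SimpleMemorySaver`, `chat`, `echo_llm`) is a hypothetical stand-in for illustration, not the actual LangGraph checkpointer API:

```python
# Minimal stand-in for a checkpointer that replays the full history per
# thread on every turn. All names are hypothetical, not the langgraph API.

class SimpleMemorySaver:
    def __init__(self):
        self._threads = {}  # thread_id -> list of (role, text) tuples

    def load(self, thread_id):
        return list(self._threads.get(thread_id, []))

    def save(self, thread_id, messages):
        self._threads[thread_id] = list(messages)


def chat(saver, thread_id, user_text, llm):
    history = saver.load(thread_id)      # replay everything stored so far
    history.append(("user", user_text))
    reply = llm(history)                 # the WHOLE history goes to the model
    history.append(("ai", reply))
    saver.save(thread_id, history)
    return reply


# Fake "LLM" that just reports how many messages it was sent.
echo_llm = lambda msgs: f"saw {len(msgs)} messages"

saver = SimpleMemorySaver()
chat(saver, "t1", "hi", echo_llm)            # model sees 1 message
out = chat(saver, "t1", "again", echo_llm)   # model sees 3 (2 prior + 1 new)
print(out)  # -> saw 3 messages
```

Provider-side chat sessions (like the Gemini chat API linked above) keep that history on the server instead, which saves re-upload bandwidth but ties the conversation state to one provider; the client-side replay shown here stays provider-agnostic.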
-
Great tutorials, thanks team. Do the Thanks again.
-
Great tutorials, thanks team. I am a nerd, so I drew a graph to help me understand how it works. Hope it helps you too:
-
Great tutorial, thank you. I am trying the `create_react_agent` method with an Anthropic model (using `ChatAnthropic` from the langchain library). When I ran the message from the tutorial, I received the following error:

== Error message starts ==

But when I tried with OpenAI GPT-4o, it worked just fine. What changes should I make when I try it with an Anthropic model? The code follows:

```python
sonnet_3_5_model = ChatAnthropic(
memory = MemorySaver()
memory_config = {"configurable": {"thread_id": "def234"}}
inputs = {"messages": [("user", "What's your name?")]}
for s in agent_executor.stream(inputs, stream_mode="values", config=memory_config,):
```
-
Hi, may I ask how to fix the error below: `NotImplementedError`. It seems there is a problem with the function `query_or_respond`? I used the same code as this tutorial. Any comments, ideas, or possible solutions are welcome, thanks~
-
The StateGraph generate step produces an empty "AI Message".
^^^ Where is the response? ^^^ I checked the log output of the
The response
-
How would one pass metadata-filtering arguments at input time to this? Basically, I want the user to be able to select a "source" to make the retrieval process more accurate.
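One way to picture the question above, as a plain-Python sketch: the user-selected "source" is threaded into the retrieval call as a metadata filter. The `DOCS` list, `retrieve` function, and `"source"` key here are all hypothetical illustrations, not a specific LangChain retriever API:

```python
# Hypothetical sketch: restrict candidate documents by a user-selected
# "source" before matching. Not a specific LangChain API.

DOCS = [
    {"text": "blog post about agents",   "source": "blog"},
    {"text": "tutorial on chat history", "source": "docs"},
    {"text": "tutorial on retrieval",    "source": "docs"},
]

def retrieve(query, source=None):
    """Return texts whose metadata matches the optional source filter."""
    candidates = [d for d in DOCS if source is None or d["source"] == source]
    # A real retriever would rank candidates by embedding similarity here;
    # this sketch just does a naive substring match.
    return [d["text"] for d in candidates if query in d["text"]]

print(retrieve("tutorial"))                 # both tutorials
print(retrieve("tutorial", source="docs"))  # same two, restricted to docs
print(retrieve("tutorial", source="blog"))  # -> []
```

In a real pipeline the same idea applies: the filter value arrives with the user's input and is forwarded into the vector-store search alongside the query, so filtering happens before (or during) similarity ranking rather than after.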
-
https://api.python.langchain.com/en/latest/tools/langchain.tools.retriever.create_retriever_tool.html makes a reference to this tutorial. How does
-
`AttributeError: module 'mlflow' has no attribute 'trace'`
-
docs/tutorials/qa_chat_history/
This guide assumes familiarity with the following concepts:
https://python.langchain.com/docs/tutorials/qa_chat_history/