# agents: pass config to tools in AgentExecutor #32773
base: master
## Conversation
… chains

This resolves issue langchain-ai#28848, where calling `bind_tools()` on a `RunnableSequence` created by `with_structured_output()` would fail with `AttributeError`. The fix enables combining structured output with tool binding, which is essential for modern AI applications that need both:

- Structured JSON output formatting
- External function-calling capabilities

**Changes:**
- Added a `bind_tools()` method to the `RunnableSequence` class
- The method detects structured-output patterns
- Delegates tool binding to the underlying `ChatModel`
- Preserves the existing sequence structure and behavior
- Added comprehensive unit tests

**Technical Details:**
- Detects 2-step sequences (Model | Parser) produced by `with_structured_output()`
- Binds tools to the first step if it supports `bind_tools()`
- Returns a new `RunnableSequence` with the updated model and the same parser
- Falls back gracefully with helpful error messages

**Impact:**
This enables previously impossible workflows, such as ChatGPT-style apps that need both structured UI responses and tool-calling capabilities.

Fixes langchain-ai#28848

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
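The delegation described in the commit message can be sketched as follows. This is a minimal illustration of the pattern, not the actual LangChain implementation: `Model`, `Parser`, and `Sequence` are simplified stand-ins for `ChatModel`, the output parser, and `RunnableSequence`.

```python
# Illustrative sketch of delegating bind_tools() from a 2-step
# (model | parser) sequence to its underlying model. Class names are
# hypothetical stand-ins, not the real LangChain classes.

class Model:
    def __init__(self, tools=None):
        self.tools = tools or []

    def bind_tools(self, tools):
        # Return a *new* model with the tools attached (immutability).
        return Model(tools=list(tools))

class Parser:
    pass

class Sequence:
    def __init__(self, steps):
        self.steps = steps

    def bind_tools(self, tools):
        # Detect the 2-step (model | parser) shape produced by
        # with_structured_output() and delegate to the first step.
        if len(self.steps) == 2 and hasattr(self.steps[0], "bind_tools"):
            bound_model = self.steps[0].bind_tools(tools)
            # New sequence: updated model + the same parser.
            return Sequence([bound_model, self.steps[1]])
        raise AttributeError(
            "bind_tools is only supported on (model | parser) sequences"
        )

seq = Sequence([Model(), Parser()])
bound_seq = seq.bind_tools(["search"])
```

The key design point from the commit message is preserved here: the original sequence is left untouched, and binding returns a fresh sequence.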
- Remove quoted type annotations
- Fix line length violations
- Remove trailing whitespace
- Use double quotes consistently
- Improve error message formatting for better readability

The S110 warnings about try-except-pass are intentional: we want silent fallback behavior before raising the final helpful error.
…ain-ai#32169)

## **Description:**
This PR updates the internal documentation link for the RAG tutorials to reflect the updated path. Previously, the link pointed to the root `/docs/tutorials/`, which was generic. It now correctly routes to the RAG-specific tutorial page for the following text-embedding models:

1. DatabricksEmbeddings
2. IBM watsonx.ai
3. OpenAIEmbeddings
4. NomicEmbeddings
5. CohereEmbeddings
6. MistralAIEmbeddings
7. FireworksEmbeddings
8. TogetherEmbeddings
9. LindormAIEmbeddings
10. ModelScopeEmbeddings
11. ClovaXEmbeddings
12. NetmindEmbeddings
13. SambaNovaCloudEmbeddings
14. SambaStudioEmbeddings
15. ZhipuAIEmbeddings

## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A
- Replace broad `Exception` catching with specific exceptions (`AttributeError`, `TypeError`, `ValueError`)
- Add proper type annotations to test functions and variables
- Add `type: ignore` comments for dynamic method assignment in tests
- Fix line length violations and formatting issues
- Ensure all MyPy checks pass

All lint checks now pass. The S110 warnings are resolved by using more specific exception handling instead of bare try-except-pass.
getting the latest changes
- Remove test_bind_tools_fix.py
- Remove test_real_example.py
- Remove test_sequence_bind_tools.py

These test files were created during development and should not be in the root directory. The actual fix for issue langchain-ai#28848 (`RunnableSequence.bind_tools`) is already implemented in core.
pulling from the updated branch
- Add a fallback mechanism in `_create_chat_result` to handle cases where the OpenAI client's `model_dump()` returns choices as `None` even when the original response object contains valid choices data
- This resolves `TypeError: 'Received response with null value for choices'` when using vLLM with the LangChain-OpenAI integration
- Add a comprehensive test suite to validate the fix and edge cases
- Maintain backward compatibility for cases where choices are truly unavailable
- Fix addresses GitHub issue langchain-ai#32252

The issue occurred because some OpenAI-compatible APIs like vLLM return valid response objects, but the OpenAI client library's `model_dump()` method sometimes fails to properly serialize the choices field, returning `None` instead of the actual choices array. This fix attempts to access the choices directly from the response object when `model_dump()` fails.
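The fallback described above can be sketched roughly as follows. This is a hedged illustration of the recovery logic, not the actual `_create_chat_result` code; the `Choice` and `Response` stubs below merely mimic the vLLM failure mode where `model_dump()` drops the choices.

```python
# Sketch of the fallback: if model_dump() yields choices=None but the
# live response object still has them, rebuild the choices from the
# response attributes instead of raising immediately.

def extract_choices(response):
    data = response.model_dump()
    if data.get("choices") is None and getattr(response, "choices", None):
        # model_dump() failed to serialize choices; recover them from
        # the response object directly.
        data["choices"] = [c.model_dump() for c in response.choices]
    if data.get("choices") is None:
        # Choices are truly unavailable: keep the original error behavior.
        raise TypeError("Received response with null value for choices")
    return data

# Tiny stand-ins mimicking the buggy serialization path:
class Choice:
    def model_dump(self):
        return {"message": {"content": "hi"}}

class Response:
    choices = [Choice()]
    def model_dump(self):
        return {"choices": None}  # the failure mode seen with vLLM

data = extract_choices(Response())
```

Note how backward compatibility is kept: when the response genuinely has no choices, the original `TypeError` is still raised.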
fix(openai): resolve vLLM compatibility issue with ChatOpenAI (langchain-ai#32252). More details can be read in this thread.
Fixes langchain-ai#32671 by modifying `AgentExecutor` to properly propagate `RunnableConfig` through the entire execution chain to tools.

Changes:
- Enhanced `Chain.invoke()` and `Chain.ainvoke()` to detect and pass the config parameter
- Updated `AgentExecutor._call()` and `_acall()` to accept a config parameter
- Modified all intermediate methods to propagate config: `_take_next_step`, `_atake_next_step`, `_iter_next_step`, `_aiter_next_step`, `_perform_agent_action`, `_aperform_agent_action`
- Added the config parameter to `tool.run()` and `tool.arun()` calls in both sync and async paths
- Added a comprehensive test case to verify config propagation works correctly

The fix ensures tools receive the `RunnableConfig` parameter instead of `None`, enabling configuration-aware tool execution in `AgentExecutor` workflows.
- Fixed a line length issue in `Chain.ainvoke()`
- Updated abstract method signatures to include the config parameter
- Added proper documentation for the config parameter in docstrings
CodSpeed Performance Report: merging #32773 will not alter performance.
….9+ compatibility
## Description

This PR fixes issue #32671 where `RunnableConfig` was not being passed to tools when using `AgentExecutor`. Tools were receiving `None` instead of the proper configuration object, preventing them from accessing important context like session IDs, tags, and metadata.

## Problem

When using `AgentExecutor` with tools that accept a `config` parameter, the tools would always receive `None` instead of the `RunnableConfig` passed to the executor. This prevented configuration-aware tool execution and limited the ability to maintain context across tool calls.

Before:
After:
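A minimal sketch of the behavior change (the original code snippets did not survive extraction, so this is an illustrative stand-in, not the PR's actual example — `my_tool` and the plain-dict config are hypothetical):

```python
# A tool that accepts a config parameter. Before the fix, AgentExecutor
# invoked tools without forwarding its RunnableConfig, so config was None;
# after the fix, the executor's config reaches the tool.

def my_tool(query, config=None):
    # config carries tags/metadata when propagation works.
    tags = (config or {}).get("tags", [])
    return f"query={query} tags={tags}"

# Before: the tool effectively saw no config.
before = my_tool("weather")                                  # tags=[]
# After: the RunnableConfig passed to the executor is forwarded.
after = my_tool("weather", config={"tags": ["session-1"]})   # tags=['session-1']
```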
## Changes

### 1. Enhanced Chain Base Class (`chains/base.py`)

- Updated `invoke()` and `ainvoke()` to detect config parameter support using `inspect.signature`
- Updated `_call()` and `_acall()` to include an optional `config` parameter

### 2. Updated AgentExecutor (`agents/agent.py`)

Added the `config` parameter to all methods in the execution call chain:

- `_call()` and `_acall()`
- `_take_next_step()` and `_atake_next_step()`
- `_iter_next_step()` and `_aiter_next_step()`
- `_perform_agent_action()` and `_aperform_agent_action()`
- `tool.run()` and `tool.arun()`
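The signature-detection step in the base class can be sketched as follows. This is a simplified stand-in for the `Chain` internals (class and function names here are illustrative), showing how `inspect.signature` lets the new `invoke()` forward `config` only to subclasses that accept it, so legacy `_call()` overrides keep working unchanged.

```python
import inspect

def supports_config(fn):
    # Check whether the callable accepts a `config` keyword, either
    # explicitly or via **kwargs.
    params = inspect.signature(fn).parameters
    return "config" in params or any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )

class LegacyChain:
    def _call(self, inputs):
        return {"out": inputs}

class ConfigAwareChain:
    def _call(self, inputs, config=None):
        return {"out": inputs, "config": config}

def invoke(chain, inputs, config=None):
    # Forward config only when the subclass's _call can receive it.
    if supports_config(chain._call):
        return chain._call(inputs, config=config)
    return chain._call(inputs)  # old subclasses are untouched
```

The backward-compatibility property is the point: subclasses that predate the change never see the new keyword argument.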
## Testing

## Impact

## Example Use Cases

This fix enables important use cases like:
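For instance, a session-aware tool (a hypothetical example, not from the PR) can scope its side effects by reading the session ID from the config's metadata, following the `RunnableConfig` dict shape (`{"metadata": ..., "tags": ...}`):

```python
# Hypothetical config-aware tool: stores facts per session, keyed by a
# session_id carried in the RunnableConfig metadata that AgentExecutor
# now propagates to tools.

memory = {}

def remember(fact, config=None):
    session = (config or {}).get("metadata", {}).get("session_id", "anonymous")
    memory.setdefault(session, []).append(fact)
    return f"stored for {session}"

remember("likes tea", config={"metadata": {"session_id": "u42"}})
remember("likes coffee", config={"metadata": {"session_id": "u7"}})
```

Before this fix, both calls would have fallen into the `"anonymous"` bucket because the tool's `config` argument was always `None`.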
## Checklist

Fixes #32671