Enabling async agent creation and using async agent invocation #48


Open

uesleilima wants to merge 4 commits into base: main

Conversation


@uesleilima uesleilima commented May 21, 2025

PR: Major Refactor – Native Async APIs, Dependency Upgrades, and Adapter Modernization

Overview

Implementation of Issue #45

The main driver for this change is to support native async/await throughout the API, improving flexibility and performance and aligning with current patterns in the Langchain and FastAPI ecosystems.


Key Changes

1. Full Async Support for Agent Lifecycle and APIs

  • Introduced async-first APIs for the agent lifecycle (acreate_agent, ainvoke, etc.), FastAPI routers, and key integration points; a minimal sketch follows this list.
  • Every FastAPI endpoint and router now runs natively async.
  • Agent factories, adapters, and model bridges use async execution, making it easier to work with streaming, real-time LLMs, and scalable deployments.
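
A minimal sketch of the async-first shape, for illustration only (the route, the DTO shape, and the model choice are assumptions here, not the bridge's exact API):

from fastapi import FastAPI
from langchain_openai import ChatOpenAI

app = FastAPI()

# Illustrative async-first factory; the real bridge factory signature may differ.
async def acreate_agent():
    return ChatOpenAI(model="gpt-4o-mini")

@app.post("/invoke")
async def invoke(payload: dict):
    agent = await acreate_agent()
    # ainvoke is LangChain's native async invocation entry point.
    result = await agent.ainvoke(payload["input"])
    return {"output": result.content}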

2. Dependency and Model Adapter Modernization

  • Upgraded dependencies to the latest major and minor versions for strong compatibility with modern LLM/client packages.
    • New minimums: langchain, openai, langchain-openai, langchain-anthropic, fastapi, etc.
  • Refactored model adapters (LlamaCpp and others) to use the new import paths and proper async support; an import sketch follows.
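
For illustration, the LlamaCpp import moved from the core langchain package to the community package; the model path below is a hypothetical placeholder:

# Old import path, removed from the core langchain package:
# from langchain.llms import LlamaCpp

# New import path via the community package:
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(model_path="./models/example.gguf")  # hypothetical local model file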

3. Consistent Prompt Handling and Factory APIs

  • Updated ReAct agent construction to use the new prompt parameter (replacing messages_modifier); a sketch follows.
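
Assuming langgraph's create_react_agent (which this parameter rename matches), the change looks roughly like this:

from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")

# Before: create_react_agent(model, tools, messages_modifier=...)
# After: the same customization is passed as `prompt`.
agent = create_react_agent(model, tools=[], prompt="You are a helpful assistant.")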

4. Streaming and Real-Time Features

  • Improved streaming support, including async iterators and SSE/streaming endpoint compatibility, which is essential for modern multi-turn chat and LLM applications; an endpoint sketch follows.
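
A hedged sketch of an SSE-style endpoint built on an async iterator (the route name and model are illustrative, not the bridge's actual endpoints):

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from langchain_openai import ChatOpenAI

app = FastAPI()
model = ChatOpenAI(model="gpt-4o-mini")

@app.post("/stream")
async def stream(payload: dict):
    async def event_source():
        # astream yields message chunks as the LLM produces them.
        async for chunk in model.astream(payload["input"]):
            yield f"data: {chunk.content}\n\n"
        yield "data: [DONE]\n\n"
    return StreamingResponse(event_source(), media_type="text/event-stream")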

5. Dependency Pinning & Poetry Lock Refresh

  • All dependencies in pyproject.toml and poetry.lock were reviewed and updated to ensure maximum compatibility with async frameworks and modern LLM model packages.
  • Superfluous or outdated development packages have been cleaned up, contributing to lighter and more reliable CI/CD and deployment.

6. Example Factories, Test Suites, and Documentation Updates

  • All examples and test agent factories have been updated for async signatures and the new model adapters; see the test sketch after this list.
  • In-code and README docs now detail how to leverage async and streaming patterns in downstream/codegen integrations.
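
For example, an updated async test might look like this (acreate_agent is a stand-in for the repo's actual factories, and the pytest-asyncio plugin is assumed):

import pytest
from langchain_openai import ChatOpenAI

# Stand-in for one of the updated example factories.
async def acreate_agent():
    return ChatOpenAI(model="gpt-4o-mini")

@pytest.mark.asyncio
async def test_factory_returns_agent():
    agent = await acreate_agent()
    assert agent is not None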

Motivation

  • Adopt async-first patterns: Allows this backend and its users to work efficiently with concurrent, streaming, or long-running LLM tasks—a must for modern API usage and real-time AI applications.
  • Modernize for the latest Langchain/OpenAI/Anthropic/LLM adapters: Ensures out-of-the-box compatibility with the fastest-moving open-source LLM landscape.
  • Improve maintainability: Clearer separation of concerns, more explicit factory/adapter/subclass patterns, and tighter test/CI ensure future-proofing and easier onboarding for new contributors.

Migration/Breaking Changes

  • All agent and API lifecycle hooks are now async. Downstream consumers must use await and async FastAPI endpoints when customizing.
  • Prompt & agent factory signatures are now consistently async; see new examples for idiomatic usage.
  • Sync/legacy interfaces are not supported in this version; downstream consumers must migrate (a before/after sketch follows).
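
A hedged before/after for downstream code; acreate_agent and the dict DTO are stand-ins for the bridge's real factory and DTO type:

from langchain_openai import ChatOpenAI

async def acreate_agent(dto: dict):
    # Stand-in async factory; the real one takes the bridge's own DTO type.
    return ChatOpenAI(model=dto["model"])

async def handle_request(dto: dict, query: str):
    # Before: agent = create_agent(dto); return agent.invoke(query)
    # After: both the factory and the invocation are coroutines and must be awaited.
    agent = await acreate_agent(dto)
    return await agent.ainvoke(query)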

Compatibility

  • This PR is a major, breaking upgrade for consumers of the former synchronous-only codebase.
  • Strongly recommended for new projects, or those planning to run with modern Langchain/FastAPI/LLM stacks and requiring scalable, async/streaming support.

Thank you for considering this major upgrade—this PR sets up the project and its users for scalable, concurrent, and forward-looking LLM API deployments!

@benjaminvdb

Great job! It would be wonderful to see this merged ❤️

if self.is_async:
    return await self.fn(dto)
else:
    return self.fn(dto)
Owner

@samuelint samuelint commented Jun 17, 2025

Run poetry run flake8 langchain_openai_api_bridge tests
langchain_openai_api_bridge/core/function_agent_factory.py:24:32: W292 no newline at end of file
Error: Process completed with exit code 1.

https://github.com/samuelint/langchain-openai-api-bridge/actions/runs/15173008248/job/44220270619
