
feat: add support for multiple LLM providers#11

Open
ImBIOS wants to merge 1 commit into concierge-hq:main from ImBIOS:feat/multi-provider-support

Conversation

ImBIOS (Contributor) commented on Dec 1, 2025

Summary

This PR adds support for multiple LLM providers (OpenAI and Anthropic) by introducing a provider abstraction layer.

Changes

  • Add abstract LLMProvider base class
  • Implement OpenAIProvider and AnthropicProvider
  • Refactor ToolCallingClient to use provider abstraction
  • Add provider factory for instantiation
  • Add anthropic dependency to pyproject.toml
  • Add unit tests for provider abstraction

Usage

# Use OpenAI (default)
python src/concierge_clients/client_tool_calling.py <api_base> <api_key>

# Use Anthropic
python src/concierge_clients/client_tool_calling.py <api_base> <api_key> --provider anthropic --model claude-3-opus-20240229
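For reference, the CLI flags above could be parsed with argparse along these lines (a hypothetical sketch; the actual script may handle arguments differently):

```python
import argparse

def parse_args(argv):
    # Positional arguments match the usage shown above; optional flags
    # select the provider and model, defaulting to OpenAI.
    parser = argparse.ArgumentParser(description="Tool-calling client")
    parser.add_argument("api_base")
    parser.add_argument("api_key")
    parser.add_argument("--provider", default="openai",
                        choices=["openai", "anthropic"])
    parser.add_argument("--model", default=None)
    return parser.parse_args(argv)
```

Using `choices` makes argparse reject unknown providers with a clear error before any client code runs.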


Closes concierge-hq#7
@harikapadia999

Comprehensive Code Review: Multi-Provider LLM Support

Excellent work on implementing multi-provider support! This is a significant architectural improvement that addresses issue #7. The abstraction layer is well-designed and follows solid software engineering principles.

🌟 Major Strengths:

  1. Clean Architecture: The provider factory pattern with base abstraction is textbook-perfect
  2. Extensibility: Adding new providers (Gemini, Cohere, etc.) will be straightforward
  3. Backward Compatibility: Existing OpenAI code paths remain functional
  4. Comprehensive Testing: The test file covers key scenarios
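The factory pattern praised above might look something like this (a hypothetical sketch; only the `get_provider` name appears in the PR, and the registry mechanism here is assumed):

```python
# Hypothetical provider registry; the PR's actual factory may differ.
_PROVIDERS = {}

def register_provider(name):
    """Class decorator that records a provider under a short name."""
    def wrap(cls):
        _PROVIDERS[name] = cls
        return cls
    return wrap

def get_provider(name, **kwargs):
    """Instantiate the provider registered under `name`."""
    try:
        cls = _PROVIDERS[name]
    except KeyError:
        raise ValueError(
            f"Unsupported provider: {name}. Supported: {sorted(_PROVIDERS)}"
        )
    return cls(**kwargs)
```

A registry like this is what makes adding Gemini or Cohere later a one-decorator change rather than an edit to the factory itself.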

🔍 Detailed Analysis:

1. Provider Abstraction (base.py)

The LLMProvider abstract base class is well-structured with three core methods:

  • chat() - Provider-specific API calls
  • convert_tools() - Tool format conversion
  • parse_response() - Response normalization

Suggestion: Consider adding a validate_config() method to check API keys and model availability at initialization.
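Putting the three core methods and the suggested `validate_config()` hook together, the base class might be sketched as follows (the constructor signature is an assumption, not taken from the PR):

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Sketch of the base interface described above (signature assumed)."""

    def __init__(self, api_base, api_key, model=None):
        self.api_base = api_base
        self.api_key = api_key
        self.model = model
        self.validate_config()  # fail fast on a bad configuration

    def validate_config(self):
        # Suggested hook: subclasses can also check model availability here.
        if not self.api_key:
            raise ValueError("api_key is required")

    @abstractmethod
    def chat(self, messages, tools=None):
        """Make the provider-specific API call."""

    @abstractmethod
    def convert_tools(self, tools):
        """Convert Concierge tool specs to the provider's format."""

    @abstractmethod
    def parse_response(self, response):
        """Normalize the provider response to a common shape."""
```

Calling `validate_config()` from `__init__` means a missing key surfaces at construction time rather than on the first API call.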

2. OpenAI Provider (openai_provider.py)

✅ Clean implementation with proper tool conversion
✅ Handles both Concierge and OpenAI tool formats

Minor Issue: The default model is gpt-5 which doesn't exist yet. Consider:

def __init__(self, api_base: str, api_key: str, model: str = "gpt-4-turbo"):
    # gpt-5 doesn't exist; use gpt-4-turbo or gpt-4o as default

3. Anthropic Provider (anthropic_provider.py)

✅ Excellent handling of Anthropic's unique message format
✅ Proper system prompt extraction
✅ Tool result conversion is well-implemented
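The system prompt extraction works around the fact that Anthropic's Messages API takes the system prompt as a top-level parameter rather than as a message. A minimal version (the helper name here is illustrative, not from the PR):

```python
def split_system_prompt(messages):
    # Anthropic's API expects the system prompt as a separate parameter,
    # so pull any "system" entries out of the OpenAI-style message list.
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return "\n".join(system_parts), rest
```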

Potential Issue: Lines 45-46 in tool call handling:

for tc in msg["tool_calls"]:
    content.append({
        "type": "tool_use",
        "id": tc["id"],
        "name": tc["function"]["name"],
        "input": json.loads(tc["function"]["arguments"])  # ⚠️ Could fail if arguments is already a dict
    })

Safer approach:

args = tc["function"]["arguments"]
input_data = json.loads(args) if isinstance(args, str) else args

4. Client Integration (client_tool_calling.py)

✅ Seamless integration with minimal changes to existing code
✅ Provider initialization logic is clean

Observation: The tool name extraction on line 136 is clever:

"tools": [t.get("function", {}).get("name") or t.get("name") for t in self.current_tools]

This handles both OpenAI and Anthropic formats elegantly.

Suggestion: Add provider validation:

def __init__(self, api_base: str, api_key: str, provider_name: str = "openai", model: str = None, verbose: bool = False):
    supported_providers = ["openai", "anthropic"]
    if provider_name not in supported_providers:
        raise ValueError(f"Unsupported provider: {provider_name}. Supported: {supported_providers}")
    # ... rest of init

5. Testing (test_provider_abstraction.py)

✅ Good coverage of factory, initialization, and tool conversion
✅ Proper mocking of external dependencies

Enhancement Opportunity: Add integration tests with actual API calls (optional, behind environment flags):

import os
import pytest

@pytest.mark.integration
@pytest.mark.skipif(not os.getenv("OPENAI_API_KEY"), reason="No API key")
def test_openai_real_call():
    # Exercise the provider against the real API
    pass

🐛 Critical Issues to Address:

1. Missing Closing Parenthesis in anthropic_provider.py (Line ~60)
The kwargs dict construction might be missing proper closure. Verify syntax.

2. Error Handling
Add try-except blocks around provider initialization:

try:
    self.llm_provider = get_provider(provider_name, **provider_kwargs)
except ImportError as e:
    raise ImportError(f"Provider '{provider_name}' requires additional dependencies: {e}")
except Exception as e:
    raise RuntimeError(f"Failed to initialize provider '{provider_name}': {e}")

3. Type Hints
Consider adding more specific type hints:

from typing import Literal

ProviderName = Literal["openai", "anthropic"]

def __init__(self, ..., provider_name: ProviderName = "openai", ...):

📋 Recommendations:

Priority 1 (Before Merge):

  • Fix the gpt-5 model name to an existing model
  • Add provider validation in __init__
  • Handle json.loads() edge case in Anthropic provider
  • Add error handling for provider initialization

Priority 2 (Future Enhancements):

  • Add provider-specific configuration validation
  • Implement retry logic with exponential backoff
  • Add streaming support for both providers
  • Create provider-specific error classes
  • Add logging for provider selection and API calls
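Of the Priority 2 items, the retry logic is the easiest to sketch. A generic wrapper with exponential backoff and jitter might look like this (names and defaults are illustrative):

```python
import random
import time

def with_retries(call, max_attempts=4, base_delay=0.5,
                 retriable=(ConnectionError, TimeoutError)):
    """Call `call()` and retry transient failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return call()
        except retriable:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the last error
            # Double the delay each attempt, with jitter to avoid
            # synchronized retries across clients.
            delay = base_delay * (2 ** attempt) * (1 + random.random() * 0.1)
            time.sleep(delay)
```

The provider's `chat()` call could then be wrapped as `with_retries(lambda: provider.chat(messages, tools))`.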

🎯 Testing Checklist:

Before merging, please verify:

  • OpenAI provider works with existing workflows
  • Anthropic provider handles all message types correctly
  • Tool calling works for both providers
  • Error messages are clear when providers fail
  • Documentation is updated with provider usage examples

📚 Documentation Needs:

Please add to README:

## Multi-Provider Support

Concierge now supports multiple LLM providers:

### OpenAI
client = ToolCallingClient(
    api_base="https://api.openai.com/v1",
    api_key="sk-...",
    provider_name="openai",
    model="gpt-4-turbo"
)

### Anthropic Claude

client = ToolCallingClient(
    api_base="",  # Not used for Anthropic
    api_key="sk-ant-...",
    provider_name="anthropic",
    model="claude-3-opus-20240229"
)

🚀 Conclusion:

This is excellent work that significantly improves Concierge's flexibility! The architecture is solid and the implementation is clean. Address the critical issues above, and this will be ready to merge.

Estimated Impact: This PR resolves issue #7 and opens the door for supporting Gemini, Cohere, and other providers in the future.

Great job, @ImBIOS! 🎉

