Conversation

@KaparthyReddy

Description

Fixes #32252: resolves the issue where langchain-openai raises TypeError: Received response with null value for choices when using vLLM's OpenAI-compatible API.

Problem

When using vLLM's OpenAI-compatible API through LangChain, the response parsing would fail with a null choices error, even though the choices field exists in the raw response. This was caused by how the OpenAI SDK's model_dump() method handles non-OpenAI API responses.

Solution

This PR improves response handling in the _create_chat_result method:

  1. Robust response serialization: Added fallback logic that tries model_dump_json() if model_dump() fails
  2. Better error messages: Enhanced the null choices error to include:
    • Available response keys
    • Response object type
    • Hints about API compatibility issues
  3. Comprehensive tests: Added tests for:
    • vLLM-style responses with valid choices
    • Improved error message content
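A minimal sketch of what the fallback serialization might look like. Note that dump_response is a hypothetical helper for illustration, not the actual langchain_openai code:

```python
import json

def dump_response(response):
    """Serialize an OpenAI SDK response object to a plain dict.

    Hypothetical helper: try model_dump() first; if that fails (as it
    can for responses from OpenAI-compatible servers such as vLLM),
    fall back to round-tripping through model_dump_json().
    """
    try:
        return response.model_dump()
    except Exception:
        # model_dump_json() serializes the raw payload, which often
        # survives schema mismatches that break model_dump().
        return json.loads(response.model_dump_json())
```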

Changes

  • Modified langchain_openai/chat_models/base.py:
    • Added try-except block with fallback serialization (lines 1271-1287)
    • Improved error message with debugging info (lines 1302-1309)
  • Added tests in tests/unit_tests/chat_models/test_base.py:
    • test_vllm_response_with_valid_choices() - Verifies vLLM responses work
    • test_improved_null_choices_error_message() - Validates error message improvements
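As a rough illustration of the kind of message the improved error produces (the helper name and exact wording here are hypothetical, not the code in base.py):

```python
def null_choices_message(response, response_dict):
    """Hypothetical sketch of the enriched error text: surface the
    response object's type and its available keys, plus a hint about
    OpenAI API compatibility, to aid debugging."""
    return (
        "Received response with null value for choices. "
        f"Response type: {type(response).__name__}. "
        f"Available keys: {sorted(response_dict)}. "
        "If you are using an OpenAI-compatible server (e.g. vLLM), "
        "verify that it returns an OpenAI-style chat completion payload."
    )
```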

Testing

All existing tests pass (279 passed, 4 xfailed, 1 xpassed).

New tests verify:

  • vLLM-style responses are handled correctly
  • Error messages provide useful debugging information
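A sketch of the shape such a test might take. The payload and test name below are illustrative; the real tests live in tests/unit_tests/chat_models/test_base.py:

```python
# Illustrative vLLM-style chat completion payload.
vllm_style_response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "model": "my-model",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 2, "total_tokens": 7},
}

def test_vllm_style_response_has_valid_choices():
    # A well-formed OpenAI-compatible response carries a non-null
    # choices list with a message payload.
    assert vllm_style_response["choices"] is not None
    assert vllm_style_response["choices"][0]["message"]["content"] == "Hello!"
```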

Usage Example

After this fix, users can successfully use vLLM endpoints:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    api_key="your-runpod-key",
    base_url="https://api.runpod.ai/v2/<endpoint-id>/openai/v1",
    model="your-model"
)

# This will now work without null choices error
response = llm.invoke("Hello!")

Resolves #32252

Kaparthy Reddy added 2 commits October 18, 2025 20:26

Commit 1:

  • Add strict parameter to ChatDeepSeek class
  • Switch to the Beta API endpoint when strict mode is enabled
  • Override bind_tools method to add strict: true to tool definitions
  • Add comprehensive tests for strict mode functionality

Resolves langchain-ai#32670

Commit 2:

  • Add robust fallback for response serialization when model_dump() fails
  • Use model_dump_json() as fallback for non-OpenAI API responses
  • Improve the null choices error message with debugging information
  • Add tests for vLLM-style responses and improved error messages

Fixes langchain-ai#32252
@github-actions bot added the labels integration (Related to a provider partner package integration) and fix on Oct 24, 2025
@codspeed-hq

codspeed-hq bot commented Oct 24, 2025

CodSpeed Performance Report

Merging #33662 will not alter performance

Comparing KaparthyReddy:fix/vllm-null-choices-error (663d485) with master (f174295)

Summary

✅ 6 untouched
⏩ 28 skipped [1]

Footnotes

  [1] 28 benchmarks were skipped, so the baseline results were used instead.
