LiteLLM - Adjusting tool call response for models that expect an iterable. #132


Closed · wants to merge 1 commit

Conversation

elliott-marut

Summary

This PR modifies the LiteLLM tool call handling for external models to accommodate models that require an iterable type. Tested specifically with Qwen 2.5 models, but the issue applies to other models as well.

Changes

Changed tool_calls empty default from None to [] in models/lite_llm.py.

Benefits

Ensures tool_calls always returns an iterable type. This prevents a subtle corner-case error: messages continue as expected in the chat until the follow-up to a message with an empty tool call introduces a None into the history, which leads to the following error:
ERROR - fast_api.py:616 - Error in event_generator: litellm.BadRequestError: OpenAIException - 'NoneType' object is not iterable
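A minimal sketch of the one-line change described above (function and field names here are illustrative, not the exact ADK source): when an assistant message carries no tool calls, default tool_calls to [] instead of None, so downstream code that iterates over the field never hits the 'NoneType' object is not iterable error.

```python
def build_assistant_message(content, tool_calls=None):
    """Build an OpenAI-style assistant message dict (illustrative only)."""
    return {
        "role": "assistant",
        "content": content,
        # Before the patch: tool_calls stayed None when no tools were called,
        # poisoning the chat history for the next request.
        # After the patch: fall back to an empty list, which is iterable.
        "tool_calls": tool_calls if tool_calls is not None else [],
    }

msg = build_assistant_message("Hello")
# Iterating is now always safe, even when no tools were called.
for call in msg["tool_calls"]:
    print(call)
```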


google-cla bot commented Apr 12, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@binnn6

binnn6 commented Apr 14, 2025

I think this is not adk's bug. Since the type of tool_calls is (function) tool_calls: List[ChatCompletionAssistantToolCall] | None, the error still occurs with LiteLLM alone.

@elliott-marut
Author

I think this is not adk's bug. Since the type of tool_calls is (function) tool_calls: List[ChatCompletionAssistantToolCall] | None, the error still occurs with LiteLLM alone.

Not adk's bug, but patches applied to LiteLLM get overwritten in the final response by the None returned from adk.

@binnn6

binnn6 commented Apr 15, 2025

litellm.llms.openai.common_utils.OpenAIError: {"error":{"message":"Invalid 'messages[2].tool_calls': empty array. Expected an array with minimum length 1, but got an empty array instead.","type":"invalid_request_error","param":null,"code":"invalid_request_error"}}

This is fundamentally an issue of protocol compatibility between different models. For instance, if you switch to [], other models like DeepSeek might encounter errors as shown above. The optimal solution lies in implementing protocol compatibility at the model layer rather than addressing one compatibility issue only to introduce new conflicts elsewhere.
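One way to sketch the model-layer compatibility idea suggested above (hypothetical helper, not existing adk or LiteLLM code): rather than choosing between None (which breaks iteration) and [] (which strict providers such as DeepSeek reject, per the error above), omit the tool_calls key entirely when there are no tool calls.

```python
def normalize_message(message: dict) -> dict:
    """Return a copy of an OpenAI-style message with a falsy tool_calls
    field removed, satisfying both providers that reject empty arrays
    and code that iterates over the field when present."""
    cleaned = dict(message)
    if not cleaned.get("tool_calls"):  # drops both None and []
        cleaned.pop("tool_calls", None)
    return cleaned
```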

@elliott-marut
Author

litellm.llms.openai.common_utils.OpenAIError: {"error":{"message":"Invalid 'messages[2].tool_calls': empty array. Expected an array with minimum length 1, but got an empty array instead.","type":"invalid_request_error","param":null,"code":"invalid_request_error"}}

This is fundamentally an issue of protocol compatibility between different models. For instance, if you switch to [], other models like DeepSeek might encounter errors as shown above. The optimal solution lies in implementing protocol compatibility at the model layer rather than addressing one compatibility issue only to introduce new conflicts elsewhere.

Good call, thank you for the DeepSeek test. I'll close this pull request as it isn't the right solution.

For anyone who also ends up searching this error: I think it stems from my vLLM deployment of the Qwen model. For now I'm using a monkey patch, conditional on environment variables, to adjust the ADK as shown. That will be a temporary solution while the vLLM structure is considered. https://github.com/vllm-project/vllm/blob/54a66e5fee4a1ea62f1e4c79a078b20668e408c6/vllm/entrypoints/chat_utils.py#L1072
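The env-var-gated monkey-patch pattern mentioned above might look roughly like this (the variable name FIX_EMPTY_TOOL_CALLS and the helper are hypothetical, not the author's actual code): the history fix-up only runs for deployments that need it, such as a vLLM-served Qwen model.

```python
import os


def patch_history(messages):
    """Replace None tool_calls with [] in assistant messages, but only
    when the (hypothetical) FIX_EMPTY_TOOL_CALLS env var opts in."""
    if os.environ.get("FIX_EMPTY_TOOL_CALLS") != "1":
        return messages  # patch disabled: leave history untouched
    fixed = []
    for m in messages:
        if m.get("role") == "assistant" and m.get("tool_calls") is None:
            m = {**m, "tool_calls": []}  # make the field iterable
        fixed.append(m)
    return fixed
```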
