
[BUG] Groq API incompatible with smolagents system messages when using OpenAI endpoint via LiteLLMModel #429

Open
RonanKMcGovern opened this issue Jan 30, 2025 · 4 comments
Labels: bug

@RonanKMcGovern

Describe the bug
When a CodeAgent uses LiteLLMModel pointed at Groq's OpenAI-compatible endpoint, every model call fails with a 400 BadRequestError: Groq requires the system message content to be a plain string, but the messages smolagents sends carry the system content as a list of content blocks.
Code to reproduce the error

from smolagents import CodeAgent, LiteLLMModel
import os

# Initialize the model with Groq
model = LiteLLMModel(
    "openai/deepseek-r1-distill-llama-70b",
    api_base="https://api.groq.com/openai/v1",
    api_key=os.getenv("GROQ_API_KEY")
)

# Create a minimal agent
agent = CodeAgent(
    tools=[],  # No tools needed to demonstrate the issue
    model=model,
    add_base_tools=False,
    verbosity_level=2
)

# Try to run a simple task - this will fail due to system message issues
try:
    result = agent.run("Say hello!")
    print(result)
except Exception as e:
    print(f"Error: {str(e)}") 

Error logs (if any)

uv run test-scripts/groq_system_message_error.py
/Users/ronanmcgovern/TR/ADVANCED-inference/agentic-rag/.venv/lib/python3.12/site-packages/pydantic/_internal/_config.py:345: UserWarning: Valid config keys have changed in V2:
* 'fields' has been removed
  warnings.warn(message, UserWarning)
╭────────────────────────────────────────────────── New run ──────────────────────────────────────────────────╮
│                                                                                                             │
│ Say hello!                                                                                                  │
│                                                                                                             │
╰─ LiteLLMModel - openai/deepseek-r1-distill-llama-70b ───────────────────────────────────────────────────────╯
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.

Error in generating model output:
litellm.BadRequestError: OpenAIException - Error code: 400 - {'error': {'message': "'messages.0' : for 
'role:system' the following must be satisfied[('messages.0.content' : value must be a string)]", 'type': 
'invalid_request_error'}}
[Step 0: Duration 0.32 seconds]
[Steps 1–5 repeat the identical litellm.BadRequestError (durations 0.23 s, 0.62 s, 0.24 s, 0.78 s, and 0.25 s respectively).]
Reached max steps.

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.

Final answer: Error in generating final LLM output:
litellm.BadRequestError: OpenAIException - Error code: 400 - {'error': {'message': "'messages.0' : for 
'role:system' the following must be satisfied[('messages.0.content' : value must be a string)]", 'type': 
'invalid_request_error'}}
[Step 6: Duration 0.25 seconds]
Error in generating final LLM output:
litellm.BadRequestError: OpenAIException - Error code: 400 - {'error': {'message': "'messages.0' : for 'role:system' the following must be satisfied[('messages.0.content' : value must be a string)]", 'type': 'invalid_request_error'}}

Expected behavior
The agent should run as it does with any other LiteLLM-backed model.

Packages version:

smolagents==1.6.0

Additional context
I ran the same tests against Groq using litellm directly (i.e. not through an agent), and there is no issue handling system prompts.
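For comparison, a direct litellm call with a plain string system message goes through on Groq without the 400 error (a minimal sketch along those lines; the original test script isn't reproduced here):

import os
from litellm import completion

response = completion(
    model="openai/deepseek-r1-distill-llama-70b",
    api_base="https://api.groq.com/openai/v1",
    api_key=os.getenv("GROQ_API_KEY"),
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},  # plain string content
        {"role": "user", "content": "Say hello!"},
    ],
)
print(response.choices[0].message.content)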

RonanKMcGovern added the bug label on Jan 30, 2025
@elsolo5000-2

Try this: https://github.com/elsolo5000-2/smolagents/blob/main/src/smolagents/models.py
And use flatten_message_as_string=true (or something similar; see the file) when creating the model.
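Usage would presumably look something like this (the exact flag name is unconfirmed; check models.py in that fork):

model = LiteLLMModel(
    "openai/deepseek-r1-distill-llama-70b",
    api_base="https://api.groq.com/openai/v1",
    api_key=os.getenv("GROQ_API_KEY"),
    flatten_message_as_string=True,  # flag name as suggested above; may differ in the fork
)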

@seanphan

seanphan commented Feb 4, 2025

We have the same issue

@MichaelisTrofficus

Same here!

@vqndev

vqndev commented Feb 7, 2025

Try this https://github.com/elsolo5000-2/smolagents/blob/main/src/smolagents/models.py And use flatten_message_as_string=true (or something similar. See in the file) while creating the model

I tried this and it didn't work
