Describe the bug
When a CodeAgent backed by LiteLLMModel targets Groq's OpenAI-compatible endpoint, every model call fails with a 400 BadRequestError: the agent sends the system message with content as a list of content blocks, but Groq requires 'messages.0.content' to be a plain string.
Code to reproduce the error
import os

from smolagents import CodeAgent, LiteLLMModel

# Initialize the model with Groq
model = LiteLLMModel(
    "openai/deepseek-r1-distill-llama-70b",
    api_base="https://api.groq.com/openai/v1",
    api_key=os.getenv("GROQ_API_KEY"),
)

# Create a minimal agent
agent = CodeAgent(
    tools=[],  # No tools needed to demonstrate the issue
    model=model,
    add_base_tools=False,
    verbosity_level=2,
)

# Try to run a simple task - this will fail due to system message issues
try:
    result = agent.run("Say hello!")
    print(result)
except Exception as e:
    print(f"Error: {str(e)}")
Error logs (if any)
uv run test-scripts/groq_system_message_error.py
/Users/ronanmcgovern/TR/ADVANCED-inference/agentic-rag/.venv/lib/python3.12/site-packages/pydantic/_internal/_config.py:345: UserWarning: Valid config keys have changed in V2:
* 'fields' has been removed
warnings.warn(message, UserWarning)
╭────────────────────────────────────────────────── New run ──────────────────────────────────────────────────╮
│ │
│ Say hello! │
│ │
╰─ LiteLLMModel - openai/deepseek-r1-distill-llama-70b ───────────────────────────────────────────────────────╯
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.
Error in generating model output:
litellm.BadRequestError: OpenAIException - Error code: 400 - {'error': {'message': "'messages.0' : for
'role:system' the following must be satisfied[('messages.0.content' : value must be a string)]", 'type':
'invalid_request_error'}}
[Step 0: Duration 0.32 seconds]
[Steps 1–5 repeat the identical litellm.BadRequestError; durations 0.23–0.78 seconds]
Reached max steps.
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.
Final answer: Error in generating final LLM output:
litellm.BadRequestError: OpenAIException - Error code: 400 - {'error': {'message': "'messages.0' : for
'role:system' the following must be satisfied[('messages.0.content' : value must be a string)]", 'type':
'invalid_request_error'}}
[Step 6: Duration 0.25 seconds]
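The 400 above indicates that Groq's endpoint requires `messages.0.content` to be a plain string, while the agent sends it as a list of content blocks. A hypothetical workaround sketch (this helper is not part of smolagents; the block shape is inferred from the error) would flatten list-typed content to strings before the request is sent:

```python
def flatten_message_content(messages):
    """Coerce list-of-blocks message content into plain strings.

    Some providers (e.g. Groq) require `content` to be a string; the
    agent appears to emit blocks like [{"type": "text", "text": ...}].
    String content is passed through unchanged.
    """
    flattened = []
    for msg in messages:
        content = msg["content"]
        if isinstance(content, list):
            # Join the text of every text-type block into one string
            content = "".join(
                block.get("text", "")
                for block in content
                if isinstance(block, dict) and block.get("type") == "text"
            )
        flattened.append({**msg, "content": content})
    return flattened
```

Applying such a step to the message list before it reaches `litellm.completion` would satisfy Groq's string-content requirement.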
Expected behavior
The agent should run as it does with any other LiteLLM-backed model.
Package versions:
Run pip freeze | grep smolagents and paste it here:
smolagents==1.6.0
Additional context
I ran the same tests against Groq using LiteLLM directly (i.e. not through an agent), and system prompts are handled without issue.
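The direct LiteLLM test described above can be sketched as follows (model name, endpoint, and environment variable are taken from the reproduction script; the request is only sent when a key is configured):

```python
import os

# Request payload for a direct LiteLLM call. Unlike the agent's request,
# the system message content here is a plain string, which Groq accepts.
request = dict(
    model="openai/deepseek-r1-distill-llama-70b",
    api_base="https://api.groq.com/openai/v1",
    api_key=os.getenv("GROQ_API_KEY"),
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello!"},
    ],
)

if os.getenv("GROQ_API_KEY"):
    import litellm  # only needed when actually making the call

    response = litellm.completion(**request)
    print(response.choices[0].message.content)
```

Because the system message is already a string, this call succeeds where the agent's list-of-blocks payload is rejected.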