
Python: AzureChatCompletion service fails with "Missing required parameter" error #10317

Open
anu43 opened this issue Jan 28, 2025 · 6 comments
Assignees
Labels
agents python Pull requests for the Python Semantic Kernel

Comments

@anu43

anu43 commented Jan 28, 2025

Description

When running a group chat between a critic (ChatCompletionAgent) and a data scientist (AzureAssistantAgent), the AzureChatCompletion service fails with a "Missing required parameter" error.

Error Message

("<class 'semantic_kernel.connectors.ai.open_ai.services.azure_chat_completion.AzureChatCompletion'> service failed to complete the prompt", BadRequestError('Error code: 400 - {'error': {'message': "Missing required parameter: 'messages[5].content[0].type'.", 'type': 'invalid_request_error', 'param': 'messages[5].content[0].type', 'code': 'missing_required_parameter'}}'))
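For context on the error: the Chat Completions API accepts a message's `content` either as a plain string or as a list of typed parts, and when it is a list every part must carry a `type` key. The file-reference parts in the failing payload do not. A minimal illustration (the part values below are placeholders, not from the actual payload):

```python
# Array-form message content must be a list of typed parts; a part without a
# "type" key triggers the 400 missing_required_parameter error seen above.

def has_required_type(part: dict) -> bool:
    """Mirror the server-side validation that produced the 400."""
    return "type" in part

invalid_part = {"file_id": "file-abc123"}                     # rejected by the API
valid_part = {"type": "text", "text": "Some analysis text."}  # accepted

assert not has_required_type(invalid_part)
assert has_required_type(valid_part)
```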

Relevant Code

# Selection config
SELECTION_PROMPT: str = f"""Determine which participant takes the next turn in a conversation based on the most recent participant.
State only the name of the participant to take the next turn.
No participant should take more than one turn in a row.

Choose only from these participants:
- {CRITIC_NAME}
- {DS_NAME}

Always follow these rules when selecting the next participant:
- After user input, it is {DS_NAME}'s turn.
- After {DS_NAME} replies, it is {CRITIC_NAME}'s turn.
- After {CRITIC_NAME} provides feedback, it is {DS_NAME}'s turn.

History:
{{{{$history}}}}"""

# Termination config
TERMINATION_KEYWORD = "yes"

TERMINATION_PROMPT: str = f"""Examine the RESPONSE and determine whether the data scientist has met the targeted performance metric based on the critic's feedback.
If the critic provides specific suggestions for improvement, the results are not satisfactory.
If no correction is suggested, the results are satisfactory.

When the results are satisfactory, respond with a single word without explanation: {TERMINATION_KEYWORD}.

RESPONSE:
{{{{$history}}}}"""

# Init the group chat
chat = AgentGroupChat(
    agents=[ds_scientist, critic],
    selection_strategy=KernelFunctionSelectionStrategy(
        function=selection_function,
        kernel=_create_kernel_with_chat_completion("selection"),
        result_parser=lambda result: (
            str(result.value[0]) if result.value is not None else ds_scientist
        ),
        agent_variable_name="agents",
        history_variable_name="history",
    ),
    termination_strategy=KernelFunctionTerminationStrategy(
        agents=[critic],
        function=termination_function,
        kernel=_create_kernel_with_chat_completion("termination"),
        result_parser=lambda result: TERMINATION_KEYWORD in str(result.value[0]).lower(),
        history_variable_name="history",
        maximum_iterations=10,
    ),
)

Steps to Reproduce

  • Set up a group chat with a ChatCompletionAgent and an AzureAssistantAgent
  • Configure the chat with KernelFunctionSelectionStrategy and KernelFunctionTerminationStrategy
  • Attempt to run the group chat
try:
    # Loop variables
    is_complete: bool = False  # Whether to complete the conversation
    file_ids: list[str] = []  # File ids created by the assistant to track

    # User message
    USER_MSG: str = "A brief introduction to the dataset with some visual aids."
    print("TASK:", USER_MSG)
    # Add the user message to the history
    await chat.add_chat_message(
        message=ChatMessageContent(role=AuthorRole.USER, content=USER_MSG),
    )

    while not is_complete:
        # Start the group chat
        is_code: bool = False
        async for response in chat.invoke():
            # Toggle the marker when the content switches to/from a code snippet
            if is_code != response.metadata.get("code"):
                print()
                print(f"{'-' * 10} CODE {'-' * 10}\n")
                is_code = not is_code

            # Print the response
            print(f"# {response.role} - {response.name or '*'}: '{response.content}'")

            # Collect the file ids
            file_ids.extend(
                item.file_id
                for item in response.items
                if isinstance(item, FileReferenceContent)
            )

            print()

            # Download any image created by the Azure Assistant
            await download_response_image(ds_scientist, file_ids)
            # Then clear the list for the subsequent session
            file_ids.clear()

        # Whether the chat ended
        if chat.is_complete:
            is_complete = True

except Exception as e:
    # Trace the error
    print(e)

finally:
    # Clean up resources
    print("Cleaning up Azure resources...")
    if ds_scientist is not None:
        await _clean_up_resources(agent=ds_scientist, thread_id=ds_thread_id)
Expected Behavior

  • The group chat should run without errors, allowing the agents to communicate and complete the task.

Actual Behavior

  • The AzureChatCompletion service fails with a "Missing required parameter" error, specifically mentioning 'messages[5].content[0].type'. I believe the error occurs during internal group chat conversations when the turn is held by the critic.
@markwallace-microsoft markwallace-microsoft added python Pull requests for the Python Semantic Kernel triage labels Jan 28, 2025
@moonbox3 moonbox3 self-assigned this Jan 28, 2025
@moonbox3 moonbox3 added agents and removed triage labels Jan 28, 2025
@moonbox3
Contributor

Are you able to turn on debug logging and get the payload you're sending before the 400 occurs?

@anu43
Author

anu43 commented Jan 29, 2025

Unfortunately, I wasn't able to debug the logging because all events occur within the AgentGroupChat. Could you suggest a method that would allow me to observe and share the payload with you?

@moonbox3
Contributor

You can add this to your script that instantiates the group chat:

import logging

logging.basicConfig(level=logging.DEBUG)

Please be careful to replace any PII (resource names? endpoints?) so you don't communicate personal data/info.
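If the full root-level DEBUG output proves too noisy, a narrower configuration (a sketch, assuming the standard `openai` and `semantic_kernel` logger names) scopes the debug level to just the request/response logs that include the JSON payload:

```python
import logging

# Keep the root logger quiet and enable DEBUG only for the loggers that emit
# the outgoing chat-completions payload.
logging.basicConfig(level=logging.WARNING)
logging.getLogger("openai").setLevel(logging.DEBUG)
logging.getLogger("semantic_kernel").setLevel(logging.DEBUG)
```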

@anu43
Author

anu43 commented Jan 29, 2025

Thanks. I thought I had to manage background debugging, but this process revealed everything happening under the hood. I believe the user message arrives and is managed without any issue; however, problems start after the chat bounces to the critic. Unfortunately, I still couldn't figure out what the main problem is.

P.S.: As soon as I substitute the AzureAssistantAgent with a ChatCompletionAgent, the issue goes away. Of course, there is then no code execution, just a plain chat.

2025-01-29 11:09:49,926 - semantic_kernel.functions.kernel_function - INFO - Function selection succeeded.
2025-01-29 11:09:49,926 - semantic_kernel.functions.kernel_function - DEBUG - Function result: critic
2025-01-29 11:09:49,926 - semantic_kernel.functions.kernel_function - INFO - Function completed. Duration: 0.794336s
2025-01-29 11:09:49,927 - semantic_kernel.agents.group_chat.agent_chat - INFO - Invoking agent critic
2025-01-29 11:09:49,927 - semantic_kernel.agents.chat_completion.chat_completion_agent - DEBUG - [ChatCompletionAgent] Invoking AzureChatCompletion.
2025-01-29 11:09:49,939 - openai._base_client - DEBUG - Request options: {
    'method': 'post',
    'url': '/chat/completions',
    'headers': {'api-key': '<redacted>'},
    'json_data': {
        'messages': [
            {'role': 'system', 'content': 'You are an AI expert...', 'name': 'critic'},
            {'role': 'user', 'content': 'A brief introduction to the dataset with some visual aids.'},
            {'role': 'assistant', 'content': "import pandas as pd\n\n# Load the dataset\nfile_path = '<redacted_file_path>'\ndata = pd.read_csv(file_path)\n\n# Display basic information about the dataset\ndata_info = data.info()\ndata_head = data.head()\n\ndata_info, data_head", 'name': 'data-scientist'},
            {'role': 'assistant', 'content': 'The dataset consists of 891 entries with 12 columns...', 'name': 'data-scientist'},
            {'role': 'assistant', 'content': 'import matplotlib.pyplot as plt\nimport seaborn as sns\n\n# Set the style for the plots\nsns.set(style="whitegrid")\n\n# Plot the distribution of passengers by class\n...', 'name': 'data-scientist'},
            {'role': 'assistant', 'content': [
                {'file_id': '<redacted_file_id_1>'},
                {'file_id': '<redacted_file_id_2>'},
                {'file_id': '<redacted_file_id_3>'},
                {'type': 'text', 'text': 'The visualizations provide insights into the dataset...'}
            ], 'name': 'data-scientist'}
        ],
        'model': '<redacted_model_id>',
        'stream': False
    }
}
2025-01-29 11:09:50,485 - httpx - INFO - HTTP Request: POST https://<redacted_azure_endpoint>/chat/completions?api-version=2024-09-01-preview "HTTP/1.1 400 model_error"
2025-01-29 11:09:50,486 - openai._base_client - DEBUG - HTTP Request: POST https://<redacted_azure_endpoint>/chat/completions?api-version=2024-09-01-preview "400 model_error"
2025-01-29 11:09:50,487 - openai._base_client - DEBUG - Encountered httpx.HTTPStatusError
Traceback (most recent call last):
  File ".../openai/_base_client.py", line 1623, in _request
    response.raise_for_status()
  File ".../httpx/_models.py", line 829, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '400 model_error' for url 'https://<redacted_azure_endpoint>/chat/completions?api-version=2024-09-01-preview'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400

2025-01-29 11:09:50,490 - __main__ - ERROR - An error occurred during the chat execution.
Traceback (most recent call last):
  File ".../openai_handler.py", line 87, in _send_completion_request
    response = await self.client.chat.completions.create(**settings_dict)
  ...
openai.BadRequestError: Error code: 400 - {'error': {'message': "Missing required parameter: 'messages[5].content[0].type'.", 'type': 'invalid_request_error', 'param': 'messages[5].content[0].type', 'code': 'missing_required_parameter'}}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File ".../my_script.py", line 257, in main
    async for response in chat.invoke():
  File ".../agent_group_chat.py", line 144, in invoke
    async for message in super().invoke_agent(selected_agent):
  ...
semantic_kernel.exceptions.service_exceptions.ServiceResponseException: ("<class '...AzureChatCompletion'> service failed to complete the prompt", BadRequestError('Error code: 400 - {\'error\': {...}}'))

2025-01-29 11:09:50,494 - __main__ - INFO - Cleaning up Azure resources...
2025-01-29 11:09:51,540 - __main__ - DEBUG - Cleaned up resources for Azure Assistant.

@moonbox3
Contributor

Thanks for your response. Looks like it has to do with:

{'role': 'assistant', 'content': [
        {'file_id': '<redacted_file_id_1>'},
        {'file_id': '<redacted_file_id_2>'},
        {'file_id': '<redacted_file_id_3>'},
        {'type': 'text', 'text': 'The visualizations provide insights into the dataset...'}
    ], 'name': 'data-scientist'
}

We currently allow adding FunctionResultContent to the chat between the agents. If I am understanding correctly, your data-scientist AzureAssistantAgent created content with file_id references and it's now calling your critic (ChatCompletionAgent). Does that flow sound right?

Let me look into this further.
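In the meantime, one possible workaround (an illustrative sketch, not from this thread, assuming the message dict can be intercepted before it is sent to the chat-completions endpoint) is to drop content parts that lack the required `type` field:

```python
# Hypothetical workaround sketch: before forwarding an assistant message to
# the chat-completions API, keep only well-typed content parts and drop bare
# file_id references produced by the Assistant run.

def sanitize_content(content):
    """Pass plain-string content through; filter list-form content to typed parts."""
    if isinstance(content, str):
        return content
    return [part for part in content if isinstance(part, dict) and "type" in part]

msg = {
    "role": "assistant",
    "content": [
        {"file_id": "file-abc123"},  # placeholder file reference
        {"type": "text", "text": "The visualizations provide insights..."},
    ],
}
msg["content"] = sanitize_content(msg["content"])
# msg["content"] now contains only the typed text part.
```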

@anu43
Author

anu43 commented Jan 29, 2025

That's right, @moonbox3. I supply the titanic.csv dataset with a request for a basic introduction to the dataset with some visual aids. The agent then covers everything and returns the charts. In this scenario, the critic should provide feedback on the response in light of the user's task. The setup is simple by design, but my intention is to observe the communication between a ChatCompletionAgent and an AzureAssistantAgent.
