Add Azure OpenAI Responses provider with deployment-aware model mapping #890
Conversation
Review comment on:

```ts
  return (h2 >>> 0).toString(36) + (h1 >>> 0).toString(36);
}

const DEFAULT_AZURE_API_VERSION = "2025-04-01-preview";
```
I'm honestly not sure about this. Azure's documentation is extremely confusing, and I haven't been able to find any clear resources on this.
Here's one: https://learn.microsoft.com/en-us/azure/ai-foundry/openai/api-version-lifecycle
But when I try setting the path /openai/v1 as the base URL and removing the API version, I just get an error.
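For reference, here is how I currently understand the two URL styles from that lifecycle doc, as an illustrative sketch only (the resource name and the exact paths are my assumptions, not verified Azure behavior):

```typescript
// Illustrative only: the two Azure URL styles under discussion, as I read the
// api-version-lifecycle doc. Paths and resource naming are assumptions.
function azureResponsesUrl(resource: string, apiVersion?: string): string {
  const base = `https://${resource}.openai.azure.com`;
  return apiVersion
    ? // Legacy style: api-version is a required query parameter
      `${base}/openai/responses?api-version=${apiVersion}`
    : // Next-generation style: /openai/v1 path prefix, no api-version
      `${base}/openai/v1/responses`;
}
```

If the v1 base path really does work without an API version, the `DEFAULT_AZURE_API_VERSION` constant could eventually go away; until then the legacy style seems to be what actually responds.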
I think this is now much better: 9f0b2fe
Should also be fairly consistent with how Vercel AI SDK does this.
This might fix #886 as well.
We are duplicating a lot of code between openai-responses.ts, openai-responses-codex.ts, and this new responses implementation. Do you have the time to fix that up by adding an openai-responses-shared.ts that pulls in the shared functions where possible? If not, this PR will have to wait until after the great refactor.
I'll see if I can continue working on this a bit tomorrow. Extracting the shared functions between openai-responses.ts and azure-openai-responses.ts should be fairly straightforward since they use the same OpenAI SDK client. Do you think it would be enough to share as much as possible between these two, or do you want to unify the openai-responses-codex.ts provider at the same time before merging this? That one seems to use a custom raw fetch implementation against the API, and has some ChatGPT-specific checks in place.
@markusylisiurunen if we can share the SDK code for all the responses impls (openai, openai-codex, azure), that'd be ideal. I have not yet looked into the overlap for actual endpoint calls between SDK and the fetch impl. My guess is there's not much of a difference.
@badlogic Yeah I agree, I just need to stop being lazy. I'll continue on this later today.
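To make the refactor concrete, here is a hedged sketch of the kind of thing an openai-responses-shared.ts could extract: pure helpers (request-param construction, stop-reason mapping, etc.) that every provider reuses while supplying its own client. All names below are hypothetical, not the PR's actual code:

```typescript
// Hypothetical shape of a shared helper in openai-responses-shared.ts.
// Each provider (openai, azure, codex) keeps its own client/transport and
// only the provider-agnostic param construction lives here.
interface SharedResponsesOptions {
  model: string;
  input: string;
  temperature?: number;
}

function buildResponsesParams(opts: SharedResponsesOptions) {
  // Omit temperature entirely when unset so providers that reject the
  // field (e.g. reasoning models) don't receive it.
  return {
    model: opts.model,
    input: opts.input,
    ...(opts.temperature !== undefined ? { temperature: opts.temperature } : {}),
  };
}
```

The fetch-based Codex provider could consume the same params object and serialize it itself, which is roughly why the overlap between SDK and raw-fetch calls should be small.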
Force-pushed f64b9d4 to 8945a69
Force-pushed 8945a69 to 80cdc95
@badlogic How does this look to you now? I ran the test suite with Azure OpenAI, OpenAI, and Codex OAuth; all passed. Also tested with
Merge commit conflicts:

```
# Conflicts:
#	packages/ai/CHANGELOG.md
#	packages/ai/src/providers/openai-responses.ts
```
Merged to main via rebase.
Added `azure-openai-responses` using the OpenAI SDK's `AzureOpenAI` client, with Azure auth plus endpoint/resource resolution and optional deployment overrides. We clone OpenAI Responses models into an Azure provider during generation and set `baseUrl: ""` so the runtime requires an Azure endpoint or resource name, which avoids accidental OpenAI defaults. Deployment names are treated as model IDs by default, but `azureDeploymentName` or `AZURE_OPENAI_DEPLOYMENT_NAME` lets you override them when your deployment names differ. Tests and docs were updated across `ai` and `coding-agent`.
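The deployment-aware mapping described above reduces to a small precedence rule; here is a minimal sketch of it (the function and parameter names are hypothetical, not the PR's actual code):

```typescript
// Sketch of the deployment-name resolution order described in the summary:
// explicit option > environment variable > model ID used as deployment name.
function resolveDeployment(
  modelId: string,
  azureDeploymentName?: string, // e.g. from provider options
  envDeploymentName?: string,   // e.g. AZURE_OPENAI_DEPLOYMENT_NAME
): string {
  return azureDeploymentName ?? envDeploymentName ?? modelId;
}
```

So with no overrides set, a model like `gpt-4o` is assumed to map to a deployment literally named `gpt-4o`, and the overrides exist for resources where deployments were named differently.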