
backport v6#5

Open
callycodes wants to merge 1861 commits into AllTheTables:main from vercel:main

Conversation

@callycodes

Background

Summary

Manual Verification

Checklist

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run pnpm changeset in the project root)
  • Formatting issues have been fixed (run pnpm prettier-fix in the project root)
  • I have reviewed this pull request (self-review)

Future Work

Related Issues

vercel-ai-sdk bot and others added 30 commits February 26, 2026 17:20
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine; whenever you add more changesets to main, this PR will
be updated.


# Releases
## @ai-sdk/[email protected]

### Patch Changes

-   91f8777: fix(bedrock/groq): pass strict mode for tools

## @ai-sdk/[email protected]

### Patch Changes

-   91f8777: fix(bedrock/groq): pass strict mode for tools

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…ratios and sizes (#12897)

## Background

With the launch of Gemini 3.1 Image, a few new image aspect ratios and
one new size were introduced.

Reference from the source:
googleapis/python-genai@8b2a4e0

## Summary

Adds the new image aspect ratios and the new size.

## Manual Verification

Run the updated example.

## Checklist

- [ ] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

N/A

## Related Issues

N/A


# Releases
## @ai-sdk/[email protected]

### Patch Changes

- 1ece97a: feat(provider/google): add support for new Google image model
aspect ratios and sizes

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [1ece97a]
    -   @ai-sdk/[email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…12901)

## Summary

Fixes #12770

Azure AI Foundry and Mistral models deployed on Azure omit the `type`
field in streaming `tool_calls` deltas. The OpenAI chat stream parser
was throwing:

```
InvalidResponseDataError: Expected 'function' type.
```

## Root Cause

In `openai-chat-language-model.ts`, the parser checked `if
(toolCallDelta.type !== 'function')` at the start of a new tool call.
Azure / Mistral omit the `type` field entirely (it is `undefined`), so
`undefined !== 'function'` triggered the error before even reading the
function name or id.

## Fix

Changed the guard from:
```ts
if (toolCallDelta.type !== 'function') {
```
to:
```ts
if (toolCallDelta.type != null && toolCallDelta.type !== 'function') {
```

A missing `type` is now silently treated as `"function"` (the only valid
value). An explicit non-`"function"` type still throws.
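
The guard's semantics can be illustrated with a standalone sketch (this is not the actual parser code; the delta shape is simplified for illustration):

```ts
// Standalone sketch of the guard semantics: a missing `type` is treated as
// "function"; an explicit non-"function" type is still rejected.
type ToolCallDelta = {
  type?: string;
  id?: string;
  function?: { name?: string };
};

function acceptsDelta(delta: ToolCallDelta): boolean {
  // new guard: only reject when `type` is present AND not 'function'
  return !(delta.type != null && delta.type !== 'function');
}

// Azure / Mistral style delta without a `type` field is now accepted:
console.log(acceptsDelta({ id: 'call_1', function: { name: 'get_weather' } })); // true
console.log(acceptsDelta({ type: 'function', id: 'call_2' })); // true
console.log(acceptsDelta({ type: 'custom', id: 'call_3' })); // false
```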

## Test

Added a new streaming test case that sends tool call deltas without a
`type` field (matching Azure / Mistral behaviour) and verifies the tool
call is parsed correctly.

Co-authored-by: sleitor <[email protected]>


# Releases
## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [53bdfa5]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 53bdfa5: fix(openai): allow null/undefined type in streaming tool call
deltas

    Azure AI Foundry and Mistral deployed on Azure omit the `type` field in
    streaming tool_calls deltas. The chat stream parser now accepts a missing
    `type` field (treating it as `"function"`) instead of throwing
    `InvalidResponseDataError: Expected 'function' type.`

    Fixes #12770

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

#12794

Anthropic silently released a new code execution tool (which is
suggested for programmatic tool calling).

ref here:
https://platform.claude.com/docs/en/agents-and-tools/tool-use/programmatic-tool-calling

## Summary

Adds the new tool to the registry, updates the API schema, and makes sure
the conversion from prompts to responses is correctly mapped.

## Manual Verification

Verified by running the examples:

- `examples/ai-functions/src/generate-text/anthropic/code-execution-20260120.ts`
- `examples/ai-functions/src/stream-text/anthropic/code-execution-20260120.ts`
- `examples/ai-functions/src/generate-text/anthropic/programmatic-tool-calling.ts`
- `http://localhost:3000/chat/anthropic-programmatic-tool-calling`

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Related Issues

Fixes #12794


# Releases
## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [2164cdf]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   2164cdf: feat(anthropic): add the new code_execution tool

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [2164cdf]
    -   @ai-sdk/[email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…g.format (#12319)

## Background

#12298 

## Summary

- **`@ai-sdk/anthropic`**: Migrated the deprecated `output_format`
request parameter to `output_config.format`, aligning with the [current
Anthropic
API](https://platform.claude.com/docs/en/build-with-claude/structured-outputs).
The `effort` and `format` fields are now merged into a single
`output_config` object to avoid one spread overwriting the other.
- **`@ai-sdk/amazon-bedrock`**: Enabled `supportsNativeStructuredOutput:
true` for Bedrock Anthropic models. Structured outputs are now GA on
Bedrock and no longer require a beta header, so the JSON tool fallback
is no longer necessary.
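
The merge described above can be sketched roughly as follows (a minimal sketch; the field shapes are illustrative and the actual types in the source differ):

```ts
// Merge `effort` and `format` into a single `output_config` object so that
// one spread cannot overwrite the other when building the request body.
interface OutputConfig {
  format?: { type: string; schema?: unknown };
  effort?: 'low' | 'medium' | 'high';
}

function buildOutputConfig(
  format: OutputConfig['format'],
  effort: OutputConfig['effort'],
): { output_config?: OutputConfig } {
  if (format == null && effort == null) {
    return {};
  }
  return {
    output_config: {
      ...(format != null ? { format } : {}),
      ...(effort != null ? { effort } : {}),
    },
  };
}

console.log(buildOutputConfig({ type: 'json_schema' }, 'low'));
// { output_config: { format: { type: 'json_schema' }, effort: 'low' } }
```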

## Manual Verification

verified the fix by running the following code snippet before and after
the changes

<details>
<summary>repro</summary>

```ts
import { bedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
import { generateText, Output } from 'ai';
import 'dotenv/config';
import { z } from 'zod';
import { run } from '../../lib/run';

run(async () => {
  const result = await generateText({
    model: bedrockAnthropic('us.anthropic.claude-opus-4-6-v1'),
    output: Output.object({
      schema: z.object({
        recipe: z.object({
          name: z.string(),
          ingredients: z.array(
            z.object({
              name: z.string(),
              amount: z.string(),
            }),
          ),
          steps: z.array(z.string()),
        }),
      }),
    }),
    providerOptions: {
      anthropic: {
        structuredOutputMode: "outputFormat",
        thinking: { type: "adaptive" },
        effort: "low",
      },
    },
    prompt: 'Generate a lasagna recipe.',
  });

  console.log('Recipe:', JSON.stringify(result.output, null, 2));
  console.log();
  console.log('Finish reason:', result.finishReason);
  console.log('Usage:', result.usage);
});

```
</details>


## Related Issues

Fixes #12298

---------

Co-authored-by: Aayush Kapoor <[email protected]>
Co-authored-by: Aayush Kapoor <[email protected]>


# Releases
## @ai-sdk/[email protected]

### Patch Changes

- d98d9ba: Migrated deprecated `output_format` parameter to
`output_config.format` for structured outputs + Enabled native
structured output support for Bedrock Anthropic models via
`output_config.format`.
-   Updated dependencies [d98d9ba]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- d98d9ba: Migrated deprecated `output_format` parameter to
`output_config.format` for structured outputs + Enabled native
structured output support for Bedrock Anthropic models via
`output_config.format`.

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [d98d9ba]
    -   @ai-sdk/[email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>


# Releases
## [email protected]

### Patch Changes

-   Updated dependencies [1330f2f]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 1330f2f: chore(provider/gateway): update gateway model settings files

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## background

adds support for openai custom tools in responses and fixes alias
mapping failures that could cause runtime errors

refs
https://developers.openai.com/api/reference/resources/responses/methods/create/

## summary

- add custom tool support for responses with grammar formats
- resolve aliased custom tool names end to end across tool choice,
parsing, and streaming
- map provider tool names back to sdk tool keys so tool calls return the
user-facing key
- support custom tool output content mapping instead of dropping to
empty output
- add repro examples for forced and unforced alias flows
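
the alias resolution can be sketched as a simple provider-name to sdk-key lookup (a toy sketch using the `write_sql` / `alias_name` names from the repros; the real implementation also covers tool choice, parsing, and streaming):

```ts
// Map provider-facing tool names back to SDK tool keys so tool calls and
// tool results are reported under the user-facing key.
function createAliasMap(tools: Record<string, { name?: string }>) {
  const providerNameToSdkKey = new Map<string, string>();
  for (const [sdkKey, tool] of Object.entries(tools)) {
    providerNameToSdkKey.set(tool.name ?? sdkKey, sdkKey);
  }
  return {
    // fall back to the provider name when there is no alias entry
    toSdkKey: (providerName: string) =>
      providerNameToSdkKey.get(providerName) ?? providerName,
  };
}

const aliases = createAliasMap({ alias_name: { name: 'write_sql' } });
console.log(aliases.toSdkKey('write_sql')); // 'alias_name'
```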

## before this fix

- forced alias tool choice could fail with api call errors because
custom tools were resolved as function tool choice
- unforced alias calls could fail with no such tool errors when provider
name and sdk key differed

## after this fix

- forced alias tool choice resolves to `{ type: 'custom', name:
'write_sql' }` and executes correctly
- returned tool call and tool result names stay as sdk key `alias_name`
- unforced runs no longer fail due to alias mismatch and may validly
return no tool calls when the model answers directly

## repro

<details>
<summary>repro-alias-forced</summary>

### before fix (`368bbdd468`)

```bash
git checkout 368bbdd
cd examples/ai-functions
pnpm tsx src/generate-text/openai/repro-alias-forced.ts
```

result

- `AI_APICallError: Tool choice 'function' not found in 'tools'
parameter`
- request body includes `tool_choice: { type: 'function', name:
'alias_name' }`

### after fix (`a3565b08c2`)

```bash
git checkout a3565b0
cd examples/ai-functions
pnpm tsx src/generate-text/openai/repro-alias-forced.ts
```

result

- succeeds with tool execution
- tool call and tool result use sdk key `alias_name`
- `Steps: 2`

</details>

<details>
<summary>repro-alias-unforced</summary>

### before fix (`368bbdd468`)

```bash
git checkout 368bbdd
cd examples/ai-functions
pnpm tsx src/generate-text/openai/repro-alias-unforced.ts
```

observed behavior (multiple runs)

- run 1: direct text response, `toolCalls: []`
- run 2: `AI_NoSuchToolError` when model emits `write_sql` but available
tool key is `alias_name`
- run 3: direct text response, `toolCalls: []`

### after fix (`a3565b08c2`)

```bash
git checkout a3565b0
cd examples/ai-functions
pnpm tsx src/generate-text/openai/repro-alias-unforced.ts
```

observed behavior (multiple runs)

- no alias mismatch errors
- direct text responses with `toolCalls: []`, `toolResults: []`, `Steps:
2`

</details>

## verification

- `pnpm tsx src/generate-text/openai/responses-custom-tool.ts`
- `pnpm tsx src/stream-text/openai/responses-custom-tool.ts`
- `pnpm tsx
src/generate-text/openai/responses-custom-tool-multi-turn.ts`
- `pnpm tsx src/stream-text/openai/responses-custom-tool-multi-turn.ts`

## checklist

- [x] tests have been added / updated (for bug fixes / features)
- [x] documentation has been added / updated (for bug fixes / features)
- [x] a _patch_ changeset for relevant packages has been added (run
`pnpm changeset` in root)
- [x] i have reviewed this pull request (self-review)

## related issues

fixes #12614

---------

Co-authored-by: dancer <[email protected]>


# Releases
## [email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 58bc42d: feat(provider/openai): support custom tools with alias
mapping
-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 58bc42d: feat(provider/openai): support custom tools with alias
mapping

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [58bc42d]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
… responses (#12934)

## background

xai supports `logprobs` and `top_logprobs` in both chat and responses
requests. `@ai-sdk/xai` did not expose these options in provider options.
## summary

- add `logprobs` and `topLogprobs` to xai chat provider options
- add `logprobs` and `topLogprobs` to xai responses provider options
- map `providerOptions.xai.logprobs` and
`providerOptions.xai.topLogprobs` to request fields `logprobs` and
`top_logprobs`
- auto-enable `logprobs` when `topLogprobs` is set
- add test coverage for chat and responses request forwarding
- update xai chat snapshots for new request fields
- add ai-functions examples for logprobs in stream-text and
generate-text
- switch generate-text example to `xai.responses('grok-4-latest')` so
the examples cover both chat and responses
- document `logprobs` and `topLogprobs` in xai chat and responses
provider options docs
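
a minimal sketch of the option mapping described above (types and function name are illustrative, not the provider source):

```ts
// Map xai provider options to the request fields `logprobs` / `top_logprobs`,
// auto-enabling logprobs when topLogprobs is set but logprobs is not.
interface XaiLogprobOptions {
  logprobs?: boolean;
  topLogprobs?: number;
}

function toRequestFields(options: XaiLogprobOptions): {
  logprobs?: boolean;
  top_logprobs?: number;
} {
  return {
    logprobs:
      options.logprobs ?? (options.topLogprobs != null ? true : undefined),
    top_logprobs: options.topLogprobs,
  };
}

console.log(toRequestFields({ topLogprobs: 5 }));
// { logprobs: true, top_logprobs: 5 }
```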

## manual verification

- `cd examples/ai-functions && pnpm tsx
src/generate-text/xai/logprobs.ts`
- `cd examples/ai-functions && pnpm tsx src/stream-text/xai/logprobs.ts`

## checklist

- [x] tests have been added / updated (for bug fixes / features)
- [x] documentation has been added / updated (for bug fixes / features)
- [x] a _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] i have reviewed this pull request (self-review)

## related issues

related #12825
related #12826
related #12827


# Releases
## @ai-sdk/[email protected]

### Patch Changes

- 2e00e03: add support for `logprobs` and `topLogprobs` in xai chat and
responses provider options

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>


# Releases
## [email protected]

### Patch Changes

-   Updated dependencies [29e9f4d]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 29e9f4d: chore(provider/gateway): update gateway model settings files

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…12923)

## Background

Follow up to:
- #12807
- #12808
- #12809
- #12810

These issues were opened with type "New provider", so the models listed
under "New models" in this context are not actually new models; rather,
they are _all_ models that the provider API supports.

This PR removes any models that were not in this list but were still in
our codebase.

## Summary

- Removes obsolete model IDs across the four providers, conservatively
  erring on the side of keeping an ID if it _might_ still work:
  - e.g. aliases (a model version that lacks the date suffix of a model
    that's still supported, or a version of a supported model that
    appends "-latest")
  - model IDs that are deemed obsolete here but may be present on other
    providers (e.g. Gateway, Amazon Bedrock, Google Vertex) remain
    untouched there
- Replaces usage in our examples with suitable newer replacement models
- Replaces usage in documentation code snippets and removes mentions in
  documentation model lists or model tables

## Checklist


- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)


# Releases
## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [64a8fae]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 64a8fae: chore: remove obsolete model IDs for Anthropic, Google,
OpenAI, xAI

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [64a8fae]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 64a8fae: chore: remove obsolete model IDs for Anthropic, Google,
OpenAI, xAI

## @ai-sdk/[email protected]

### Patch Changes

- 64a8fae: chore: remove obsolete model IDs for Anthropic, Google,
OpenAI, xAI
-   Updated dependencies [64a8fae]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 64a8fae: chore: remove obsolete model IDs for Anthropic, Google,
OpenAI, xAI

## @ai-sdk/[email protected]

### Patch Changes

- 64a8fae: chore: remove obsolete model IDs for Anthropic, Google,
OpenAI, xAI

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

The provider instance name has changed; updated the docs to reflect the change.

## Summary

`providers/03-community-providers/09-browser-ai.mdx` updated
## Background

This PR normalizes Bedrock document names derived from `part.filename` by
stripping file extensions before sending requests, in order to avoid
Bedrock throwing an exception.

## Summary

- Added a shared `stripFileExtension(filename: string)` helper to
  `@ai-sdk/provider-utils`.
- Exported the helper from the provider-utils public index.
- Updated Amazon Bedrock chat message conversion to use the helper for
  the document name when `part.filename` is present.
- Updated/added tests for Bedrock conversion behavior.
- Added unit tests for the new helper.
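
A minimal sketch of what such a helper might look like (illustrative only; the actual `@ai-sdk/provider-utils` implementation may handle edge cases differently):

```ts
// Strip a trailing file extension: 'error-message.txt' -> 'error-message'.
// Names without a dot, and dotfiles like '.env', are returned unchanged.
function stripFileExtension(filename: string): string {
  const lastDot = filename.lastIndexOf('.');
  // lastDot <= 0 covers both 'no dot' (-1) and a leading-dot filename (0)
  if (lastDot <= 0) {
    return filename;
  }
  return filename.slice(0, lastDot);
}

console.log(stripFileExtension('error-message.txt')); // 'error-message'
console.log(stripFileExtension('archive.tar.gz')); // 'archive.tar'
console.log(stripFileExtension('.env')); // '.env'
```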

## Manual Verification

<details>
<summary>repro example</summary>

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';
import fs from 'fs';
import { run } from '../../lib/run';

run(async () => {
  const result = await generateText({
    model: bedrock('global.anthropic.claude-sonnet-4-5-20250929-v1:0'),
    messages: [
      {
        role: 'user',
        content: [
          {
            type: 'text',
            text: 'Summarize the content of this text file in a few sentences.',
          },
          {
            type: 'file',
            data: fs.readFileSync('./data/error-message.txt'),
            mediaType: 'text/plain',
            filename: 'error-message.txt',
          },
        ],
      },
    ],
  });

  console.log('Response:', result.text);
  console.log();
  console.log('Finish reason:', result.finishReason);
  console.log('Usage:', result.usage);
});

```
</details>

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

n/a

## Related Issues

Fixes #11518

---------

Co-authored-by: Aayush Kapoor <[email protected]>
Co-authored-by: Aayush Kapoor <[email protected]>


# Releases
## [email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   08336f1: fix(bedrock): strip file extensions from filename
-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   08336f1: fix(bedrock): strip file extensions from filename

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [08336f1]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

Reported in issue #12965.

We decoded the base64 data properly in the Anthropic provider, but not in
`openai-compat`.

## Summary

- Added `convertBase64ToUint8Array` to properly decode the base64 string
before converting it to text.
- Replaced `Buffer.from(data, 'base64').toString('utf-8')` with the
edge-runtime-safe equivalent using `convertBase64ToUint8Array`.
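A minimal sketch of the change. The inline helper below is an illustrative stand-in for the real `convertBase64ToUint8Array` from `@ai-sdk/provider-utils`; it uses only globals (`atob`, `TextDecoder`) that are available in edge runtimes:

```typescript
// Illustrative equivalent of convertBase64ToUint8Array: decode a base64
// string into raw bytes without relying on Node's Buffer.
function convertBase64ToUint8Array(base64: string): Uint8Array {
  const binary = atob(base64); // atob is a global in edge runtimes and Node 18+
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return bytes;
}

// Before (Node-only): Buffer.from(data, 'base64').toString('utf-8')
// After (edge-safe): decode to bytes, then to UTF-8 text.
const data = 'aGVsbG8='; // base64 for "hello"
const text = new TextDecoder().decode(convertBase64ToUint8Array(data));
console.log(text); // hello
```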

## Manual Verification

N/A.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Related Issues

fixes #12965
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases
## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [89caf28]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [89caf28]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   89caf28: fix(openai-compat): decode base64 string data

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [89caf28]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [89caf28]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [89caf28]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [89caf28]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [89caf28]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [89caf28]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [89caf28]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   89caf28: fix(openai-compat): decode base64 string data

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [89caf28]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [89caf28]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [89caf28]
    -   @ai-sdk/[email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

We needed an interface that telemetry providers can implement to capture
all the data propagated via the callbacks.

## Summary

- `TelemetryHandler`: an interface for creating custom telemetry
integrations.
- `expand-handlers.ts`: turns `handlers: [handlerA, handlerB, handlerC]`
into a single object with one callback per lifecycle event, where each
callback fans out to every handler that needs info about that event.
- Added a new `handlers` option to the telemetry settings for handlers
that receive lifecycle events during generation.
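The fan-out idea can be sketched as follows. This is a simplified illustration, not the actual `TelemetryHandler` interface: the event names (`onStart`, `onFinish`) and their payload shapes are hypothetical stand-ins for the real lifecycle events.

```typescript
// Hypothetical, simplified handler shape (the real interface has more events).
type TelemetryHandler = {
  onStart?: (info: { model: string }) => void;
  onFinish?: (info: { usage: number }) => void;
};

// Merge many handlers into one object with a single callback per lifecycle
// event; each merged callback invokes every handler that implements it.
function expandHandlers(
  handlers: TelemetryHandler[],
): Required<TelemetryHandler> {
  return {
    onStart: info => handlers.forEach(h => h.onStart?.(info)),
    onFinish: info => handlers.forEach(h => h.onFinish?.(info)),
  };
}

const calls: string[] = [];
const merged = expandHandlers([
  { onStart: i => calls.push(`a:start:${i.model}`) },
  { onFinish: i => calls.push(`b:finish:${i.usage}`) },
]);
merged.onStart({ model: 'gpt' });
merged.onFinish({ usage: 42 });
console.log(calls); // ['a:start:gpt', 'b:finish:42']
```

Each handler only receives the events it implements, so handlers stay small while the caller fires every lifecycle event exactly once.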

## Future Work

Make the `integrations` option work behind the scenes, i.e. without
needing it to be in `experimental_telemetry`.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases
## [email protected]

### Patch Changes

-   2a4f512: feat(ai): add telemetry interface and registry

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [2a4f512]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [2a4f512]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [2a4f512]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [2a4f512]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [2a4f512]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [2a4f512]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [2a4f512]
    -   [email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

We regularly have to add new provider models - whether prompted
individually or via issues like #12807, #12808, #12809, #12810.

Sometimes, we also need to remove models that are no longer available in
a provider's API.

Both of these workflows are pretty basic, but tedious to do as often as
they come up - a perfect task for an agent :)

## Summary

Adds a skill that can be used to add a single model ID, remove a single
model ID, or process an entire issue of model ID changes (can be a mix
of additions and removals).

Claude Code and I drafted this, based on some manual instructions, but
more importantly on actual example diffs I implemented (with some fake
models) to give it points of reference for how to do these workflows
right:
<details>

<summary><code>add-claude-haiku-4-5-20260218.diff</code></summary>

```diff
diff --git a/content/providers/01-ai-sdk-providers/08-amazon-bedrock.mdx b/content/providers/01-ai-sdk-providers/08-amazon-bedrock.mdx
index 3a2c1cb..dfbfea700 100644
--- a/content/providers/01-ai-sdk-providers/08-amazon-bedrock.mdx
+++ b/content/providers/01-ai-sdk-providers/08-amazon-bedrock.mdx
@@ -708,6 +708,7 @@ These tools can be used in conjunction with the `anthropic.claude-3-5-sonnet-202
 | `us.amazon.nova-pro-v1:0`                      | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `us.amazon.nova-lite-v1:0`                     | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `us.amazon.nova-micro-v1:0`                    | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| `anthropic.claude-haiku-4-5-20260218-v1:0`     | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `anthropic.claude-haiku-4-5-20251001-v1:0`     | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `anthropic.claude-sonnet-4-20250514-v1:0`      | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `anthropic.claude-sonnet-4-5-20250929-v1:0`    | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
@@ -1519,6 +1520,7 @@ Anthropic has reasoning support for Claude 3.7 and Claude 4 models on Bedrock, i
 - `us.anthropic.claude-opus-4-20250514-v1:0`
 - `us.anthropic.claude-sonnet-4-20250514-v1:0`
 - `us.anthropic.claude-opus-4-1-20250805-v1:0`
+- `us.anthropic.claude-haiku-4-5-20260218-v1:0`
 - `us.anthropic.claude-haiku-4-5-20251001-v1:0`
 
 You can enable it using the `thinking` provider option and specifying a thinking budget in tokens.
@@ -1555,6 +1557,7 @@ on how to integrate reasoning into your chatbot.
 | `us.anthropic.claude-opus-4-20250514-v1:0`     | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `us.anthropic.claude-sonnet-4-20250514-v1:0`   | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `us.anthropic.claude-opus-4-1-20250805-v1:0`   | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| `us.anthropic.claude-haiku-4-5-20260218-v1:0`  | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `us.anthropic.claude-haiku-4-5-20251001-v1:0`  | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `us.anthropic.claude-3-5-sonnet-20241022-v2:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 
diff --git a/content/providers/03-community-providers/31-opencode-sdk.mdx b/content/providers/03-community-providers/31-opencode-sdk.mdx
index 9a234f9..9d96e43b4 100644
--- a/content/providers/03-community-providers/31-opencode-sdk.mdx
+++ b/content/providers/03-community-providers/31-opencode-sdk.mdx
@@ -85,7 +85,7 @@ import { OpencodeModels } from 'ai-sdk-provider-opencode-sdk';
 // Anthropic Claude
 opencode(OpencodeModels['claude-opus-4-5']); // anthropic/claude-opus-4-5-20251101
 opencode(OpencodeModels['claude-sonnet-4-5']); // anthropic/claude-sonnet-4-5-20250929
-opencode(OpencodeModels['claude-haiku-4-5']); // anthropic/claude-haiku-4-5-20251001
+opencode(OpencodeModels['claude-haiku-4-5']); // anthropic/claude-haiku-4-5-20260218
 
 // OpenAI GPT
 opencode(OpencodeModels['gpt-4o']); // openai/gpt-4o
diff --git a/examples/ai-functions/src/generate-text/amazon/bedrock-anthropic-all-models.ts b/examples/ai-functions/src/generate-text/amazon/bedrock-anthropic-all-models.ts
index 035128e..093b32cb4 100644
--- a/examples/ai-functions/src/generate-text/amazon/bedrock-anthropic-all-models.ts
+++ b/examples/ai-functions/src/generate-text/amazon/bedrock-anthropic-all-models.ts
@@ -10,6 +10,7 @@ const models = [
   'us.anthropic.claude-sonnet-4-20250514-v1:0',
   'us.anthropic.claude-opus-4-1-20250805-v1:0',
   'us.anthropic.claude-haiku-4-5-20251001-v1:0',
+  'us.anthropic.claude-haiku-4-5-20260218-v1:0',
   'us.anthropic.claude-3-5-sonnet-20241022-v2:0',
   'us.anthropic.claude-3-opus-20240229-v1:0',
   'us.anthropic.claude-3-haiku-20240307-v1:0',
diff --git a/examples/ai-functions/src/generate-text/anthropic/haiku-20260218.ts b/examples/ai-functions/src/generate-text/anthropic/haiku-20260218.ts
new file mode 100644
index 000000000..7dd1640da
--- /dev/null
+++ b/examples/ai-functions/src/generate-text/anthropic/haiku-20260218.ts
@@ -0,0 +1,17 @@
+import { anthropic } from '@ai-sdk/anthropic';
+import { generateText } from 'ai';
+import { run } from '../../lib/run';
+import { print } from '../../lib/print';
+
+run(async () => {
+  const result = await generateText({
+    model: anthropic('claude-haiku-4-5-20260218'),
+    prompt: 'Invent a new holiday and describe its traditions.',
+    maxRetries: 0,
+  });
+
+  print('Content:', result.content);
+  print('Usage:', result.usage);
+  print('Finish reason:', result.finishReason);
+  print('Raw finish reason:', result.rawFinishReason);
+});
diff --git a/examples/ai-functions/src/stream-text/amazon/bedrock-anthropic-all-models.ts b/examples/ai-functions/src/stream-text/amazon/bedrock-anthropic-all-models.ts
index 77e221a..0000d7cb3 100644
--- a/examples/ai-functions/src/stream-text/amazon/bedrock-anthropic-all-models.ts
+++ b/examples/ai-functions/src/stream-text/amazon/bedrock-anthropic-all-models.ts
@@ -10,6 +10,7 @@ const models = [
   'us.anthropic.claude-sonnet-4-20250514-v1:0',
   'us.anthropic.claude-opus-4-1-20250805-v1:0',
   'us.anthropic.claude-haiku-4-5-20251001-v1:0',
+  'us.anthropic.claude-haiku-4-5-20260218-v1:0',
   'us.anthropic.claude-3-5-sonnet-20241022-v2:0',
   'us.anthropic.claude-3-opus-20240229-v1:0',
   'us.anthropic.claude-3-haiku-20240307-v1:0',
diff --git a/examples/ai-functions/src/stream-text/anthropic/haiku-20260218.ts b/examples/ai-functions/src/stream-text/anthropic/haiku-20260218.ts
new file mode 100644
index 000000000..778aa0650
--- /dev/null
+++ b/examples/ai-functions/src/stream-text/anthropic/haiku-20260218.ts
@@ -0,0 +1,19 @@
+import { anthropic } from '@ai-sdk/anthropic';
+import { streamText } from 'ai';
+import { print } from '../../lib/print';
+import { printFullStream } from '../../lib/print-full-stream';
+import { run } from '../../lib/run';
+
+run(async () => {
+  const result = streamText({
+    model: anthropic('claude-haiku-4-5-20260218'),
+    prompt: 'Invent a new holiday and describe its traditions.',
+    maxRetries: 0,
+  });
+
+  printFullStream({ result });
+
+  print('Usage:', await result.usage);
+  print('Finish reason:', await result.finishReason);
+  print('Raw finish reason:', await result.rawFinishReason);
+});
diff --git a/examples/ai-functions/src/stream-text/anthropic/tool-call-8516.ts b/examples/ai-functions/src/stream-text/anthropic/tool-call-8516.ts
index b8bef9a..5d40aa5f9 100644
--- a/examples/ai-functions/src/stream-text/anthropic/tool-call-8516.ts
+++ b/examples/ai-functions/src/stream-text/anthropic/tool-call-8516.ts
@@ -7,7 +7,7 @@ import z from 'zod';
 
 run(async () => {
   const result = streamText({
-    model: anthropic('claude-haiku-4-5-20251001'),
+    model: anthropic('claude-haiku-4-5-20260218'),
     messages: [
       {
         role: 'user',
diff --git a/packages/amazon-bedrock/src/anthropic/bedrock-anthropic-options.ts b/packages/amazon-bedrock/src/anthropic/bedrock-anthropic-options.ts
index eca186b..96e182085 100644
--- a/packages/amazon-bedrock/src/anthropic/bedrock-anthropic-options.ts
+++ b/packages/amazon-bedrock/src/anthropic/bedrock-anthropic-options.ts
@@ -6,6 +6,7 @@ export type BedrockAnthropicModelId =
   | 'anthropic.claude-opus-4-20250514-v1:0'
   | 'anthropic.claude-sonnet-4-20250514-v1:0'
   | 'anthropic.claude-opus-4-1-20250805-v1:0'
+  | 'anthropic.claude-haiku-4-5-20260218-v1:0'
   | 'anthropic.claude-haiku-4-5-20251001-v1:0'
   | 'anthropic.claude-3-7-sonnet-20250219-v1:0'
   | 'anthropic.claude-3-5-sonnet-20241022-v2:0'
@@ -21,6 +22,7 @@ export type BedrockAnthropicModelId =
   | 'us.anthropic.claude-opus-4-20250514-v1:0'
   | 'us.anthropic.claude-sonnet-4-20250514-v1:0'
   | 'us.anthropic.claude-opus-4-1-20250805-v1:0'
+  | 'us.anthropic.claude-haiku-4-5-20260218-v1:0'
   | 'us.anthropic.claude-haiku-4-5-20251001-v1:0'
   | 'us.anthropic.claude-3-7-sonnet-20250219-v1:0'
   | 'us.anthropic.claude-3-5-sonnet-20241022-v2:0'
diff --git a/packages/amazon-bedrock/src/bedrock-chat-options.ts b/packages/amazon-bedrock/src/bedrock-chat-options.ts
index 51b9d47..b4c251210 100644
--- a/packages/amazon-bedrock/src/bedrock-chat-options.ts
+++ b/packages/amazon-bedrock/src/bedrock-chat-options.ts
@@ -10,6 +10,7 @@ export type BedrockChatModelId =
   | 'anthropic.claude-opus-4-6-v1'
   | 'anthropic.claude-sonnet-4-6-v1'
   | 'anthropic.claude-opus-4-5-20251101-v1:0'
+  | 'anthropic.claude-haiku-4-5-20260218-v1:0'
   | 'anthropic.claude-haiku-4-5-20251001-v1:0'
   | 'anthropic.claude-sonnet-4-5-20250929-v1:0'
   | 'anthropic.claude-sonnet-4-20250514-v1:0'
@@ -61,6 +62,7 @@ export type BedrockChatModelId =
   | 'us.anthropic.claude-sonnet-4-20250514-v1:0'
   | 'us.anthropic.claude-opus-4-20250514-v1:0'
   | 'us.anthropic.claude-opus-4-1-20250805-v1:0'
+  | 'us.anthropic.claude-haiku-4-5-20260218-v1:0'
   | 'us.anthropic.claude-haiku-4-5-20251001-v1:0'
   | 'us.meta.llama3-2-11b-instruct-v1:0'
   | 'us.meta.llama3-2-3b-instruct-v1:0'
diff --git a/packages/anthropic/src/anthropic-messages-options.ts b/packages/anthropic/src/anthropic-messages-options.ts
index 83ee25c..d5c27ed0d 100644
--- a/packages/anthropic/src/anthropic-messages-options.ts
+++ b/packages/anthropic/src/anthropic-messages-options.ts
@@ -4,6 +4,7 @@ import { z } from 'zod/v4';
 export type AnthropicMessagesModelId =
   | 'claude-3-haiku-20240307'
   | 'claude-haiku-4-5-20251001'
+  | 'claude-haiku-4-5-20260218'
   | 'claude-haiku-4-5'
   | 'claude-opus-4-0'
   | 'claude-opus-4-20250514'

```

</details>

<details>

<summary><code>add-gemini-3.1-pro.diff</code></summary>

```diff
diff --git a/content/providers/01-ai-sdk-providers/15-google-generative-ai.mdx b/content/providers/01-ai-sdk-providers/15-google-generative-ai.mdx
index 54ab37a..67d51f0b8 100644
--- a/content/providers/01-ai-sdk-providers/15-google-generative-ai.mdx
+++ b/content/providers/01-ai-sdk-providers/15-google-generative-ai.mdx
@@ -262,7 +262,7 @@ For Gemini 3 models, use the `thinkingLevel` parameter to control the depth of r
 import { google, GoogleLanguageModelOptions } from '@ai-sdk/google';
 import { generateText } from 'ai';
 
-const model = google('gemini-3.1-pro-preview');
+const model = google('gemini-3.1-pro');
 
 const { text, reasoning } = await generateText({
   model: model,
@@ -1048,6 +1048,7 @@ The following Zod features are known to not work with Google Generative AI:
 
 | Model                                 | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      | Google Search       | URL Context         |
 | ------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
+| `gemini-3.1-pro`                      | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `gemini-3.1-pro-preview`              | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `gemini-3.1-flash-image-preview`      | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `gemini-3-pro-preview`                | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
diff --git a/content/providers/01-ai-sdk-providers/index.mdx b/content/providers/01-ai-sdk-providers/index.mdx
index f5a5283..f93f80ee2 100644
--- a/content/providers/01-ai-sdk-providers/index.mdx
+++ b/content/providers/01-ai-sdk-providers/index.mdx
@@ -42,10 +42,12 @@ Not all providers support all AI SDK features. Here's a quick comparison of the
 | [Anthropic](/providers/ai-sdk-providers/anthropic)                       | `claude-haiku-4-5`                                  | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Anthropic](/providers/ai-sdk-providers/anthropic)                       | `claude-opus-4-1`                                   | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Anthropic](/providers/ai-sdk-providers/anthropic)                       | `claude-sonnet-4-0`                                 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai) | `gemini-3.1-pro`                                    | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai) | `gemini-3.1-pro-preview`                            | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai) | `gemini-3-pro-preview`                              | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai) | `gemini-2.5-pro`                                    | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai) | `gemini-2.5-flash`                                  | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
+| [Google Vertex](/providers/ai-sdk-providers/google-vertex)               | `gemini-3.1-pro`                                    | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Google Vertex](/providers/ai-sdk-providers/google-vertex)               | `gemini-3.1-pro-preview`                            | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Google Vertex](/providers/ai-sdk-providers/google-vertex)               | `gemini-3-pro-preview`                              | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Google Vertex](/providers/ai-sdk-providers/google-vertex)               | `gemini-2.5-pro`                                    | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
diff --git a/content/providers/03-community-providers/18-gemini-cli.mdx b/content/providers/03-community-providers/18-gemini-cli.mdx
index f60cb34..5dc5046b7 100644
--- a/content/providers/03-community-providers/18-gemini-cli.mdx
+++ b/content/providers/03-community-providers/18-gemini-cli.mdx
@@ -94,7 +94,7 @@ const model = gemini('gemini-2.5-pro');
 
 Supported models:
 
-- **gemini-3.1-pro-preview**: Latest model with enhanced reasoning (supports `thinkingLevel`)
+- **gemini-3.1-pro**: Latest model with enhanced reasoning (supports `thinkingLevel`)
 - **gemini-3-flash-preview**: Fast Gemini 3 model (supports `thinkingLevel`)
 - **gemini-2.5-pro**: Production-ready model with 64K output tokens (supports `thinkingBudget`)
 - **gemini-2.5-flash**: Fast, efficient model with 64K output tokens (supports `thinkingBudget`)
@@ -118,7 +118,7 @@ const { text } = await generateText({
 ### Model Settings
 
 ```ts
-const model = gemini('gemini-3.1-pro-preview', {
+const model = gemini('gemini-3.1-pro', {
   temperature: 0.7,
   topP: 0.95,
   topK: 40,
@@ -135,6 +135,7 @@ const model = gemini('gemini-3.1-pro-preview', {
 
 | Model                    | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
 | ------------------------ | ------------------- | ------------------- | ------------------- | ------------------- |
+| `gemini-3.1-pro`         | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `gemini-3.1-pro-preview` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `gemini-3-pro-preview`   | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `gemini-3-flash-preview` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
diff --git a/content/providers/03-community-providers/31-opencode-sdk.mdx b/content/providers/03-community-providers/31-opencode-sdk.mdx
index 9a234f9..41619ca8c 100644
--- a/content/providers/03-community-providers/31-opencode-sdk.mdx
+++ b/content/providers/03-community-providers/31-opencode-sdk.mdx
@@ -92,7 +92,7 @@ opencode(OpencodeModels['gpt-4o']); // openai/gpt-4o
 opencode(OpencodeModels['gpt-4o-mini']); // openai/gpt-4o-mini
 
 // Google Gemini
-opencode(OpencodeModels['gemini-3.1-pro-preview']); // google/gemini-3.1-pro-preview
+opencode(OpencodeModels['gemini-3.1-pro']); // google/gemini-3.1-pro
 opencode(OpencodeModels['gemini-2.5-pro']); // google/gemini-2.5-pro
 opencode(OpencodeModels['gemini-2.5-flash']); // google/gemini-2.5-flash
 opencode(OpencodeModels['gemini-2.0-flash']); // google/gemini-2.0-flash
@@ -103,7 +103,7 @@ You can also use full model identifiers:
 ```ts
 opencode('openai/gpt-5.1-codex');
 opencode('openai/gpt-5.1-codex-max');
-opencode('google/gemini-3.1-pro-preview');
+opencode('google/gemini-3.1-pro');
 \`\`\`
 
 ### Example
diff --git a/content/providers/03-community-providers/47-apertis.mdx b/content/providers/03-community-providers/47-apertis.mdx
index 24ee722..3e9c72d18 100644
--- a/content/providers/03-community-providers/47-apertis.mdx
+++ b/content/providers/03-community-providers/47-apertis.mdx
@@ -63,7 +63,7 @@ const model = apertis.chat('claude-sonnet-4.5');
 
 - **OpenAI**: `gpt-5.2`, `gpt-5.2-chat`, `gpt-5.2-pro`
 - **Anthropic**: `claude-opus-4-5-20251101`, `claude-sonnet-4.5`, `claude-haiku-4.5`
-- **Google**: `gemini-3.1-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-pro`
+- **Google**: `gemini-3.1-pro`, `gemini-3-flash-preview`, `gemini-2.5-pro`
 - **Other**: `glm-4.7`, `minimax-m2.1`, and 470+ more
 
 ## Embedding Models
diff --git a/content/providers/03-community-providers/49-cencori.mdx b/content/providers/03-community-providers/49-cencori.mdx
index f5b1906..152aeae6b 100644
--- a/content/providers/03-community-providers/49-cencori.mdx
+++ b/content/providers/03-community-providers/49-cencori.mdx
@@ -64,7 +64,7 @@ const opus = cencori('claude-3-opus');
 
 // Google Gemini models
 const gemini = cencori('gemini-2.5-flash');
-const geminiPro = cencori('gemini-3.1-pro-preview');
+const geminiPro = cencori('gemini-3.1-pro');
 
 // Other providers
 const mistral = cencori('mistral-large');
diff --git a/examples/ai-functions/src/generate-text/google/vertex-tool-call.ts b/examples/ai-functions/src/generate-text/google/vertex-tool-call.ts
index 5d2e356..8dac061a4 100644
--- a/examples/ai-functions/src/generate-text/google/vertex-tool-call.ts
+++ b/examples/ai-functions/src/generate-text/google/vertex-tool-call.ts
@@ -5,7 +5,7 @@ import { run } from '../../lib/run';
 
 run(async () => {
   const { text } = await generateText({
-    model: vertex('gemini-3.1-pro-preview'),
+    model: vertex('gemini-3.1-pro'),
     prompt: 'What is the weather in New York City? ',
     tools: {
       weather: tool({
diff --git a/examples/ai-functions/src/generate-text/openai/compatible-google-thought-signatures.ts b/examples/ai-functions/src/generate-text/openai/compatible-google-thought-signatures.ts
index 441c7e2..78281e4d0 100644
--- a/examples/ai-functions/src/generate-text/openai/compatible-google-thought-signatures.ts
+++ b/examples/ai-functions/src/generate-text/openai/compatible-google-thought-signatures.ts
@@ -12,7 +12,7 @@ run(async () => {
     },
   });
 
-  const model = googleOpenAI.chatModel('gemini-3.1-pro-preview');
+  const model = googleOpenAI.chatModel('gemini-3.1-pro');
 
   const tools = {
     check_flight: tool({
diff --git a/examples/ai-functions/src/stream-text/google/multiturn-tool-error.ts b/examples/ai-functions/src/stream-text/google/multiturn-tool-error.ts
index 8c0686b..42994756e 100644
--- a/examples/ai-functions/src/stream-text/google/multiturn-tool-error.ts
+++ b/examples/ai-functions/src/stream-text/google/multiturn-tool-error.ts
@@ -15,7 +15,7 @@ run(async () => {
 
   console.log('=== turn 1: tool call that will naturally fail ===');
   const turn1 = streamText({
-    model: google('gemini-3.1-pro-preview'),
+    model: google('gemini-3.1-pro'),
     tools: {
       readuserdata: tool({
         description: 'read user data from file',
@@ -127,7 +127,7 @@ run(async () => {
 
   try {
     const turn2 = streamText({
-      model: google('gemini-3.1-pro-preview'),
+      model: google('gemini-3.1-pro'),
       messages: messagesForTurn2,
       includeRawChunks: true,
       tools: {
@@ -181,7 +181,7 @@ run(async () => {
     ];
 
     const turn3 = streamText({
-      model: google('gemini-3.1-pro-preview'),
+      model: google('gemini-3.1-pro'),
       messages: messagesForTurn3,
       includeRawChunks: true,
       tools: {
diff --git a/examples/ai-functions/src/stream-text/openai/compatible-google-thought-signatures.ts b/examples/ai-functions/src/stream-text/openai/compatible-google-thought-signatures.ts
index ffd70ea..edb374de8 100644
--- a/examples/ai-functions/src/stream-text/openai/compatible-google-thought-signatures.ts
+++ b/examples/ai-functions/src/stream-text/openai/compatible-google-thought-signatures.ts
@@ -12,7 +12,7 @@ run(async () => {
     },
   });
 
-  const model = googleOpenAI.chatModel('gemini-3.1-pro-preview');
+  const model = googleOpenAI.chatModel('gemini-3.1-pro');
 
   const turn1 = streamText({
     model,
diff --git a/packages/gateway/src/gateway-language-model-settings.ts b/packages/gateway/src/gateway-language-model-settings.ts
index 5888f7d..3baa4a755 100644
--- a/packages/gateway/src/gateway-language-model-settings.ts
+++ b/packages/gateway/src/gateway-language-model-settings.ts
@@ -59,6 +59,7 @@ export type GatewayModelId =
   | 'google/gemini-3-pro-preview'
   | 'google/gemini-3.1-flash-image-preview'
   | 'google/gemini-3.1-pro-preview'
+  | 'google/gemini-3.1-pro'
   | 'inception/mercury-coder-small'
   | 'kwaipilot/kat-coder-pro-v1'
   | 'meituan/longcat-flash-chat'
diff --git a/packages/google-vertex/src/google-vertex-options.ts b/packages/google-vertex/src/google-vertex-options.ts
index 8bd0b22..5121d3381 100644
--- a/packages/google-vertex/src/google-vertex-options.ts
+++ b/packages/google-vertex/src/google-vertex-options.ts
@@ -27,6 +27,7 @@ export type GoogleVertexModelId =
   | 'gemini-3-pro-image-preview'
   | 'gemini-3-flash-preview'
   | 'gemini-3.1-pro-preview'
+  | 'gemini-3.1-pro'
   | 'gemini-3.1-flash-image-preview'
   // Experimental models
   | 'gemini-2.0-pro-exp-02-05'
diff --git a/packages/google/src/google-generative-ai-options.ts b/packages/google/src/google-generative-ai-options.ts
index e29d1c6..b6c44cfac 100644
--- a/packages/google/src/google-generative-ai-options.ts
+++ b/packages/google/src/google-generative-ai-options.ts
@@ -25,6 +25,7 @@ export type GoogleGenerativeAIModelId =
   | 'gemini-3-flash-preview'
   | 'gemini-3.1-pro-preview'
   | 'gemini-3.1-pro-preview-customtools'
+  | 'gemini-3.1-pro'
   | 'gemini-3.1-flash-image-preview'
   // latest version
   // https://ai.google.dev/gemini-api/docs/models#latest
diff --git a/packages/google/src/google-prepare-tools.test.ts b/packages/google/src/google-prepare-tools.test.ts
index 163bd63..31d1383fc 100644
--- a/packages/google/src/google-prepare-tools.test.ts
+++ b/packages/google/src/google-prepare-tools.test.ts
@@ -196,7 +196,7 @@ it('should correctly prepare file search tool for gemini-3 models', () => {
         },
       },
     ],
-    modelId: 'gemini-3.1-pro-preview',
+    modelId: 'gemini-3.1-pro',
   });
 
   expect(result.tools).toEqual([
@@ -391,7 +391,7 @@ it('should handle gemini-3 modelId for provider-defined tools correctly', () =>
         args: {},
       },
     ],
-    modelId: 'gemini-3.1-pro-preview',
+    modelId: 'gemini-3.1-pro',
   });
   expect(result.tools).toEqual([{ googleSearch: {} }]);
   expect(result.toolConfig).toBeUndefined();

```

</details>

<details>

<summary><code>add-gpt-5.4-codex.diff</code></summary>

```diff
diff --git a/content/providers/03-community-providers/13-codex-cli.mdx b/content/providers/03-community-providers/13-codex-cli.mdx
index 93074ae..cc125afb2 100644
--- a/content/providers/03-community-providers/13-codex-cli.mdx
+++ b/content/providers/03-community-providers/13-codex-cli.mdx
@@ -91,7 +91,7 @@ const model = codexCli('gpt-5.2-codex');
 
 **Current Generation Models:**
 
-- **gpt-5.3-codex**: Latest agentic coding model
+- **gpt-5.4-codex**: Latest agentic coding model
 - **gpt-5.2**: Latest general purpose model
 - **gpt-5.1-codex-max**: Flagship model with deep reasoning (supports `xhigh` reasoning)
 - **gpt-5.1-codex-mini**: Lightweight, faster variant
@@ -135,6 +135,7 @@ const model = codexCli('gpt-5.1-codex-max', {
 
 | Model                | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
 | -------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
+| `gpt-5.4-codex`      | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | `gpt-5.3-codex`      | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | `gpt-5.2-codex`      | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | `gpt-5.2`            | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
diff --git a/content/providers/03-community-providers/46-codex-app-server.mdx b/content/providers/03-community-providers/46-codex-app-server.mdx
index 402cbe0..02d146be9 100644
--- a/content/providers/03-community-providers/46-codex-app-server.mdx
+++ b/content/providers/03-community-providers/46-codex-app-server.mdx
@@ -184,6 +184,7 @@ const result = await streamText({
 
 | Model                | Image Input         | Object Generation   | Tool Streaming      | Mid-Execution       |
 | -------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
+| `gpt-5.4-codex`      | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `gpt-5.3-codex`      | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `gpt-5.2-codex`      | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `gpt-5.1-codex-max`  | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
diff --git a/examples/ai-functions/src/generate-text/openai/gpt-5-4-codex.ts b/examples/ai-functions/src/generate-text/openai/gpt-5-4-codex.ts
new file mode 100644
index 000000000..dd2c887d0
--- /dev/null
+++ b/examples/ai-functions/src/generate-text/openai/gpt-5-4-codex.ts
@@ -0,0 +1,17 @@
+import { openai } from '@ai-sdk/openai';
+import { generateText } from 'ai';
+import { run } from '../../lib/run';
+import { print } from '../../lib/print';
+
+run(async () => {
+  const result = await generateText({
+    model: openai('gpt-5.4-codex'),
+    prompt: 'Write a JavaScript function that returns the sum of two numbers.',
+    maxRetries: 0,
+  });
+
+  print('Text:', result.text);
+  print('Usage:', result.usage);
+  print('Finish reason:', result.finishReason);
+  print('Raw finish reason:', result.rawFinishReason);
+});
diff --git a/examples/ai-functions/src/stream-text/openai/gpt-5-4-codex.ts b/examples/ai-functions/src/stream-text/openai/gpt-5-4-codex.ts
new file mode 100644
index 000000000..624fc1aaf
--- /dev/null
+++ b/examples/ai-functions/src/stream-text/openai/gpt-5-4-codex.ts
@@ -0,0 +1,20 @@
+import { openai } from '@ai-sdk/openai';
+import { streamText } from 'ai';
+import { run } from '../../lib/run';
+import { print } from '../../lib/print';
+
+run(async () => {
+  const result = streamText({
+    model: openai('gpt-5.4-codex'),
+    prompt: 'Write a JavaScript function that returns the sum of two numbers.',
+    maxRetries: 0,
+  });
+
+  for await (const textPart of result.textStream) {
+    process.stdout.write(textPart);
+  }
+
+  process.stdout.write('\n');
+  print('Usage:', await result.usage);
+  print('Finish reason:', await result.finishReason);
+});
diff --git a/packages/gateway/src/gateway-language-model-settings.ts b/packages/gateway/src/gateway-language-model-settings.ts
index 5888f7d..04fcaef51 100644
--- a/packages/gateway/src/gateway-language-model-settings.ts
+++ b/packages/gateway/src/gateway-language-model-settings.ts
@@ -129,6 +129,7 @@ export type GatewayModelId =
   | 'openai/gpt-5.2-codex'
   | 'openai/gpt-5.2-pro'
   | 'openai/gpt-5.3-codex'
+  | 'openai/gpt-5.4-codex'
   | 'openai/gpt-oss-120b'
   | 'openai/gpt-oss-20b'
   | 'openai/gpt-oss-safeguard-20b'
diff --git a/packages/openai/src/responses/openai-responses-options.ts b/packages/openai/src/responses/openai-responses-options.ts
index 1737a9b..ab3b289ec 100644
--- a/packages/openai/src/responses/openai-responses-options.ts
+++ b/packages/openai/src/responses/openai-responses-options.ts
@@ -38,6 +38,7 @@ export const openaiResponsesReasoningModelIds = [
   'gpt-5.2-pro',
   'gpt-5.2-codex',
   'gpt-5.3-codex',
+  'gpt-5.4-codex',
 ] as const;
 
 export const openaiResponsesModelIds = [
@@ -95,6 +96,7 @@ export type OpenAIResponsesModelId =
   | 'gpt-5.2-pro-2025-12-11'
   | 'gpt-5.2-codex'
   | 'gpt-5.3-codex'
+  | 'gpt-5.4-codex'
   | 'gpt-5-2025-08-07'
   | 'gpt-5-chat-latest'
   | 'gpt-5-codex'

```

</details>

<details>

<summary><code>remove-grok-3.diff</code> (partly redacted for
length)</summary>

```diff
diff --git a/content/docs/02-foundations/02-providers-and-models.mdx b/content/docs/02-foundations/02-providers-and-models.mdx
index 804632c..f087e238c 100644
--- a/content/docs/02-foundations/02-providers-and-models.mdx
+++ b/content/docs/02-foundations/02-providers-and-models.mdx
@@ -111,7 +111,6 @@ Here are the capabilities of popular models:
 | Provider                                           | Model                                       | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
 | -------------------------------------------------- | ------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
 | [xAI Grok](/providers/ai-sdk-providers/xai)        | `grok-4`                                    | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
-| [xAI Grok](/providers/ai-sdk-providers/xai)        | `grok-3`                                    | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [xAI Grok](/providers/ai-sdk-providers/xai)        | `grok-3-mini`                               | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [xAI Grok](/providers/ai-sdk-providers/xai)        | `grok-2-vision-1212`                        | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Vercel](/providers/ai-sdk-providers/vercel)       | `v0-1.0-md`                                 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
diff --git a/content/providers/01-ai-sdk-providers/01-xai.mdx b/content/providers/01-ai-sdk-providers/01-xai.mdx
index 159a6d3..00aa97419 100644
--- a/content/providers/01-ai-sdk-providers/01-xai.mdx
+++ b/content/providers/01-ai-sdk-providers/01-xai.mdx
@@ -73,10 +73,10 @@ You can use the following optional settings to customize the xAI provider instan
 ## Language Models
 
 You can create [xAI models](https://console.x.ai) using a provider instance. The
-first argument is the model id, e.g. `grok-3`.
+first argument is the model id, e.g. `grok-4`.
 
 ```ts
-const model = xai('grok-3');
+const model = xai('grok-4');
 \`\`\`
 
 By default, `xai(modelId)` uses the Chat API. To use the Responses API with server-side agentic tools, explicitly use `xai.responses(modelId)`.
@@ -90,7 +90,7 @@ import { xai } from '@ai-sdk/xai';
 import { generateText } from 'ai';
 
 const { text } = await generateText({
-  model: xai('grok-3'),
+  model: xai('grok-4'),
   prompt: 'Write a vegetarian lasagna recipe for 4 people.',
 });
 \`\`\`
@@ -777,7 +777,6 @@ console.log('Sources:', await result.sources);
 | `grok-4`                      | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
 | `grok-4-0709`                 | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
 | `grok-4-latest`               | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
-| `grok-3`                      | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
 | `grok-3-latest`               | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
 | `grok-3-mini`                 | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | `grok-3-mini-latest`          | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
diff --git a/content/providers/01-ai-sdk-providers/index.mdx b/content/providers/01-ai-sdk-providers/index.mdx
index f5a5283..f774b6488 100644
--- a/content/providers/01-ai-sdk-providers/index.mdx
+++ b/content/providers/01-ai-sdk-providers/index.mdx
@@ -21,7 +21,6 @@ Not all providers support all AI SDK features. Here's a quick comparison of the
 | ------------------------------------------------------------------------ | --------------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
 | [xAI Grok](/providers/ai-sdk-providers/xai)                              | `grok-4-fast-reasoning`                             | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [xAI Grok](/providers/ai-sdk-providers/xai)                              | `grok-4`                                            | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
-| [xAI Grok](/providers/ai-sdk-providers/xai)                              | `grok-3`                                            | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [xAI Grok](/providers/ai-sdk-providers/xai)                              | `grok-3-mini`                                       | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [xAI Grok](/providers/ai-sdk-providers/xai)                              | `grok-2-vision-1212`                                | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [Vercel](/providers/ai-sdk-providers/vercel)                             | `v0-1.0-md`                                         | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
diff --git a/examples/ai-e2e-next/app/api/chat/xai/route.ts b/examples/ai-e2e-next/app/api/chat/xai/route.ts
index a9af856..2507f600e 100644
--- a/examples/ai-e2e-next/app/api/chat/xai/route.ts
+++ b/examples/ai-e2e-next/app/api/chat/xai/route.ts
@@ -7,7 +7,7 @@ export async function POST(req: Request) {
   const { messages }: { messages: UIMessage[] } = await req.json();
 
   const result = streamText({
-    model: xai('grok-3'),
+    model: xai('grok-4'),
     messages: await convertToModelMessages(messages),
   });
 
diff --git a/examples/ai-functions/src/e2e/xai.test.ts b/examples/ai-functions/src/e2e/xai.test.ts
index 7e303c5..5bdd2e855 100644
--- a/examples/ai-functions/src/e2e/xai.test.ts
+++ b/examples/ai-functions/src/e2e/xai.test.ts
@@ -27,9 +27,7 @@ createFeatureTestSuite({
       createChatModel('grok-3-fast-beta'),
       createChatModel('grok-3-mini-beta'),
       createChatModel('grok-3-mini-fast-beta'),
-      createChatModel('grok-3'),
       createChatModel('grok-2-vision-1212'),
-      createCompletionModel('grok-3'),
       createCompletionModel('grok-2-vision-1212'),
     ],
   },
diff --git a/examples/ai-functions/src/generate-text/gateway/tool-call.ts b/examples/ai-functions/src/generate-text/gateway/tool-call.ts
index c73ec54..87b1503de 100644
--- a/examples/ai-functions/src/generate-text/gateway/tool-call.ts
+++ b/examples/ai-functions/src/generate-text/gateway/tool-call.ts
@@ -5,7 +5,7 @@ import { run } from '../../lib/run';
 
 run(async () => {
   const result = await generateText({
-    model: 'xai/grok-3',
+    model: 'xai/grok-4',
     maxOutputTokens: 512,
     tools: {
       weather: weatherTool,
diff --git a/examples/ai-functions/src/generate-text/xai/responses-usage-full.ts b/examples/ai-functions/src/generate-text/xai/responses-usage-full.ts
index 79009ee..0358d0dda 100644
--- a/examples/ai-functions/src/generate-text/xai/responses-usage-full.ts
+++ b/examples/ai-functions/src/generate-text/xai/responses-usage-full.ts
@@ -9,7 +9,6 @@ const models = [
   'grok-4-fast-reasoning',
   'grok-4-fast-non-reasoning',
   'grok-code-fast-1',
-  'grok-3',
   'grok-3-mini',
 ];
 
diff --git a/examples/ai-functions/src/generate-text/xai/usage-full.ts b/examples/ai-functions/src/generate-text/xai/usage-full.ts
index f8605ab..eb05ea8df 100644
--- a/examples/ai-functions/src/generate-text/xai/usage-full.ts
+++ b/examples/ai-functions/src/generate-text/xai/usage-full.ts
@@ -9,7 +9,6 @@ const models = [
   'grok-4-fast-reasoning',
   'grok-4-fast-non-reasoning',
   'grok-code-fast-1',
-  'grok-3',
   'grok-3-mini',
 ];
 
diff --git a/examples/ai-functions/src/stream-text/gateway/output-object.ts b/examples/ai-functions/src/stream-text/gateway/output-object.ts
index 82c5050..a42f8f574 100644
--- a/examples/ai-functions/src/stream-text/gateway/output-object.ts
+++ b/examples/ai-functions/src/stream-text/gateway/output-object.ts
@@ -4,7 +4,7 @@ import { run } from '../../lib/run';
 
 run(async () => {
   const result = streamText({
-    model: 'xai/grok-3',
+    model: 'xai/grok-4',
     output: Output.object({
       schema: z.object({
         characters: z.array(
diff --git a/examples/ai-functions/src/stream-text/xai/raw-chunks.ts b/examples/ai-functions/src/stream-text/xai/raw-chunks.ts
index 11aac82..b1cc0d691 100644
--- a/examples/ai-functions/src/stream-text/xai/raw-chunks.ts
+++ b/examples/ai-functions/src/stream-text/xai/raw-chunks.ts
@@ -4,7 +4,7 @@ import { run } from '../../lib/run';
 
 run(async () => {
   const result = streamText({
-    model: xai('grok-3'),
+    model: xai('grok-4'),
     prompt: 'Count from 1 to 3 slowly.',
     includeRawChunks: true,
   });
diff --git a/examples/ai-functions/src/stream-text/xai/responses-usage-full.ts b/examples/ai-functions/src/stream-text/xai/responses-usage-full.ts
index a91530e..874cd9270 100644
--- a/examples/ai-functions/src/stream-text/xai/responses-usage-full.ts
+++ b/examples/ai-functions/src/stream-text/xai/responses-usage-full.ts
@@ -9,7 +9,6 @@ const models = [
   'grok-4-fast-reasoning',
   'grok-4-fast-non-reasoning',
   'grok-code-fast-1',
-  'grok-3',
   'grok-3-mini',
 ];
 
diff --git a/examples/ai-functions/src/stream-text/xai/usage-full.ts b/examples/ai-functions/src/stream-text/xai/usage-full.ts
index f43b0f8..b2cdde6a6 100644
--- a/examples/ai-functions/src/stream-text/xai/usage-full.ts
+++ b/examples/ai-functions/src/stream-text/xai/usage-full.ts
@@ -9,7 +9,6 @@ const models = [
   'grok-4-fast-reasoning',
   'grok-4-fast-non-reasoning',
   'grok-code-fast-1',
-  'grok-3',
   'grok-3-mini',
 ];
 
diff --git a/packages/gateway/src/gateway-language-model-settings.ts b/packages/gateway/src/gateway-language-model-settings.ts
index 5888f7d..943d0c1db 100644
--- a/packages/gateway/src/gateway-language-model-settings.ts
+++ b/packages/gateway/src/gateway-language-model-settings.ts
@@ -146,7 +146,6 @@ export type GatewayModelId =
   | 'vercel/v0-1.0-md'
   | 'vercel/v0-1.5-md'
   | 'xai/grok-2-vision'
-  | 'xai/grok-3'
   | 'xai/grok-3-fast'
   | 'xai/grok-3-mini'
   | 'xai/grok-3-mini-fast'
diff --git a/packages/openai-compatible/src/chat/__snapshots__/openai-compatible-chat-language-model.test.ts.snap b/packages/openai-compatible/src/chat/__snapshots__/openai-compatible-chat-language-model.test.ts.snap
index 8fa869b..0080fdf40 100644
--- a/packages/openai-compatible/src/chat/__snapshots__/openai-compatible-chat-language-model.test.ts.snap
+++ b/packages/openai-compatible/src/chat/__snapshots__/openai-compatible-chat-language-model.test.ts.snap
@@ -48,7 +48,7 @@ So, my response will be: Grok",
     },
   },
   "request": {
-    "body": "{"model":"grok-3","messages":[{"role":"user","content":"Hello"}]}",
+    "body": "{"model":"grok-4","messages":[{"role":"user","content":"Hello"}]}",
   },
   "response": {
     "body": {
@@ -193,7 +193,7 @@ Finally, structure it as: <function_call>{"action": "weather", "action_input": {
     },
   },
   "request": {
-    "body": "{"model":"grok-3","messages":[{"role":"user","content":"Hello"}]}",
+    "body": "{"model":"grok-4","messages":[{"role":"user","content":"Hello"}]}",
   },
   "response": {
     "body": {
diff --git a/packages/openai-compatible/src/chat/openai-compatible-chat-language-model.test.ts b/packages/openai-compatible/src/chat/openai-compatible-chat-language-model.test.ts
index 295261c..f8283c143 100644
--- a/packages/openai-compatible/src/chat/openai-compatible-chat-language-model.test.ts
+++ b/packages/openai-compatible/src/chat/openai-compatible-chat-language-model.test.ts
@@ -21,7 +21,7 @@ const provider = createOpenAICompatible({
   },
 });
 
-const model = provider('grok-3');
+const model = provider('grok-4');
 
 const server = createTestServer({
   'https://my.api.com/v1/chat/completions': {},
@@ -105,7 +105,7 @@ describe('doGenerate', () => {
     finish_reason = 'stop',
     id = 'chatcmpl-95ZTZkhr0mHNKqerQfiwkuox3PHAd',
     created = 1711115037,
-    model = 'grok-3',
+    model = 'grok-4',
     headers,
   }: {
     content?: string;
@@ -278,7 +278,7 @@ describe('doGenerate', () => {
 
   it('should pass user setting to requests', async () => {
     prepareJsonResponse({ content: 'Hello, World!' });
-    const modelWithUser = provider('grok-3');
+    const modelWithUser = provider('grok-4');
     await modelWithUser.doGenerate({
       prompt: TEST_PROMPT,
       providerOptions: {
@@ -295,7 +295,7 @@ describe('doGenerate', () => {
             "role": "user",
           },
         ],
-        "model": "grok-3",
+        "model": "grok-4",
       }
     `);
   });
@@ -407,7 +407,7 @@ describe('doGenerate', () => {
             "role": "user",
           },
         ],
-        "model": "grok-3",
+        "model": "grok-4",
       }
     `);
   });
@@ -415,7 +415,7 @@ describe('doGenerate', () => {
   it('should pass settings', async () => {
     prepareJsonResponse();
 
-    await provider('grok-3').doGenerate({
+    await provider('grok-4').doGenerate({
       prompt: TEST_PROMPT,
       providerOptions: {
         openaiCompatible: {
@@ -432,7 +432,7 @@ describe('doGenerate', () => {
             "role": "user",
           },
         ],
-        "model": "grok-3",
+        "model": "grok-4",
         "user": "test-user-id",
       }
     `);
@@ -441,7 +441,7 @@ describe('doGenerate', () => {
   it('should pass settings with deprecated openai-compatible key and emit warning', async () => {
     prepareJsonResponse();
 
-    const result = await provider('grok-3').doGenerate({
+    const result = await provider('grok-4').doGenerate({
       prompt: TEST_PROMPT,
       providerOptions: {
         'openai-compatible': {
@@ -458,7 +458,7 @@ describe('doGenerate', () => {
             "role": "user",
           },
         ],
-        "model": "grok-3",
+        "model": "grok-4",
         "user": "test-user-id",
       }
     `);
@@ -472,7 +472,7 @@ describe('doGenerate', () => {
   it('should include provider-specific options', async () => {
     prepareJsonResponse();
 
-    await provider('grok-3').doGenerate({
+    await provider('grok-4').doGenerate({
       providerOptions: {
         'test-provider': {
           someCustomOption: 'test-value',
@@ -489,7 +489,7 @@ describe('doGenerate', () => {
             "role": "user",
           },
         ],
-        "model": "grok-3",
+        "model": "grok-4",
         "someCustomOption": "test-value",
       }
     `);
@@ -498,7 +498,7 @@ describe('doGenerate', () => {
   it('should not include provider-specific options for different provider', async () => {
     prepareJsonResponse();
 
-    await provider('grok-3').doGenerate({
+    await provider('grok-4').doGenerate({
       providerOptions: {
         notThisProviderName: {
           someCustomOption: 'test-value',
@@ -515,7 +515,7 @@ describe('doGenerate', () => {
             "role": "user",
           },
         ],
-        "model": "grok-3",
+        "model": "grok-4",
       }
     `);
   });
@@ -552,7 +552,7 @@ describe('doGenerate', () => {
             "role": "user",
           },
         ],
-        "model": "grok-3",
+        "model": "grok-4",
         "tool_choice": {
           "function": {
             "name": "test-tool",
@@ -596,7 +596,7 @@ describe('doGenerate', () => {
       },
     });
 
-    await provider('grok-3').doGenerate({
+    await provider('grok-4').doGenerate({
       prompt: TEST_PROMPT,
       headers: {
         'Custom-Request-Header': 'request-header-value',
@@ -1283,7 +1283,7 @@ describe('doGenerate', () => {
 
     expect(request).toMatchInlineSnapshot(`
       {
-        "body": "{"model":"grok-3","messages":[{"role":"user","content":"Hello"}]}",
+        "body": "{"model":"grok-4","messages":[{"role":"user","content":"Hello"}]}",
       }
     `);
   });
@@ -1421,7 +1421,7 @@ describe('doGenerate', () => {
           id: 'chatcmpl-test',
           object: 'chat.completion',
           created: 1711115037,
-          model: 'grok-3',
+          model: 'grok-4',
           choices: [
             {
               index: 0,
diff --git a/packages/xai/src/xai-chat-options.ts b/packages/xai/src/xai-chat-options.ts
index e523c79..b441ce3db 100644
--- a/packages/xai/src/xai-chat-options.ts
+++ b/packages/xai/src/xai-chat-options.ts
@@ -10,7 +10,6 @@ export type XaiChatModelId =
   | 'grok-4'
   | 'grok-4-0709'
   | 'grok-4-latest'
-  | 'grok-3'
   | 'grok-3-latest'
   | 'grok-3-mini'
   | 'grok-3-mini-latest'

```

</details>

## Manual Verification

We should use this skill going forward for these model update issues. We can iterate on the skill as needed if we find quirks to iron out.

**Note:** Some work will probably always remain manual; careful review continues to be required. For example, especially for a completely new model, it may be difficult or even impossible for the agent to find out whether e.g. Amazon Bedrock or Google Vertex supports it.

## Checklist

- [ ] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [ ] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

N/A

## Related Issues

N/A

---------

Co-authored-by: Gregor Martynus <[email protected]>
vercel-ai-sdk bot and others added 30 commits March 19, 2026 13:18
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## [email protected]

### Patch Changes

-   f0b0b20: feat(ai): add per-tool timeout overrides via toolTimeouts

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [f0b0b20]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [f0b0b20]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [f0b0b20]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [f0b0b20]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [f0b0b20]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [f0b0b20]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [f0b0b20]
    -   [email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…n into create-stream-text-part-transformation (#13607)

## Background

We are working towards better support for custom loop control. For this,
we are separating out individual reusable functions from streamText. As
a first step, we need to separate transformation steps such that each
transformation has a single responsibility.

## Summary

Move tool-call, tool-approval, tool-result transformation into
create-stream-text-part-transformation.
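The single-responsibility split described above can be sketched as a composition of small part transforms. All names and shapes below are assumptions made for this illustration, not the SDK's internal types:

```typescript
// Illustrative sketch only: StreamPart and the transform names are
// assumptions for this example, not the SDK's internal types.
type StreamPart = { type: string; toolName?: string };

type PartTransform = (part: StreamPart) => StreamPart;

// Compose single-responsibility transforms left to right into one step.
const compose =
  (...transforms: PartTransform[]): PartTransform =>
  part =>
    transforms.reduce((p, t) => t(p), part);

// Hypothetical transform that only handles tool-call parts.
const tagToolCalls: PartTransform = part =>
  part.type === 'tool-call'
    ? { ...part, toolName: part.toolName ?? 'unknown' }
    : part;

// Hypothetical transform that only handles tool-result parts.
const passToolResults: PartTransform = part => part;

const pipeline = compose(tagToolCalls, passToolResults);
```

Each transform touches only the part types it owns, which is what makes the individual steps reusable outside of `streamText`.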

## Related Issues

towards #13570
## Summary
- allow `data:` URLs through `validateDownloadUrl` because they are
inline content, not network fetches
- keep the existing SSRF protections for `http:` and `https:` URLs
unchanged
- add tests covering both validator acceptance and `download()` support
for inline data URLs
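
The protocol gate described above can be sketched roughly like this. This is a minimal illustration: the blocked-host list stands in for the real private-address logic, which is more thorough:

```typescript
// Minimal sketch of the protocol gate; the real implementation's
// private-address checks are more thorough than this host list.
const BLOCKED_HOSTS = new Set(['localhost', '127.0.0.1', '[::1]']);

export function validateDownloadUrl(url: URL): void {
  // data: URLs are inline payloads and make no outbound request,
  // so they can be allowed without SSRF checks.
  if (url.protocol === 'data:') return;

  // Only http/https network URLs are permitted beyond this point.
  if (url.protocol !== 'http:' && url.protocol !== 'https:') {
    throw new Error(`unsupported protocol: ${url.protocol}`);
  }

  // SSRF protections still apply to real network URLs.
  if (BLOCKED_HOSTS.has(url.hostname)) {
    throw new Error(`blocked host: ${url.hostname}`);
  }
}
```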

## Testing
- `pnpm exec prettier --check
packages/provider-utils/src/validate-download-url.ts
packages/provider-utils/src/validate-download-url.test.ts
packages/ai/src/util/download/download.test.ts`
- `pnpm install --frozen-lockfile`
- Attempted package-local `vitest` runs, but the local workspace
currently fails to resolve some internal package entries before reaching
these tests under Node `v24.14.0`

## Why This Is Small And Safe
This only changes protocol handling for `data:` URLs, which are already
inline payloads and do not make outbound network requests. All existing
hostname and private-address SSRF checks still apply to real network
URLs.

Closes #13354.

Co-authored-by: Gregor Martynus <[email protected]>
## Background

OpenAI recently introduced server-side compaction in the Responses API.

See https://developers.openai.com/api/docs/guides/compaction

## Summary

Updated the API spec to include the `compaction` block, which is passed via `providerMetadata`. It only includes an encrypted content ID, so no accompanying text is needed.
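
For illustration, a consumer might read the compaction block roughly like this. The field and key names below are assumptions made for this sketch, not the SDK's documented metadata shape:

```typescript
// Hypothetical shape for illustration; the provider metadata keys
// used here are assumptions, not the SDK's documented API.
type CompactionBlock = { encryptedContentId: string };

function readCompaction(
  providerMetadata: Record<string, unknown> | undefined,
): CompactionBlock | undefined {
  const openai = providerMetadata?.openai as
    | { compaction?: CompactionBlock }
    | undefined;
  return openai?.compaction;
}
```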

## Manual Verification

- verified via
`examples/ai-functions/src/generate-text/openai-compaction.ts`
- verified via
`examples/ai-functions/src/stream-text/openai-compaction.ts`
- verified via `localhost:3000/use-chat-openai-compaction` with
multiturn conversation

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

look into adding compaction endpoint
https://developers.openai.com/api/reference/resources/responses/methods/compact

## Related Issues

fixes #12486
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [d9a1e9a]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   d9a1e9a: feat(openai): add server side compaction for openai

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…3629)

## Background

We are working towards better support for custom loop control. For this,
we are separating out individual reusable functions from streamText. As
a first step, we need to separate transformation steps such that each
transformation has a single responsibility.

## Summary

* remove unused parameters
* renamed `runToolsTransform` to `executeToolsTransform`
* moved finish part transformation

## Related Issues

towards #13570
## Background

The lint-staged config in package.json uses `"*"`, which matches all
files. When only a `.md` file (like a changeset) is staged, `ultracite
fix` receives no JS/TS files it can process and exits with an error,
blocking the commit.

Error received: 

```sh
ai % git add .changeset/great-cows-rush.md && git commit -m "change cs"   
✔ Backed up original state in git stash (ea29d32ee)
⚠ Running tasks for staged files...
  ❯ package.json — 1 file
    ❯ * — 1 file
      ✖ ultracite fix [FAILED]
↓ Skipped because of errors from tasks.
✔ Reverting to original state because of errors...
✔ Cleaning up temporary files...

✖ ultracite fix:
Expected at least one target file
husky - pre-commit script failed (code 1)
```

## Summary

Narrow the lint-staged glob to match only the file types ultracite handles.
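
For illustration, the narrowed config might look like this (the exact extension list is an assumption — it should match whatever file types ultracite can actually process):

```json
{
  "lint-staged": {
    "*.{js,jsx,ts,tsx,json,css}": "ultracite fix"
  }
}
```

With a scoped glob like this, staging only a `.md` changeset file produces no matching tasks, so the pre-commit hook passes instead of failing.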

## Manual Verification

N/A

## Checklist

- [ ] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [ ] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
## Background

We are working towards better support for custom loop control. For this,
we are separating out individual reusable functions from streamText. As
a first step, we need to separate transformation steps such that each
transformation has a single responsibility.

## Summary

- replace `SingleRequestTextStreamPart` with
`UglyTransformedStreamTextPart`
- simplify `UglyTransformedStreamTextPart` type
- introduce separate `TextStream*Part` types

## Related Issues

Towards #13570
…upport it in `generateText` and `streamText` (#13553)

## Background

Reasoning/thinking configuration has historically been handled entirely
via `providerOptions`, requiring provider-specific knowledge from
callers and making it impossible to write portable reasoning code.

## Summary

Adds a top-level `reasoning` parameter to `generateText` and
`streamText` (and the underlying `LanguageModelV4` call options spec),
as proposed in #12516. The parameter accepts a flat enum —
`'provider-default' | 'none' | 'minimal' | 'low' | 'medium' | 'high' |
'xhigh'` — aligned with the OpenAI/OpenRouter convention. For the v4
spec, `'provider-default'` is omitted as that can be resolved at the
`generateText` and `streamText` level by simply omitting the value.

Existing `providerOptions` for each provider remain supported, both to
help with a smooth transition path and to continue to support cases
where a specific `providerOptions` behavior may be more granular than
what the new top-level `reasoning` parameter allows.

If reasoning-related keys are present in `providerOptions`, they take
full precedence and the top-level `reasoning` parameter is ignored, so
existing code continues to work without changes.
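
The precedence rule above can be sketched as a small resolution helper. This is a hypothetical illustration — the SDK's internal resolution logic and the provider-specific key names are assumptions, not the actual implementation:

```typescript
// Hypothetical sketch of the precedence rule: provider-specific reasoning
// options win, and the top-level `reasoning` value is ignored when present.
function resolveReasoning(
  topLevel: string | undefined,
  providerOptions: Record<string, unknown> | undefined,
  reasoningKeys: string[], // e.g. ['reasoningEffort'] — illustrative only
): string | undefined {
  const hasProviderReasoning =
    providerOptions != null &&
    reasoningKeys.some(key => key in providerOptions);

  // If any reasoning-related provider option is set, the top-level
  // parameter is dropped so existing code keeps its current behavior.
  return hasProviderReasoning ? undefined : topLevel;
}
```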

Two helper functions are added to `provider-utils` to make provider-side
mapping straightforward:

- `mapReasoningToProviderEffort` — maps the enum to a provider's native
effort string, emitting a compatibility warning if coercion is needed.
- `mapReasoningToProviderBudget` — maps the enum to a token budget by
multiplying the model's max output tokens by a percentage, clamped
between a min and max budget.
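
As an illustration, the budget mapping described for `mapReasoningToProviderBudget` might look roughly like the following sketch. The percentage table and the exact signature are assumptions; the real helper in `provider-utils` may differ:

```typescript
type ReasoningEffort = 'none' | 'minimal' | 'low' | 'medium' | 'high' | 'xhigh';

// Illustrative percentages — not the SDK's actual values.
const effortPercentages: Record<ReasoningEffort, number> = {
  none: 0,
  minimal: 0.05,
  low: 0.2,
  medium: 0.5,
  high: 0.8,
  xhigh: 1,
};

// Map the reasoning enum to a token budget: a percentage of the model's
// max output tokens, clamped between a min and max budget.
function mapReasoningToBudget(
  reasoning: ReasoningEffort,
  maxOutputTokens: number,
  minBudget: number,
  maxBudget: number,
): number {
  const raw = Math.round(maxOutputTokens * effortPercentages[reasoning]);
  return Math.min(Math.max(raw, minBudget), maxBudget);
}
```

For example, `'medium'` against a 10,000-token model with a 4,096-token ceiling clamps to the ceiling, while `'low'` lands comfortably inside the allowed range.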

### Provider migration status

- [x] OpenAI — effort-based
- [x] Anthropic — mixed (effort-based for newer models)
- [x] Google — mixed (effort-based for newer models)
- [x] Amazon Bedrock — mixed
- [ ] xAI — effort-based
- [ ] Groq — effort-based
- [ ] DeepSeek — budget-based
- [ ] Fireworks — effort-based
- [ ] OpenAI Compatible — effort-based
- [ ] Open Responses — effort-based
- [ ] Perplexity — no reasoning support
- [ ] Alibaba — no reasoning support
- [ ] Azure — no reasoning support
- [ ] Mistral — no reasoning support
- [ ] Cohere — no reasoning support

## Manual Verification

I ran all the relevant updated examples for each migrated provider,
verifying they still work as before.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

- Migrate remaining providers (potentially via this PR, or in a follow
up PR)
- Consider deprecating use of certain `providerOptions` values that
provide no benefit over the new top-level `reasoning` parameter

## Related Issues

Fixes #12516
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## [email protected]

### Patch Changes

- 3887c70: feat(provider): add new top-level reasoning parameter to spec
and support it in `generateText` and `streamText`
-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 3887c70: feat(provider): add new top-level reasoning parameter to spec
and support it in `generateText` and `streamText`
-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 3887c70: feat(provider): add new top-level reasoning parameter to spec
and support it in `generateText` and `streamText`
-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 3887c70: feat(provider): add new top-level reasoning parameter to spec
and support it in `generateText` and `streamText`
-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 3887c70: feat(provider): add new top-level reasoning parameter to spec
and support it in `generateText` and `streamText`
-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 3887c70: feat(provider): add new top-level reasoning parameter to spec
and support it in `generateText` and `streamText`

## @ai-sdk/[email protected]

### Patch Changes

- 3887c70: feat(provider): add new top-level reasoning parameter to spec
and support it in `generateText` and `streamText`
-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [3887c70]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]
    -   @ai-sdk/[email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
… is finished (#13638)

## Background

We are working towards better support for custom loop control.
Until now, `streamText` starts tool execution immediately when a tool
call arrives.
While this enables tools to be executed without delay, it prevents clear
separation of model calls from tool execution, which is required for
external loop control, e.g. in the Workflow SDK.

## Summary

Delay tool execution until the model call sends the `finish` chunk
(**breaking behavior change**)

## Related Issues

Towards #13570
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## [email protected]

### Major Changes

- b9cf502: refactoring(ai): delay tool execution in stream text until
model call is finished

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [b9cf502]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [b9cf502]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [b9cf502]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [b9cf502]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [b9cf502]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [b9cf502]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [b9cf502]
    -   [email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Summary

Freshened up the README to use the latest models and more
straightforward examples. Also added tags to the npm package.
## Background

As a follow-up to PR #13478,
specifically [this
comment](#13478 (comment)),
there was unnecessary convolution of the telemetry data.

## Summary

- flatten the `model: {provider, modelId}` to be `provider` and
`modelId` individually
- rename `callback-events.ts` to `core-events.ts`
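
A minimal sketch of the flattening (attribute names are illustrative; the actual telemetry keys in the SDK may differ):

```typescript
interface ModelAttributes {
  model: { provider: string; modelId: string };
}

// Flatten the nested `model: { provider, modelId }` shape into
// top-level `provider` and `modelId` attributes.
function flattenModelAttributes(attrs: ModelAttributes) {
  const { model, ...rest } = attrs;
  return { ...rest, provider: model.provider, modelId: model.modelId };
}
```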

## Manual Verification

N/A

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

---------

Co-authored-by: Gregor Martynus <[email protected]>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## [email protected]

### Patch Changes

-   877bf12: fix(ai): flatten model attributes for telemetry

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [877bf12]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [877bf12]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [877bf12]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [877bf12]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [877bf12]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [877bf12]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [877bf12]
    -   [email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
follow up to #13636

---------

Co-authored-by: Tmo <[email protected]>
Co-authored-by: Thibault Miranda de Oliveira <[email protected]>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## [email protected]

### Patch Changes

-   f5a6f89: README updates

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [f5a6f89]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [f5a6f89]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [f5a6f89]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [f5a6f89]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [f5a6f89]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [f5a6f89]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [f5a6f89]
    -   [email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
#13648)

## Background

The new top-level `reasoning` parameter was added to the AI SDK spec and
core in #13553. Providers need to be migrated to translate this
parameter into their native reasoning/thinking configuration.

## Summary

Migrates 7 providers to support the new top-level `reasoning` parameter:

- **deepseek**: Maps `reasoning` to existing thinking support (already
had infrastructure, just needed wiring)
- **groq**: Maps `reasoning` to `reasoning_effort` in provider options
- **xai**: Maps `reasoning` to `reasoning_effort` for both chat and
responses models
- **openai-compatible**: Maps `reasoning` to `reasoning_effort` in
provider options
- **open-responses**: Maps `reasoning` to `reasoning.effort` in the
responses format
- **alibaba**: Maps `reasoning` to `enable_thinking` + `thinking_budget`
via token budget calculation
- **cohere**: Maps `reasoning` to `thinking.type` +
`thinking.token_budget` via token budget calculation
- **fireworks**: Uses **openai-compatible**

Together with #13649, this completes the work on #12516.

## Manual Verification

Relevant examples were updated or added, then run for verification.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

N/A

## Related Issues

See #12516.
…rameter (#13649)

## Background

A new top-level `reasoning` parameter was added to the AI SDK spec in
#13553 and is supported in `generateText`/`streamText`. Providers that
don't natively support reasoning configuration need to emit an
unsupported warning when a custom reasoning value is passed, rather than
silently ignoring it.

## Summary

- Added unsupported-feature warnings to `perplexity`, `mistral`, and
`prodia` providers when `isCustomReasoning(reasoning)` returns `true`
- Added documentation to `architecture/provider-abstraction.md`
explaining how providers should handle the `reasoning` parameter (effort
mapping, budget mapping, or unsupported warning).

Together with #13648, this completes the work on #12516.

## Manual Verification

N/A

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

N/A

## Related Issues

See #12516.
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## @ai-sdk/[email protected]

### Patch Changes

- 74d520f: feat: migrate providers to support new top-level `reasoning`
parameter
-   Updated dependencies [74d520f]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [74d520f]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [74d520f]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 74d520f: feat: migrate providers to support new top-level `reasoning`
parameter

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [74d520f]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 74d520f: feat: migrate providers to support new top-level `reasoning`
parameter

## @ai-sdk/[email protected]

### Patch Changes

- 74d520f: feat: migrate providers to support new top-level `reasoning`
parameter
-   Updated dependencies [74d520f]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 74d520f: feat: migrate providers to support new top-level `reasoning`
parameter

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [74d520f]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 5259a95: chore: add warning for providers that do not support new
reasoning parameter

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [74d520f]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 74d520f: feat: migrate providers to support new top-level `reasoning`
parameter

## @ai-sdk/[email protected]

### Patch Changes

- 74d520f: feat: migrate providers to support new top-level `reasoning`
parameter

## @ai-sdk/[email protected]

### Patch Changes

- 5259a95: chore: add warning for providers that do not support new
reasoning parameter

## @ai-sdk/[email protected]

### Patch Changes

- 5259a95: chore: add warning for providers that do not support new
reasoning parameter

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [74d520f]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [74d520f]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 74d520f: feat: migrate providers to support new top-level `reasoning`
parameter
-   Updated dependencies [74d520f]
    -   @ai-sdk/[email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…ively unused there (#13641)

## Background

In #13536, `CallSettings<TOOLS>` was made generic solely because of
`timeout` (`TimeoutConfiguration<TOOLS>` uses tool-specific keys). I
looked into making `prepareCallSettings` generic too as a result (see
https://github.com/vercel/ai/pull/13536/changes#r2961226981), but that
led me to a more important finding: `timeout` is never accessed through
a `CallSettings`-typed variable — it's always destructured independently
and handled via standalone helper functions. This generic propagates
through many files unnecessarily, and `timeout` being part of
`CallSettings` is probably no longer relevant.

## Summary

- Remove `timeout` from `CallSettings` and make it non-generic. In
`generateText`, `streamText`, and `ToolLoopAgentSettings`, `timeout` is
added as a standalone property in the object literal type, preserving
the public API while keeping the tool-specific generic only where it's
actually needed.
- Remove dead timeout-handling code in `getBaseTelemetryAttributes` that
was unreachable since no caller ever passed `timeout` in the settings
object.

Given that `CallSettings` is exported, this can be considered a
low-severity breaking change. But we're working in the v7 branch, so
this is fine.
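
The resulting shape can be sketched as follows (a hypothetical illustration — field names other than `timeout` are assumptions, and the SDK's actual type definitions differ):

```typescript
// No `timeout` and no generic parameter on the base settings type.
interface CallSettings {
  maxOutputTokens?: number;
  temperature?: number;
  // ...other model-call settings
}

// `timeout` is re-added as a standalone property in the object literal
// type, preserving the public API of `generateText` / `streamText`.
type GenerateTextSettings<TOOLS> = CallSettings & {
  tools?: TOOLS;
  timeout?: number;
};

const settings: GenerateTextSettings<Record<string, never>> = {
  temperature: 0,
  timeout: 30_000,
};
```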

## Manual Verification

N/A — internal type refactor, verified via type checking and existing
test suites.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

- A few places didn't actually support passing through `timeout` but
maybe should allow it (e.g. `streamUI()`)
  - Before, it wasn't possible because `prepareCallSettings` would strip
`timeout`; now `timeout` is formally removed from `CallSettings`.
  - Possibly something to fix, but worth noting this PR doesn't make it
worse.
- Consider breaking out `abortSignal`, `headers`, and `maxRetries` from
`CallSettings` as well.
- Potentially create a separate `RequestSettings` type that contains
these 3, plus `timeout`. Could then also rename `CallSettings` to
something more descriptive, e.g. `ModelCallSettings`.

## Related Issues

N/A
This is an automated update of the gateway model settings files.

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## [email protected]

### Patch Changes

- e79e644: chore(ai/core): remove `timeout` from `CallSettings` as it
was effectively unused there

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [e79e644]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [e79e644]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [e79e644]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [e79e644]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

- e79e644: chore(ai/core): remove `timeout` from `CallSettings` as it
was effectively unused there
-   Updated dependencies [e79e644]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [e79e644]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [e79e644]
    -   [email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…ion and forward chunks before callback invocation (#13681)

## Background

We are working towards better support for custom loop control. For this,
we are separating out individual reusable functions from streamText. As
a first step, we need to separate transformation steps such that each
transformation has a single responsibility.

## Summary

- Move invocation of tool callbacks into its own function.
- This is a breaking behavior change: previously, tool callbacks were
invoked before a chunk was forwarded; now the order is flipped.

## Related issues

towards #13570
…n responses (#12777)

## Background

Google provider tool results with `output.type = 'content'` were not
fully mapped into Gemini `functionResponse.parts`, which blocked
reliable multimodal tool-result flows for images/files.

## Summary

This PR adds multimodal tool-result support for `@ai-sdk/google`
function responses and documents provider limitations.

### Provider changes
- Updated
`packages/google/src/convert-to-google-generative-ai-messages.ts` to:
- Map `image-data` and `file-data` tool-result parts into
`functionResponse.parts` as `inlineData`.
- Support URL-style parts (`image-url`, `file-url`) **only** when they
are base64 `data:` URLs.
- Throw `UnsupportedFunctionalityError` for remote HTTP(S) URL-style
tool-result parts.

- Updated `packages/google/src/google-generative-ai-prompt.ts` types to
allow:
  - `functionResponse.parts` with inline media payloads.

### Tests
- Added/updated tests in
`packages/google/src/convert-to-google-generative-ai-messages.test.ts`
for:
  - `image-data` mapping
  - `file-data` mapping
  - `image-url` base64 `data:` URL mapping
  - Error on non-data `image-url`
  - Error on non-data `file-url`

### Docs
- Updated `content/docs/03-ai-sdk-core/15-tools-and-tool-calling.mdx`:
  - Added Google support note (Gemini 3 models).
- Documented that Google tool-result URL-style media must use base64
`data:` URLs (remote HTTP(S) URLs are not supported).

### Examples
- Replaced combined toggle example with focused scripts:
-
`examples/ai-functions/src/generate-text/google-image-tool-result-base64.ts`
-
`examples/ai-functions/src/generate-text/google-image-tool-result-url.ts`
-
`examples/ai-functions/src/generate-text/google-pdf-tool-results-base64.ts`
-
`examples/ai-functions/src/generate-text/google-pdf-tool-results-url.ts`
- Removed:
- `examples/ai-functions/src/generate-text/google-image-tool-results.ts`

### Changeset
- Added `.changeset/google-multimodal-tool-results.md` (patch for
`@ai-sdk/google`).

## Manual Verification

<!--
For features & bugfixes.
Please explain how you *manually* verified that the change works
end-to-end as expected (excluding automated tests).
Remove the section if it's not needed (e.g. for docs).
-->

- Ran targeted provider tests:
- `pnpm --filter @ai-sdk/google test:node --
convert-to-google-generative-ai-messages.test.ts`
- Ran type checks:
  - `pnpm --filter @ai-sdk/google type-check`
  - `pnpm --filter @example/ai-functions type-check`
- Ran lint/format checks on changed files:
  - `pnpm exec eslint ...`
  - `pnpm exec prettier --check ...`

## Checklist

<!--
Do not edit this list. Leave items unchecked that don't apply. If you
need to track subtasks, create a new "## Tasks" section

Please check if the PR fulfills the following requirements:
-->

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

<!--
Feel free to mention things not covered by this PR that can be done in
future PRs.
Remove the section if it's not needed.
 -->

## Related Issues

<!--
List related issues here, e.g. "Fixes #1234".
Remove the section if it's not needed.
-->

N/A

---------

Co-authored-by: Felix Arntz <[email protected]>
…ral-small-latest) (#13688)

## Background

Mistral Small 4 (`mistral-small-2603`) supports a `reasoning_effort` API
parameter (`"high"` / `"none"`). The mistral provider already handles
reasoning **output** (thinking blocks from magistral-* models), but does
not support reasoning configuration yet.

## Summary

- Maps the AI SDK's top-level `reasoning` parameter (see #12516) to
Mistral's `reasoning_effort` for models that support it
(`mistral-small-latest`, `mistral-small-2603`). Also adds a
`reasoningEffort` provider option for direct control. Non-supporting
models continue to emit an unsupported warning, as before.
- Since Mistral only supports `"high"` and `"none"`, all reasoning
levels (`minimal` through `xhigh`) map to `"high"` with a compatibility
warning for non-exact matches, similar pattern to other providers.
- Updates the Mistral model ID list with the new model ID and a few
other missing flagship models and removing obsolete models (verified via
their API).
- Adds documentation and examples.

## Manual Verification

Run the new examples to verify.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

N/A

## Related Issues

Fixes #13595
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## [email protected]

### Major Changes

- 4b46062: refactoring(ai): extract tool callback invocation into
separate function and forward chunks before callback invocation

### Patch Changes

-   Updated dependencies [165b97a]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [4b46062]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 165b97a: chore(provider/gateway): update gateway model settings files

## @ai-sdk/[email protected]

### Patch Changes

- 18c1970: feat(provider/google): Add multimodal tool-result support for
Google function responses.

    Tool results with `output.type = 'content'` now map media parts into
    `functionResponse.parts` for Google models, including `image-data`,
    `file-data`, and base64 `data:` URLs in URL-style content parts.
Remote HTTP(S) URLs in URL-style tool-result parts are not supported.

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [18c1970]
    -   @ai-sdk/[email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [4b46062]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [4b46062]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

- 737b8f4: feat(provider/mistral): add support for reasoning
configuration (mistral-small-latest)

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [4b46062]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [4b46062]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [4b46062]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [4b46062]
    -   [email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

In order to completely decouple OTel from the `ai` package, we need to
align all the core functions to emit event data at each step for each of
the functions.

We made similar changes to generateText and streamText when we
introduced `experimental_*` (onStart, onStepStart, onFinish etc) and the
same changes/callbacks needed to be added for the rerank function

## Summary

- introduce the `onStart` and `onFinish` callbacks (experimental)
- create new interfaces for both of those events for proper type safety

## Manual Verification

na

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

decouple otel from the rerank function and move integration to
`open-telemetry-integration`
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## [email protected]

### Patch Changes

- caf1b6f: feat(ai): introduce experimental callbacks for rerank
function

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [caf1b6f]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [caf1b6f]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [caf1b6f]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [caf1b6f]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [caf1b6f]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [caf1b6f]
    -   [email protected]

## @ai-sdk/[email protected]

### Patch Changes

-   Updated dependencies [caf1b6f]
    -   [email protected]

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.