
Commit d0f4fd3

[Agents] Revert surviving disconnections example back to Workers AI
1 parent 962fa8f

1 file changed (+4 −4 lines)

src/content/docs/agents/concepts/calling-llms.mdx

Lines changed: 4 additions & 4 deletions
````diff
@@ -48,22 +48,22 @@ This means the client does not need to send the full conversation on every message
 
 Reasoning models like DeepSeek R1 or GLM-4 can take 30 seconds to several minutes to respond. In a stateless request-response architecture, the client must stay connected the entire time. If the connection drops, the response is lost.
 
-An Agent keeps running after the client disconnects. When the response arrives, the Agent can persist it to state and deliver it when the client reconnects — even hours or days later. This works with any provider — the example below uses OpenAI, but you can substitute any AI SDK-compatible model.
+An Agent keeps running after the client disconnects. When the response arrives, the Agent can persist it to state and deliver it when the client reconnects — even hours or days later.
 
 <TypeScriptExample>
 
 ```ts
 import { Agent } from "agents";
 import { streamText } from "ai";
-import { createOpenAI } from "@ai-sdk/openai";
+import { createWorkersAI } from "workers-ai-provider";
 
 export class MyAgent extends Agent<Env> {
   async onMessage(connection: Connection, message: WSMessage) {
     const { prompt } = JSON.parse(message as string);
-    const openai = createOpenAI({ apiKey: this.env.OPENAI_API_KEY });
+    const workersai = createWorkersAI({ binding: this.env.AI });
 
     const result = streamText({
-      model: openai("gpt-5.2"),
+      model: workersai("@cf/meta/llama-4-scout-17b-16e-instruct"),
       prompt,
     });
 
````
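The new provider call reads a Workers AI binding from `this.env.AI`. For that property to exist, the Worker's configuration must declare an `ai` binding; a minimal sketch, assuming the binding name `AI` to match the snippet:

```toml
# wrangler.toml (assumed): exposes Workers AI to the Worker as env.AI
[ai]
binding = "AI"
```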
