In some cases, I want to send a fixed string from the backend (e.g., when a user exceeds the chat rate limit). I tried to figure out whether there is an easy way to construct such a response:

```ts
export async function POST({ locals, request, params }) {
  const user = await locals.optionalUser();
  if (isRateLimited(user)) {
    // FIXME: what to return here?
  }
  const { messages } = await request.json() as {
    messages: { role: "user" | "assistant", content: string }[]
  };
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages: [
      {
        role: "system",
        content: "<my-prompt>.",
      },
      ...messages,
    ]
  });
  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}
```
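As a hedged aside (not from the thread itself): when a request is rejected outright, a plain `Response` with a 429 status may be enough, since `StreamingTextResponse` is only needed when tokens are actually streamed. The `rateLimitResponse` helper name below is invented for this sketch:

```typescript
// Hypothetical helper (name invented for this sketch): answer a
// rate-limited request with a plain 429 text response.
function rateLimitResponse(message: string): Response {
  return new Response(message, {
    status: 429,
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```

Whether the client-side chat hook surfaces a non-streamed error body nicely depends on the SDK version, so treat this as one option rather than the answer.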
Replies: 2 comments
I figured out I can do this:

```ts
const getStream = (text: string) => {
  const encoder = new TextEncoder();
  const encodedChunk = encoder.encode(text);
  return new ReadableStream({
    start(controller) {
      controller.enqueue(encodedChunk);
      controller.close();
    }
  });
};
```

Let me know if there is a better way to send errors.
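To see the helper above round-trip end to end, here is a self-contained sketch (the `getStream` helper is repeated from the reply so it runs standalone; `readAll` is a demo-only name, not part of any SDK):

```typescript
// Repeated from the reply above: build a one-shot text stream.
const getStream = (text: string): ReadableStream<Uint8Array> => {
  const encoder = new TextEncoder();
  const encodedChunk = encoder.encode(text);
  return new ReadableStream({
    start(controller) {
      controller.enqueue(encodedChunk);
      controller.close();
    }
  });
};

// Demo-only helper: drain a ReadableStream back into a string.
async function readAll(stream: ReadableStream<Uint8Array>): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let out = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    out += decoder.decode(value, { stream: true });
  }
  return out + decoder.decode();
}
```

In the route from the question, the stream would presumably be returned as something like `new StreamingTextResponse(getStream("Rate limit exceeded"))`, though the exact wrapping depends on the `ai` package version in use.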
This discussion was automatically locked because it has not been updated in over 30 days. If you still have questions about this topic, please ask us at community.vercel.com/ai-sdk