16 changes: 10 additions & 6 deletions ai-prompts/chat-name.txt
@@ -1,9 +1,13 @@
 You are an AI assistant on the OpenOps platform, where users interact about FinOps, cloud providers (AWS, Azure, GCP), OpenOps features, and workflow automation.
 
-Your task:
-Given the following conversation, suggest a short, descriptive name (max five words) that best summarizes the main topic, question, or action discussed in this chat.
+Task:
+Analyze the provided conversation and attempt to produce a concise chat name describing the main topic, question, or action.
 
-Guidelines:
-- The name should be specific (not generic like "Chat" or "Conversation"), and reflect the user's intent (e.g., "AWS Cost Optimization", "Create Budget Workflow", "OpenOps Integration Help").
-- Limit the name to five words or less.
-- Respond with only the chat name.
+Rules:
+- If you can confidently produce a specific, helpful name (not generic like "Chat" or "Conversation"), set `isGenerated` to true and provide `name`.
+- The `name` must be five words or fewer.
+- If there is insufficient information, the content is unclear, or you cannot determine a good name, set `isGenerated` to false.
+
+Notes:
+- Keep the name short and specific.
+- Avoid quotes, punctuation-heavy outputs, or trailing spaces in the name.
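For illustration only (not part of the diff): with the updated prompt the model is expected to return a structured object rather than a bare name. A minimal sketch of the two possible outcomes, assuming the GeneratedChatName shape ({ name: string | null; isGenerated: boolean }) introduced later in this PR:

// Hypothetical structured outputs matching the updated prompt.
const confidentResult = {
  name: 'AWS Cost Optimization', // specific, five words or fewer
  isGenerated: true,
};

const unclearConversationResult = {
  name: null, // not enough information to produce a good name
  isGenerated: false,
};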
@@ -1,5 +1,5 @@
 import { api } from '@/app/lib/api';
-import { ListChatsResponse } from '@openops/shared';
+import { GeneratedChatName, ListChatsResponse } from '@openops/shared';
 
 export const aiAssistantChatHistoryApi = {
   list() {
@@ -9,7 +9,7 @@ export const aiAssistantChatHistoryApi = {
     return api.delete<void>(`/v1/ai/conversation/${chatId}`);
   },
   generateName(chatId: string) {
-    return api.post<{ chatName: string }>('/v1/ai/conversation/chat-name', {
+    return api.post<GeneratedChatName>('/v1/ai/conversation/chat-name', {
      chatId,
    });
  },
@@ -243,7 +243,10 @@ export const useAssistantChat = ({
    if (messagesRef.current.length >= MIN_MESSAGES_BEFORE_NAME_GENERATION) {
      try {
        hasAttemptedNameGenerationRef.current[chatId] = true;
-        await aiAssistantChatHistoryApi.generateName(chatId);
+        const result = await aiAssistantChatHistoryApi.generateName(chatId);
+        if (!result.isGenerated) {
+          hasAttemptedNameGenerationRef.current[chatId] = false;
+        }
        qc.invalidateQueries({ queryKey: [QueryKeys.assistantHistory] });
      } catch (error) {
        console.error('Failed to generate chat name', error);
45 changes: 31 additions & 14 deletions packages/server/api/src/app/ai/chat/ai-chat.service.ts
@@ -2,6 +2,7 @@ import {
  AiAuth,
  getAiModelFromConnection,
  getAiProviderLanguageModel,
+  isLLMTelemetryEnabled,
} from '@openops/common';
import {
  AppSystemProp,
@@ -14,14 +15,19 @@
  ApplicationError,
  CustomAuthConnectionValue,
  ErrorCode,
+  GeneratedChatName,
  removeConnectionBrackets,
} from '@openops/shared';
-import { LanguageModel, ModelMessage, UIMessage, generateText } from 'ai';
+import { generateObject, LanguageModel, ModelMessage, UIMessage } from 'ai';
+import { z } from 'zod';
import { appConnectionService } from '../../app-connection/app-connection-service/app-connection-service';
import { aiConfigService } from '../config/ai-config.service';
import { loadPrompt } from './prompts.service';
import { Conversation } from './types';
-import { mergeToolResultsIntoMessages } from './utils';
+import {
+  mergeToolResultsIntoMessages,
+  sanitizeMessagesForChatName,
+} from './utils';

const chatContextKey = (
  chatId: string,
@@ -84,28 +90,39 @@ export const generateChatIdForMCP = (params: {
});
};

+const generatedChatNameSchema = z.object({
+  name: z
+    .string()
+    .max(100)
+    .nullable()
+    .describe('Conversation name or null if it was not generated'),
+  isGenerated: z.boolean().describe('Whether the name was generated or not'),
Review comment (Contributor): @rSnapkoOpenOps why do you need isGenerated? You can deduce that from checking if (name). The less logic the LLM needs to do, the better.

Reply (Collaborator, PR author): Yes, I've tried that, but the AI still adds dummy names.

+});

export async function generateChatName(
  messages: ModelMessage[],
  projectId: string,
-): Promise<string> {
-  const { languageModel } = await getLLMConfig(projectId);
+): Promise<GeneratedChatName> {
+  const { languageModel, aiConfig } = await getLLMConfig(projectId);
  const systemPrompt = await loadPrompt('chat-name.txt');
  if (!systemPrompt.trim()) {
    throw new Error('Failed to load prompt to generate the chat name.');
  }
-  const prompt: ModelMessage[] = [
-    {
-      role: 'system',
-      content: systemPrompt,
-    } as const,
-    ...messages,
-  ];
-  const response = await generateText({
+
+  const sanitizedMessages: ModelMessage[] =
+    sanitizeMessagesForChatName(messages);
+
+  const result = await generateObject({
Review comment (alexandrudanpop, Contributor, Dec 4, 2025): As LLMs don't always generate correct structured output, you can make it a bit more reliable using the approach below:

experimental_repairText: async ({ text }) => repairText(text),

Review comment (Contributor): Also, you could wrap it in a try/catch, return { isGenerated: false } in case there is an error, and log the error.

    model: languageModel,
-    messages: prompt,
+    system: systemPrompt,
+    messages: sanitizedMessages,
+    schema: generatedChatNameSchema,
+    ...aiConfig.modelSettings,
+    experimental_telemetry: { isEnabled: isLLMTelemetryEnabled() },
+    maxRetries: 2,
  });
-  return response.text.trim();
+
+  return result.object;
}

export const updateChatName = async (
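For illustration only (not part of the diff): a minimal sketch of how the two review suggestions above could be combined around generateObject. It reuses the imports and helpers already present in ai-chat.service.ts; repairText is a hypothetical helper (e.g. stripping code fences or trailing commas before re-parsing), the function name generateChatNameSafely is invented for the sketch, and GeneratedChatName is assumed to be the { name, isGenerated } shape described by the schema.

// Hypothetical sketch only: experimental_repairText plus a try/catch fallback.
export async function generateChatNameSafely(
  messages: ModelMessage[],
  projectId: string,
): Promise<GeneratedChatName> {
  try {
    const { languageModel, aiConfig } = await getLLMConfig(projectId);
    const systemPrompt = await loadPrompt('chat-name.txt');
    const result = await generateObject({
      model: languageModel,
      system: systemPrompt,
      messages: sanitizeMessagesForChatName(messages),
      schema: generatedChatNameSchema,
      ...aiConfig.modelSettings,
      maxRetries: 2,
      // Best-effort repair of malformed structured output before giving up.
      experimental_repairText: async ({ text }) => repairText(text),
    });
    return result.object;
  } catch (error) {
    // Log and fall back to "no name generated" instead of failing the request
    // (swap console.error for the project's logger as appropriate).
    console.error('Failed to generate chat name', error);
    return { name: null, isGenerated: false };
  }
}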
29 changes: 13 additions & 16 deletions packages/server/api/src/app/ai/chat/ai-mcp-chat.controller.ts
@@ -22,7 +22,7 @@ import {
  UpdateChatModelRequest,
  UpdateChatModelResponse,
} from '@openops/shared';
-import { ModelMessage, UserModelMessage } from 'ai';
+import { ModelMessage } from 'ai';
import { FastifyReply } from 'fastify';
import { StatusCodes } from 'http-status-codes';
import {
@@ -247,27 +247,24 @@ export const aiMCPChatController: FastifyPluginAsyncTypebox = async (app) => {
    const { chatHistory } = await getConversation(chatId, userId, projectId);

    if (chatHistory.length === 0) {
-      return await reply.code(200).send({ chatName: DEFAULT_CHAT_NAME });
+      return await reply
+        .code(200)
+        .send({ name: DEFAULT_CHAT_NAME, isGenerated: false });
    }

-    const userMessages = chatHistory.filter(
-      (msg): msg is UserModelMessage =>
-        msg &&
-        typeof msg === 'object' &&
-        'role' in msg &&
-        msg.role === 'user',
-    );
+    const generated = await generateChatName(chatHistory, projectId);
Review comment (Collaborator, PR author): The filtering is removed; having only user messages for generation is not enough.

-    if (userMessages.length === 0) {
-      return await reply.code(200).send({ chatName: DEFAULT_CHAT_NAME });
+    if (!generated.isGenerated) {
+      return await reply
+        .code(200)
+        .send({ name: DEFAULT_CHAT_NAME, isGenerated: false });
    }

-    const rawChatName = await generateChatName(userMessages, projectId);
-    const chatName = rawChatName.trim() || DEFAULT_CHAT_NAME;
-
-    await updateChatName(chatId, userId, projectId, chatName);
+    if (generated.isGenerated && generated.name) {
+      await updateChatName(chatId, userId, projectId, generated.name);
+    }

-    return await reply.code(200).send({ chatName });
+    return await reply.code(200).send(generated);
  } catch (error) {
    return handleError(error, reply, 'generate chat name');
  }
51 changes: 51 additions & 0 deletions packages/server/api/src/app/ai/chat/utils.ts
@@ -52,6 +52,57 @@ export function mergeToolResultsIntoMessages(
  return uiMessages;
}

+/**
+ * Sanitize chat history for secondary tasks like naming/summarization.
+ * - keeps only 'user' and 'assistant' roles
+ * - strips tool calls and non-text parts
+ * - merges multiple text parts into a single string with newlines
+ */
+export function sanitizeMessagesForChatName(
+  messages: ModelMessage[],
+): ModelMessage[] {
+  const isSupportedRole = (m: ModelMessage) =>
+    m.role === 'user' || m.role === 'assistant';
+
+  const extractText = (content: ModelMessage['content']): string | null => {
+    if (typeof content === 'string') {
+      const text = content.trim();
+      return text || null;
+    }
+
+    if (Array.isArray(content)) {
+      const merged = (content as Array<unknown>)
+        .reduce<string[]>((acc, part) => {
+          if (
+            part &&
+            typeof part === 'object' &&
+            'type' in (part as Record<string, unknown>)
+          ) {
+            const p = part as { type?: string; text?: string };
+            if (p.type === 'text' && typeof p.text === 'string') {
+              acc.push(p.text);
+            }
+          }
+          return acc;
+        }, [])
+        .join('\n')
+        .trim();
+
+      return merged || null;
+    }
+
+    return null;
+  };
+
+  return messages
+    .filter(isSupportedRole)
+    .map((m) => {
+      const text = extractText(m.content);
+      return text ? ({ role: m.role, content: text } as ModelMessage) : null;
+    })
+    .filter((m): m is ModelMessage => m !== null);
+}
+
function isToolMessage(msg: ModelMessage): boolean {
  return (
    msg.role === 'tool' && Array.isArray(msg.content) && msg.content.length > 0
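For illustration only (not part of the diff): a small sketch of what sanitizeMessagesForChatName does to a mixed history. The message shapes below are simplified assumptions based on the ai SDK's ModelMessage type.

// Hypothetical input: plain text, multi-part assistant content, and a tool message.
const history: ModelMessage[] = [
  { role: 'user', content: 'How do I cut my AWS spend?' },
  {
    role: 'assistant',
    content: [
      { type: 'text', text: 'Here are a few options.' },
      { type: 'text', text: 'Start with rightsizing and savings plans.' },
    ],
  },
  { role: 'tool', content: [] }, // dropped entirely by the sanitizer
];

const sanitized = sanitizeMessagesForChatName(history);
// Expected shape (text parts merged with a newline, tool message removed):
// [
//   { role: 'user', content: 'How do I cut my AWS spend?' },
//   { role: 'assistant', content: 'Here are a few options.\nStart with rightsizing and savings plans.' },
// ]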