Summary
On the `/create` customizer route, the first `shuffle` / randomize request from the chat works fine. On the second user turn that asks for a randomize, the `randomize` tool card in the chat gets stuck at "Randomizing…" and the agent appears to loop (never returns a final message, stop button stays visible).
Repro
- Navigate to `/create`.
- In the theme assistant chat, send `shuffle` (or any prompt that triggers `randomize`). ✅ Works — the card transitions to "Randomized" and the agent replies with a bullet summary of what changed.
- Send `shuffle` again in the same thread. ❌ The card shows "Randomizing…" indefinitely; the agent never produces a text reply.
What's been tried
- Tightened the system prompt to require "call `randomize` EXACTLY ONCE per user request". gpt-5.4 still re-calls.
- Added a 2-second time-window guard in `useRandomizeCoAgent` (`apps/ui/src/routes/create/agent-bridge/use-randomize-coagent.ts`) — insufficient: gpt-5.4's per-call latency exceeds 2s in practice, so the window misses subsequent loop calls.
- Replaced the time window with a turn-scoped flag that resets when `useCopilotChat().isLoading` transitions true → false (see commit 6936b94). The flag is set on the first successful call of the run; any re-call in the same run returns an "already done" tool result. Also wrapped the handler in `try/catch` so unhandled exceptions still return a tool result rather than leaving the card stuck.
- User report: this fix did not resolve the issue — the second-turn loop / stuck card still reproduces.
Hypotheses to investigate
- The handler may never actually be invoked on the second turn (the card staying at "Randomizing…" means the status never leaves `inProgress`), which would mean the fix is moot. Add a `console.log` at the top of the handler to confirm whether it's even entered on the second turn.
- The tool-call event plumbing may be racing with the re-render that `setParams` triggers on the first call, dropping the tool result for the follow-up call. Worth checking whether `useCopilotAction`'s registration is being invalidated between the call site and result delivery.
- The `isLoading` signal from `useCopilotChat()` may not be reliable here (multi-agent / LangGraph agent) — if it never goes `false` between turns, the flag never resets; if it flaps mid-run, the flag resets too early. Log its transitions to verify.
- Server-side (`apps/agent/main.py`, gpt-5.4 via LangGraph) may not be sending the follow-up tool result back to the client in a shape the frontend can consume. Check the AG-UI event stream for the second turn.
Next steps
- Add instrumentation (handler entry log, `isLoading` transition log, tool-call event log) to determine which of the above is actually happening.
- If it's the unreliable-`isLoading` hypothesis, fall back to tracking user-message count via `useCopilotChat().visibleMessages` and reset the guard when the user-message count increments.
- Consider enforcing "max 1 randomize per turn" on the server (LangGraph middleware) as a belt-and-suspenders fix, since client-side guards rely on cooperative LLM behavior.
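The message-count fallback could look like the sketch below. The `Message` shape is simplified and assumed (the real `visibleMessages` entries carry more fields); `createMessageCountGuard` is a hypothetical name.

```typescript
// Fallback guard keyed to the number of user messages rather than
// isLoading edges: randomize may run at most once per user turn.
type Message = { role: "user" | "assistant" | "tool" };

function createMessageCountGuard() {
  let ranAtUserCount = -1; // user-message count at which we last ran

  return {
    // True if randomize may run given the current visible messages.
    shouldRun(messages: Message[]): boolean {
      const userCount = messages.filter((m) => m.role === "user").length;
      if (userCount === ranAtUserCount) return false; // already ran this turn
      ranAtUserCount = userCount; // new user turn observed: allow one run
      return true;
    },
  };
}
```

This sidesteps `isLoading` entirely: the guard only resets when a genuinely new user message appears, so mid-run flapping can't reopen it.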
References
- Tool file: `apps/ui/src/routes/create/agent-bridge/use-randomize-coagent.ts`
- Agent config: `apps/agent/main.py`
- Tool card renderer: `apps/ui/src/routes/create/agent-bridge/tool-call-card.tsx`
- Branch: `shadify-port` (commits 36011f6, 6936b94)