
/create customizer: randomize tool loops on second 'shuffle' turn, card stuck at "Randomizing…" #5

@GeneralJerel

Description


Summary

On the /create customizer route, the first shuffle / randomize request from the chat works fine. On the second user turn that asks for a randomize, the randomize tool card in the chat gets stuck at "Randomizing…" and the agent appears to loop (never returns a final message, stop button stays visible).

Repro

  1. Navigate to /create.
  2. In the theme assistant chat, send shuffle (or any prompt that triggers randomize). ✅ Works — the card transitions to "Randomized" and the agent replies with a bullet summary of what changed.
  3. Send shuffle again in the same thread. ❌ The card shows "Randomizing…" indefinitely; the agent never produces a text reply.

What's been tried

  • Tightened the system prompt to require "call randomize EXACTLY ONCE per user request" — gpt-5.4 still re-calls the tool regardless.
  • Added a 2-second time-window guard in useRandomizeCoAgent (apps/ui/src/routes/create/agent-bridge/use-randomize-coagent.ts) — insufficient: gpt-5.4's per-call latency exceeds 2s in practice, so the window misses subsequent loop calls.
  • Replaced the time window with a turn-scoped flag that resets when useCopilotChat().isLoading transitions true → false (see commit 6936b94). The flag is set on the first successful call of the run; any re-call in the same run returns an "already done" tool result. Also wrapped the handler in try/catch so unhandled exceptions still return a tool result rather than leaving the card stuck.
  • User report: this fix did not resolve the issue — the second-turn loop / stuck card still reproduces.
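For reference, the turn-scoped guard described above can be sketched framework-free (the actual code in commit 6936b94 lives inside the React hook; the names and tool-result shape here are illustrative, not the real types):

```typescript
// Hypothetical, framework-free sketch of the turn-scoped guard: the flag is set
// on the first successful call of a run, re-calls in the same run short-circuit,
// and the run-end transition of isLoading (true -> false) resets it.
type ToolResult = { status: "randomized" | "already-done" | "error" };

function createTurnGuard(randomize: () => void) {
  let doneThisRun = false;

  return {
    // Mirror of the isLoading subscription: a true -> false transition ends the run.
    onLoadingChange(prev: boolean, next: boolean): void {
      if (prev && !next) doneThisRun = false;
    },
    // Mirror of the tool handler: always returns a tool result, so the card
    // should never be left stuck at "Randomizing…" by an unhandled exception.
    handle(): ToolResult {
      if (doneThisRun) return { status: "already-done" };
      try {
        randomize();
        doneThisRun = true;
        return { status: "randomized" };
      } catch {
        return { status: "error" };
      }
    },
  };
}
```

If this logic is correct in isolation, the bug would have to be in how it is wired into the hook lifecycle (or in whether the handler runs at all on the second turn — see hypotheses below).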

Hypotheses to investigate

  1. The handler may never actually be invoked on the second turn (card stays at "Randomizing…" = status never leaves inProgress), which would mean the fix is moot. Add a console.log at the top of the handler to confirm whether it's even entered on the second turn.
  2. The tool-call event plumbing may be racing with the re-render that setParams triggers on the first call, dropping the tool result for the follow-up call. Worth checking whether useCopilotAction's registration is being invalidated between the call site and result delivery.
  3. The isLoading signal from useCopilotChat() may not be reliable here (multi-agent / LangGraph agent) — if it never goes false between turns, the flag never resets; if it flaps mid-run, the flag resets too early. Log its transitions to verify.
  4. Server-side (apps/agent/main.py, gpt-5.4 via LangGraph) may not be sending the follow-up tool result back to the client in a shape the frontend can consume. Check the AG-UI event stream for the second turn.
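For hypothesis (3), a minimal transition recorder makes the logging concrete — feed it every observed isLoading value from a render effect and inspect the history to see whether the signal never drops or flaps mid-run (illustrative sketch, not code from the repo):

```typescript
// Records boolean transitions of a signal such as isLoading so its behavior
// across turns can be verified (e.g. does true -> false ever fire between turns?).
function createTransitionLog() {
  let prev: boolean | undefined;
  const transitions: string[] = [];
  return {
    observe(value: boolean): void {
      if (prev !== undefined && prev !== value) {
        transitions.push(`${prev} -> ${value}`);
      }
      prev = value;
    },
    history(): string[] {
      return [...transitions];
    },
  };
}
```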

Next steps

  • Add instrumentation (handler entry log, isLoading transition log, tool-call event log) to determine which of the above is actually happening.
  • If it's (3), fall back to tracking user-message count via useCopilotChat().visibleMessages and reset the guard when the user-message count increments.
  • Consider enforcing "max 1 randomize per turn" on the server (LangGraph middleware) as a belt-and-suspenders fix, since client-side guards rely on cooperative LLM behavior.
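The fallback in the second bullet can be sketched as follows — reset the guard when the user-message count grows rather than trusting isLoading transitions (the message shape here is a placeholder; the real type comes from useCopilotChat().visibleMessages):

```typescript
// Hypothetical guard reset keyed to user-message count instead of isLoading:
// a new user message means a new turn, which re-arms the single randomize.
type Message = { role: "user" | "assistant" | "tool" };

function createMessageCountGuard() {
  let lastUserCount = 0;
  let doneThisTurn = false;

  return {
    // Call on every render with the current visible messages.
    sync(messages: Message[]): void {
      const userCount = messages.filter((m) => m.role === "user").length;
      if (userCount > lastUserCount) {
        lastUserCount = userCount;
        doneThisTurn = false; // a new user turn began
      }
    },
    // Returns true if randomize should actually run for this call.
    tryRandomize(): boolean {
      if (doneThisTurn) return false;
      doneThisTurn = true;
      return true;
    },
  };
}
```

This avoids depending on loading-state semantics entirely, at the cost of assuming visibleMessages reliably reflects each new user turn.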

References

  • Tool file: apps/ui/src/routes/create/agent-bridge/use-randomize-coagent.ts
  • Agent config: apps/agent/main.py
  • Tool card renderer: apps/ui/src/routes/create/agent-bridge/tool-call-card.tsx
  • Branch: shadify-port (commits 36011f6, 6936b94)
