Bug Fix: Existing chat always fails in continue.dev (#135)
Root cause
`moonshot/kimi-k2.5` (the primary MEDIUM-tier model in `blockrun/auto`) is a reasoning model that requires `reasoning_content` on all assistant messages in multi-turn history — not just on tool-call messages. When continue.dev sent an existing chat, the plain-text assistant message from the previous turn was missing `reasoning_content`, causing a 400 from the model.
Since that 400 didn't match any `PROVIDER_ERROR_PATTERNS`, `isProviderError` was false and the fallback loop broke on the first attempt. All models failed → SSE error sent → the OpenAI SDK in continue.dev threw "Unexpected error".
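For concreteness, a resent existing chat looks roughly like this (field names follow the OpenAI chat format; the content strings are invented for illustration):

```typescript
// Illustration only: the kind of history continue.dev resends for an
// existing chat. The turn-1 assistant reply is plain text with no
// reasoning_content, which the reasoning model rejects with a 400.
const existingChatHistory = [
  { role: "user", content: "first question" },
  { role: "assistant", content: "first answer" }, // <-- missing reasoning_content
  { role: "user", content: "follow-up question" },
];
```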
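The fallback gate described above can be sketched as follows. `PROVIDER_ERROR_PATTERNS` is named in these notes, but the pre-existing patterns and the `isProviderError` helper shape are assumptions for illustration:

```typescript
// Sketch of the fallback gate, not the actual clawrouter source.
const PROVIDER_ERROR_PATTERNS: RegExp[] = [
  /rate.?limit/i,                // placeholder for a pre-existing pattern
  /model.*overloaded/i,          // placeholder for a pre-existing pattern
  /reasoning_content.*missing/i, // the safety-net pattern this fix adds
];

function isProviderError(upstreamBody: string): boolean {
  // Only errors matching a known pattern let the loop fall back to the
  // next model; the kimi 400 matched nothing before the fix, so the
  // loop stopped after the first attempt.
  return PROVIDER_ERROR_PATTERNS.some((re) => re.test(upstreamBody));
}
```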
New chats (no assistant history) were unaffected — only existing chats broke.
Fixes
- `normalizeMessagesForThinking` — now adds `reasoning_content: ""` to all assistant messages (not just tool-call ones) when targeting a reasoning model
- SSE error format — error events now always use the `{"error":{...}}` OpenAI wrapper; raw upstream JSON was previously forwarded as-is, hiding the real error message
- `PROVIDER_ERROR_PATTERNS` — added `reasoning_content.*missing` as a safety net for proper fallback
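A minimal sketch of the first fix, assuming a message shape like the OpenAI chat format (the type and the exact signature are assumptions, not the actual clawrouter code):

```typescript
type ChatMessage = {
  role: "system" | "user" | "assistant" | "tool";
  content: string | null;
  reasoning_content?: string;
  tool_calls?: unknown[];
};

// Before the fix only tool-call assistant messages were patched; now every
// assistant message gets reasoning_content when targeting a reasoning model.
function normalizeMessagesForThinking(
  messages: ChatMessage[],
  isReasoningModel: boolean
): ChatMessage[] {
  if (!isReasoningModel) return messages;
  return messages.map((m) =>
    m.role === "assistant" && m.reasoning_content === undefined
      ? { ...m, reasoning_content: "" }
      : m
  );
}
```

Patching with an empty string rather than dropping the message keeps the multi-turn history intact while satisfying the model's schema.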
Verification
- E2E: 3-turn SSE streaming test passed (turn 2 was the broken case)
- Unit: 7 new regression tests for `normalizeMessagesForThinking`
- Full suite: 364/364 passing
Update
```
npx @blockrun/clawrouter@latest
```