
v0.12.92 — Fix multi-turn chat for reasoning models (continue.dev #135)


@1bcMax 1bcMax released this 31 Mar 21:33
· 29 commits to main since this release
0da922b

Bug Fix: Existing chat always fails in continue.dev (#135)

Root cause

moonshot/kimi-k2.5 (primary MEDIUM-tier model in blockrun/auto) is a reasoning model that requires reasoning_content on all assistant messages in multi-turn history — not just tool-call messages. When continue.dev sent an existing chat, the plain-text assistant message from the previous turn was missing reasoning_content, causing a 400 from the model.
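A minimal sketch of the shape of the fix, assuming a simplified message type (the real normalizeMessagesForThinking in clawrouter is more involved; only the function name comes from these notes):

```typescript
// Simplified chat message shape; reasoning models reject assistant
// messages in multi-turn history that lack reasoning_content.
type ChatMessage = {
  role: "system" | "user" | "assistant" | "tool";
  content: string | null;
  reasoning_content?: string;
  tool_calls?: unknown[];
};

// When targeting a reasoning model, ensure every assistant message —
// not just tool-call ones — carries a reasoning_content field.
function normalizeMessagesForThinking(
  messages: ChatMessage[],
  isReasoningModel: boolean
): ChatMessage[] {
  if (!isReasoningModel) return messages;
  return messages.map((m) =>
    m.role === "assistant" && m.reasoning_content === undefined
      ? { ...m, reasoning_content: "" }
      : m
  );
}
```

New chats have no assistant history to normalize, which is why only existing chats hit the 400.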

Since that 400 didn't match any PROVIDER_ERROR_PATTERNS, isProviderError was false and the fallback loop broke on the first attempt. All models failed → SSE error sent → the OpenAI SDK in continue.dev threw "Unexpected error".
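A sketch of why the fallback loop gave up, under the assumption that PROVIDER_ERROR_PATTERNS is a list of regexes matched against the upstream error message (the names come from these notes; the exact patterns are illustrative):

```typescript
// Illustrative patterns; the reasoning_content one is the safety net
// added in this release.
const PROVIDER_ERROR_PATTERNS: RegExp[] = [
  /rate.?limit/i,
  /overloaded/i,
  /reasoning_content.*missing/i,
];

// Only errors that look provider-side trigger fallback to the next
// model; anything else is treated as a client error and aborts.
function isProviderError(message: string): boolean {
  return PROVIDER_ERROR_PATTERNS.some((p) => p.test(message));
}
```

Before this release the reasoning_content 400 matched none of the patterns, so the router stopped after the first model instead of trying the rest.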

New chats (no assistant history) were unaffected — only existing chats broke.

Fixes

  • normalizeMessagesForThinking — now adds reasoning_content: "" to all assistant messages (not just tool-call ones) when targeting a reasoning model
  • SSE error format — error events now always use the {"error":{...}} OpenAI wrapper; previously raw upstream JSON was forwarded as-is, hiding the real error message
  • PROVIDER_ERROR_PATTERNS — added reasoning_content.*missing as a safety net for proper fallback
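The SSE error fix can be sketched like this — a minimal, assumed implementation (toOpenAIErrorEvent is a hypothetical name; the notes only state the output shape):

```typescript
// Normalize any upstream error body into an OpenAI-style SSE error
// event, so clients always see {"error":{...}} regardless of what the
// provider returned.
function toOpenAIErrorEvent(upstreamBody: string): string {
  let message = upstreamBody;
  try {
    const parsed = JSON.parse(upstreamBody);
    // Unwrap common upstream shapes without assuming a single format.
    message = parsed?.error?.message ?? parsed?.message ?? upstreamBody;
  } catch {
    // Non-JSON upstream body: forward it verbatim as the message text.
  }
  const payload = { error: { message, type: "upstream_error" } };
  return `data: ${JSON.stringify(payload)}\n\n`;
}
```

With the wrapper guaranteed, the OpenAI SDK in continue.dev can surface the real upstream message instead of "Unexpected error".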

Verification

  • E2E: 3-turn SSE streaming test passed (turn 2 was the broken case)
  • Unit: 7 new regression tests for normalizeMessagesForThinking
  • Full suite: 364/364 passing

Update

npx @blockrun/clawrouter@latest