
Fix GPT-5.x reasoning-item orphaning in history compaction + persist foundry resource name#307

Open
breedx wants to merge 3 commits into mpfaffenberger:main from breedx:feature/foundry-5.5-fix

Conversation

breedx commented Apr 27, 2026

Summary

Two related fixes for GPT-5.x reasoning models on the OpenAI Responses API and Azure AI Foundry, both surfaced while using foundry-gpt-5.5.

1. fix(compaction): preserve ThinkingParts carrying encrypted reasoning state

This is the important one — it affects any GPT-5 / o-series model on the Responses API (OpenAI, Azure Foundry OpenAI, ChatGPT OAuth), not just Foundry.

On the OpenAI Responses API, reasoning items arrive as ThinkingPart(content='', signature='<encrypted>', id='rs_...') — empty text, but the signature carries the encrypted reasoning state that must round-trip back so the paired msg_... item in the next turn is not orphaned.

`_strip_empty_thinking_parts` in `code_puppy/agents/_compaction.py` was dropping any ThinkingPart with empty content, including those carrying encrypted reasoning. On the next turn the API rejected with:

'msg_...' of type 'message' was provided without its required 'reasoning' item: 'rs_...'

Now we only strip when content is empty and there is no signature, id, or provider_details — anything carrying provider state survives compaction.

2. feat(azure_foundry): persist resource name across sessions

/foundry-setup re-prompted for the resource name every run unless the user exported ANTHROPIC_FOUNDRY_RESOURCE. The wizard now saves the chosen name to puppy.cfg under azure_foundry_resource, so the next run pre-populates it.

Resolution order is preserved for env-var users:
ANTHROPIC_FOUNDRY_RESOURCE (env) → puppy.cfg → None

(Happy to split this into a separate PR if you'd prefer to land #1 on its own.)

Test plan

  • uv run pytest tests/agents/test_compaction.py — all pass (includes new test_preserves_empty_thinking_with_signature)
  • uv run pytest tests/plugins/test_azure_foundry.py — all pass (includes new tests for cfg fallback, env precedence, and persistence)
  • uv run ruff format --check — clean
  • uv run ruff check — clean
  • Manual smoke test with foundry-gpt-5.5 — the original msg_.../rs_... error no longer fires on multi-turn conversations

breedx added 2 commits April 27, 2026 16:46
…state

_strip_empty_thinking_parts was dropping any ThinkingPart with empty
content, including the empty-content-but-signed reasoning items that
the OpenAI Responses API returns for GPT-5. Dropping those orphans the
paired msg_... TextPart on the next turn and the API rejects with:

  'msg_...' of type 'message' was provided without its required
  'reasoning' item: 'rs_...'

Now we only strip when content is empty AND there is no signature, id,
or provider_details — anything carrying provider state survives.
/foundry-setup previously prompted for the resource name every run
unless the user exported ANTHROPIC_FOUNDRY_RESOURCE. Save the chosen
name to puppy.cfg under `azure_foundry_resource` so the next run
pre-populates it.

Resolution order is preserved for env-var users:
  ANTHROPIC_FOUNDRY_RESOURCE (env) -> puppy.cfg -> None
breedx commented Apr 27, 2026

Screenshot from 2026-04-27 12-36-27

CI uses an unpinned `pip install ruff` and picked up 0.15, which wraps
the negated isinstance lambda differently. Apply the new formatting.
