Conversation


@aibozo aibozo commented Nov 6, 2025

Summary:

  • Adds environments/mcp_fetch/, an MCPEnv-based environment exposing the fetch MCP tool.
  • Default mode auto-starts a deterministic local fixture server covering redirects, auth, headers, query params, gzipped/large payloads, hash/truncation checks, pointer/workflow chains, and rubric-graded summaries.
  • Agents must reply with ANSWER: <value>; a host allow-list, timeouts, and max-bytes caps enforce safety (see the sketch after this list).
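
To make the safety bullet concrete, here is a minimal sketch of how a host allow-list and a byte cap could be enforced before a request goes out. The function names, default port, and limits are illustrative assumptions, not the environment's actual API.

from urllib.parse import urlparse

# Hypothetical defaults mirroring the description above; the real values live in the env config.
DEFAULT_ALLOWED_HOSTS = {"127.0.0.1:8731"}  # i.e. 127.0.0.1:<fixture_port>
DEFAULT_MAX_BYTES = 64_000

def check_request_allowed(url: str, allowed_hosts: set[str] = DEFAULT_ALLOWED_HOSTS) -> None:
    """Reject any URL whose host:port is not on the allow-list."""
    host = urlparse(url).netloc  # includes the port, e.g. "127.0.0.1:8731"
    if host not in allowed_hosts:
        raise PermissionError(f"host {host!r} is not on the allow-list")

def truncate_body(body: bytes, max_bytes: int = DEFAULT_MAX_BYTES) -> tuple[bytes, bool]:
    """Cap the response body and report whether truncation happened."""
    if len(body) > max_bytes:
        return body[:max_bytes], True
    return body, False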

Design:

  • FetchEnv(MCPEnv) wires together load_environment(...), the dataset builder, the rubric, and simulator lifecycle management.
  • fetch tool schema: inputs (url, method, headers, params, timeout_s, max_bytes); outputs (status, headers, body_text/body_json, hash, final_url, content_type, bytes, truncated, etc.). The host allow-list defaults to 127.0.0.1:<fixture_port>, with allow_online overrides. A sample call is sketched after this list.
  • Verifiers handle exact-match, JSON paths, header/status checks, hash digit sums, char counts, and JudgeRubric delegation.
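
As a rough illustration of the schema above, a single tool call and its result might look like the following. The field names come from the bullet; the endpoint, port, and values are hypothetical.

# Hypothetical request against the local fixture server; field names follow the schema
# described above, values are illustrative only.
request = {
    "url": "http://127.0.0.1:8731/json/ledger",
    "method": "GET",
    "headers": {"Accept": "application/json"},
    "params": {"page": "2"},
    "timeout_s": 5.0,
    "max_bytes": 32_000,
}

# A plausible shape for the tool's result, based on the output fields listed above.
result = {
    "status": 200,
    "headers": {"content-type": "application/json"},
    "body_json": {"balance": 412.5, "entries": 7},
    "hash": "sha256:<hex digest>",
    "final_url": "http://127.0.0.1:8731/json/ledger?page=2",
    "content_type": "application/json",
    "bytes": 1874,
    "truncated": False,
}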

Data:

  • tasks/qa.jsonl has 84 tasks (≥20 enforced) spanning planner/workflow combos, pointer lookups, ledger math, header challenges, query summaries, truncation/hash probes, etc. A hypothetical record is sketched below.
  • Offline fixtures (fixtures/html|json|text/*) plus helper scripts keep runs deterministic.
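
For concreteness, a qa.jsonl record and the minimum-count guard could look roughly like this; the field names and the loader are assumptions for illustration, not the actual dataset builder.

import json

# Hypothetical record shape for tasks/qa.jsonl; the real field names may differ.
example_task = {
    "id": "pointer-chain-03",
    "question": "Follow the pointer at /text/start.txt and report the final token.",
    "answer": "maple",
    "kind": "exact_match",
}

def load_tasks(path: str = "tasks/qa.jsonl") -> list[dict]:
    """Read one JSON object per line and enforce the minimum task count."""
    with open(path, encoding="utf-8") as f:
        tasks = [json.loads(line) for line in f if line.strip()]
    assert len(tasks) >= 20, "dataset builder enforces a minimum of 20 tasks"
    return tasks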

Evaluation:

uv run pytest tests/environments/test_mcp_fetch.py -q
uv run ruff check environments/mcp_fetch

  • The deterministic rubric defaults to normalized exact-match accuracy; JudgeRubric handles summary tasks via tasks/judge_rubrics.yaml. A minimal scoring sketch follows.
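
A minimal sketch of the normalized exact-match scoring described above, assuming the reply ends with an ANSWER: <value> line. The helper names and normalization rules are illustrative, not the rubric's exact implementation.

import re

def extract_answer(completion: str) -> str | None:
    """Pull the value from the last 'ANSWER: <value>' line, if any."""
    matches = re.findall(r"ANSWER:\s*(.+)", completion)
    return matches[-1].strip() if matches else None

def normalize(text: str) -> str:
    """Lowercase, collapse whitespace, and drop surrounding quotes/punctuation."""
    return re.sub(r"\s+", " ", text).strip().strip("\"'.").lower()

def exact_match_reward(completion: str, target: str) -> float:
    """1.0 if the normalized extracted answer equals the normalized target, else 0.0."""
    answer = extract_answer(completion)
    if answer is None:
        return 0.0
    return 1.0 if normalize(answer) == normalize(target) else 0.0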

Difficulty calibration:

Model          Correct/Total   Accuracy
gpt-4.1-mini   46 / 84         54.8%
gpt-5          68 / 84         81.0%

Notes:

  • README documents offline vs Hub commands, PYTHONPATH requirement, and calibration table.
  • .gitignore blocks nested .venv/ directories; committed envs were removed.

Run gh pr create --title "MCP: Fetch — deterministic MCP env with single fetch tool" --body-file --base main --head aibozo:main --repo PrimeIntellect-ai/prime-environments once authenticated, or paste the text into the compare page noted above.


CLAassistant commented Nov 6, 2025

CLA assistant check
All committers have signed the CLA.


aibozo (author) commented Nov 9, 2025

I screwed up the commit format, sorry. If you want me to close and create another pull request with the proper format I can, but all of the appropriate things have been done. The new and old test suites pass, and I reviewed the code (I can remove the extra bits, but they seemed helpful). Let me know what you want me to do; the formatting issue won't happen again, I'm new at this.

ronaldnetawat pushed a commit to ronaldnetawat/verifiers that referenced this pull request Nov 13, 2025
* verifiers integration v0.0

* verifiers integration v0.0

* training finishes on reversetext

* reverse_text trains 0.13 -> 0.70

* pin git branch in pyproject

* removed ac in config

* remove DataConfig

* removed print debug

* configurable masks

* rm dataconfig instance

* pinned vf commit, removed extra deps

* registry cleanup, reworked gsm8k, removed default env

* bumped verifiers commit, fixed training divergence vs refactor branch

* vf simple-math

* simple-math edits

* debug

* simple_math train matching reference

* rename math tasks, port to verifiers

* Update configs

* Fix imports

* Add back envs that were accidentally deleted during rebase

* Fix sampling.n missing and unused vars

* Remove redundant log

* Update README with new task names

* Fix hendrycks math config path in README

* Update W&B project names

* Fix wrong config key

* Add missing import

* Fix sampling.n not defined

* Fix missing config key

* Do not tokenize in eval

* Fix typos

* Add 1B and 7B hendrycks and intellect math

* Remove comment

* fix tests and configs (PrimeIntellect-ai#548)

* add tests orch

* fix configs

* fix configs

* fix tests

* fix pydantic config

* fix tetstp

* Dispatch subconfigs via tmp toml file

* Add correct GPU placement for int math run

* More consistent var names

* Set project and model also in orchestrator

* Update readme (PrimeIntellect-ai#550)

* fix readme

* fix scripts

* Parse single-turn prompt and completion tokens/logprobs from vLLM directly via mock process_env_results function

* Update verifiers rev

* Fix style

* Do not filter if field is missing

---------

Co-authored-by: Mika Senghaas <[email protected]>
Co-authored-by: samsja <[email protected]>
