
security: call block_dangerous_modules during engine load #123

Open
antojoseph wants to merge 2 commits into master from security/f005-block-dangerous-modules

security: call block_dangerous_modules during engine load#123
antojoseph wants to merge 2 commits into
masterfrom
security/f005-block-dangerous-modules

Conversation

@antojoseph

Summary

  • block_dangerous_modules() was defined in provider/src/inference.rs and covered by tests, but was never called in the production load() path
  • socket, subprocess, ctypes, multiprocessing, and faulthandler were importable by provider-controlled vllm_mlx code at inference time despite dangerous_modules_blocked: true being reported to the coordinator
  • Fix: call both lock_python_path and block_dangerous_modules at the top of load_vllm_mlx, in the same GIL scope that runs the model load

Impact

A malicious provider serving private text could import socket or subprocess to exfiltrate prompts or establish outbound connections from inside the hardened process.

Test plan

  • Existing test_block_dangerous_modules_blocks_imports still passes
  • cargo test --features python passes
  • Verify a provider can load a model successfully (no regression in happy path)

block_dangerous_modules was defined but never called in the production
load path — only in tests. socket, subprocess, ctypes and multiprocessing
were importable by provider-controlled vllm_mlx code despite the reported
dangerous_modules_blocked capability being true.

Call both lock_python_path and block_dangerous_modules at the top of
load_vllm_mlx so both security layers are active in the same GIL scope
that runs the model load.
@vercel

vercel Bot commented May 4, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project                      Status          Updated (UTC)
d-inference                  Ready (Preview) May 4, 2026 3:41am
d-inference-console-ui-dev   Ready (Preview) May 4, 2026 3:41am
d-inference-landing          Ready (Preview) May 4, 2026 3:41am


@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7b4facbcf8


Comment thread on provider/src/inference.rs (Outdated):

    // here is safe (idempotent) and ensures the blocker is installed in the
    // same interpreter state that will execute the model load.
    Self::lock_python_path(py)?;
    Self::block_dangerous_modules(py)?;
P1 Badge Delay dangerous-module blocking until model is loaded

Calling block_dangerous_modules() before _load_model() can break cold-start model loads when the model is not already cached: this function explicitly documents that load() may download weights if needed, and the new blocker removes/blocks socket and subprocess imports that Python download stacks rely on. In that scenario, load_vllm_mlx will fail before the engine initializes, so first-run providers cannot come up unless models are pre-downloaded.


_load_model may download weights from HuggingFace on cold starts, which
requires socket and urllib. Blocking those modules before load caused
first-run failures when the model was not pre-cached.

sys.path is still locked before the load; the dangerous-module blocker
is now installed after the engine is initialized and before inference
requests are served.
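The ordering fix above can be sketched with stubbed call recording. The function names mirror the PR's identifiers, but the bodies here are placeholders, not the repository's code; the point is only the sequence the follow-up commit establishes.

```python
calls = []


def lock_python_path():
    # Still first: pin sys.path before any provider-controlled code runs.
    calls.append("lock_python_path")


def _load_model():
    # May download weights on a cold start, so socket/urllib must still work here.
    calls.append("_load_model")
    return "engine"


def block_dangerous_modules():
    # Installed only once the engine is up, before inference requests are served.
    calls.append("block_dangerous_modules")


def load_vllm_mlx():
    lock_python_path()
    engine = _load_model()
    block_dangerous_modules()
    return engine
```

Running `load_vllm_mlx()` records the three steps in the order the review asked for: path lock, then the (possibly network-bound) load, then the module blocker.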
