
API proxy does not forward to custom OPENAI_BASE_URL / ANTHROPIC_BASE_URL endpoints (e.g., internal LLM routers) #20590

@Rubyj

Description

When using the AWF sandbox with --enable-api-proxy, the API proxy overrides OPENAI_BASE_URL inside the agent container to point to its internal proxy (http://172.30.0.30:10000/v1). The proxy then forwards all requests to the hardcoded api.openai.com (or api.anthropic.com), ignoring any custom OPENAI_BASE_URL or ANTHROPIC_BASE_URL set by the user.

This makes it impossible to use AWF with internal/self-hosted LLM endpoints, such as corporate LLM routers or proxies that provide an OpenAI-compatible API.

Reproduction

  1. Configure a workflow with a custom OPENAI_BASE_URL pointing to an internal endpoint:

     engine:
       id: codex
       model: gpt-5.3-codex
       env:
         OPENAI_BASE_URL: "https://llm-router.internal.example.com/v1"
         OPENAI_API_KEY: ${{ secrets.LLM_ROUTER_KEY }}

  2. Compile and run the workflow. The compiled lock file includes --enable-api-proxy automatically.

  3. The AWF API proxy:

     • Overrides OPENAI_BASE_URL to http://172.30.0.30:10000/v1 inside the agent container
     • Strips OPENAI_API_KEY from the agent container (credential isolation)
     • Forwards all requests to api.openai.com instead of the custom endpoint

  4. Result: 401 Unauthorized, because the internal API key is sent to api.openai.com, which doesn't recognize it.
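The failure mode in the steps above can be modeled in a few lines of Python. This is an illustrative sketch of the observed behavior, not AWF's actual code; the proxy address and upstream host are taken from the logs in this issue:

```python
# Illustrative model of the observed AWF API proxy behavior (not actual AWF code).

PROXY_URL = "http://172.30.0.30:10000/v1"   # internal proxy address from the logs
HARDCODED_UPSTREAM = "api.openai.com"       # where the proxy actually forwards

def prepare_agent_env(user_env):
    """What the sandbox appears to do to the agent container's environment."""
    env = dict(user_env)
    env["OPENAI_BASE_URL"] = PROXY_URL      # user's custom base URL is overridden
    env.pop("OPENAI_API_KEY", None)         # credential isolation strips the key
    return env

def proxy_upstream(user_env):
    """Where the proxy forwards, regardless of the user's OPENAI_BASE_URL."""
    return HARDCODED_UPSTREAM

user_env = {
    "OPENAI_BASE_URL": "https://llm-router.internal.example.com/v1",
    "OPENAI_API_KEY": "internal-router-key",
}
agent_env = prepare_agent_env(user_env)
# The internal key ends up being presented to api.openai.com -> 401 Unauthorized.
print(agent_env["OPENAI_BASE_URL"])  # http://172.30.0.30:10000/v1
print(proxy_upstream(user_env))      # api.openai.com
```

The custom base URL is lost at both points: the agent only ever sees the proxy address, and the proxy only ever forwards to the hardcoded host.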

Evidence from logs

[health-check] ✓ OpenAI/Codex credentials NOT in agent environment (correct)
[health-check] Testing connectivity to OpenAI API proxy at http://172.30.0.30:10000/v1...
...
Request completed method=POST url=http://172.30.0.30:10000/v1/responses status=401 Unauthorized
headers={... "Domain=api.openai.com" ...}
Incorrect API key provided: sk-****

The Domain=api.openai.com in the response cookies confirms the proxy is forwarding to OpenAI's API, not our internal endpoint.

Expected behavior

The API proxy should respect the user-configured OPENAI_BASE_URL / ANTHROPIC_BASE_URL and forward requests to that endpoint instead of the hardcoded defaults. This would allow AWF's credential isolation and firewall features to work with:

  • Corporate/internal LLM routers
  • Azure OpenAI endpoints
  • Self-hosted OpenAI-compatible APIs (e.g., vLLM, TGI)
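For example, with such support an Azure OpenAI configuration might look like the following (the resource name is hypothetical; Azure exposes an OpenAI-compatible endpoint under the resource's own domain):

```yaml
engine:
  id: codex
  model: gpt-5.3-codex
  env:
    # "my-resource" is a hypothetical Azure OpenAI resource name
    OPENAI_BASE_URL: "https://my-resource.openai.azure.com/openai/v1"
    OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_KEY }}
```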

Suggested implementation

Add a --openai-api-target <url> flag (similar to the existing --copilot-api-target <host> flag) that configures the API proxy's upstream for OpenAI requests, and an analogous --anthropic-api-target <url> flag for Anthropic.
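A minimal sketch of the upstream-resolution logic such flags could introduce (the flag names come from the suggestion above; the function name is hypothetical and the fallbacks are the current hardcoded hosts):

```python
# Hypothetical sketch of flag-driven upstream resolution for the API proxy.
DEFAULT_UPSTREAMS = {
    "openai": "https://api.openai.com",
    "anthropic": "https://api.anthropic.com",
}

def resolve_upstream(provider, api_target=None):
    """Return the upstream URL for a provider, preferring an explicit
    --openai-api-target / --anthropic-api-target value over the default."""
    if api_target:
        return api_target.rstrip("/")
    return DEFAULT_UPSTREAMS[provider]

# With the flag set, the proxy forwards to the internal router;
# without it, behavior is unchanged.
print(resolve_upstream("openai", "https://llm-router.internal.example.com/v1"))
print(resolve_upstream("openai"))  # https://api.openai.com
```

This keeps credential isolation intact: the agent still only sees the proxy address, but the proxy now forwards to the configured endpoint.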

The gh-aw compiler could automatically set these flags when it detects a custom OPENAI_BASE_URL or ANTHROPIC_BASE_URL in engine.env.
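The detection step could be as simple as the following sketch (illustrative only, not real gh-aw compiler code; the flag names are the ones suggested above):

```python
# Hypothetical sketch of how the gh-aw compiler could map engine.env
# entries to API proxy flags when compiling the lock file.
ENV_TO_FLAG = {
    "OPENAI_BASE_URL": "--openai-api-target",
    "ANTHROPIC_BASE_URL": "--anthropic-api-target",
}

def proxy_flags_for(engine_env):
    """Return extra proxy flags derived from custom base URLs in engine.env."""
    flags = []
    for env_var, flag in ENV_TO_FLAG.items():
        if env_var in engine_env:
            flags.extend([flag, engine_env[env_var]])
    return flags

print(proxy_flags_for({"OPENAI_BASE_URL": "https://llm-router.internal.example.com/v1"}))
# ['--openai-api-target', 'https://llm-router.internal.example.com/v1']
```

Workflows without a custom base URL would produce no extra flags, so existing behavior is preserved.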

Current workaround

The only workaround is to disable the AWF sandbox entirely (sandbox.agent: false + strict: false + threat-detection: false), which removes all firewall protection and credential isolation. This is not ideal for production use.
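For reference, the workaround configuration looks roughly like this (key names as described above; exact placement may vary by gh-aw version):

```yaml
# Disables the AWF sandbox entirely -- removes firewall protection
# and credential isolation, so avoid for production use.
sandbox:
  agent: false
strict: false
threat-detection: false
```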

Documentation reference

The API proxy limitations are documented at docs/api-proxy-sidecar.md:

  • Only supports OpenAI and Anthropic APIs
  • No support for Azure OpenAI endpoints

Environment

  • gh-aw CLI: v0.53.6
  • AWF: v0.23.0
  • Engine: codex (OpenAI Codex CLI)
  • Runner: Self-hosted (Amazon Linux 2023)
