Repro
OpenAI-compatible upstreams occasionally return a 200 response whose body has no `choices` field at all (not just an empty array). Observed on `google/gemma-4-31b-it` via OpenRouter under concurrent load — ~25% of workers in a 16-worker batch hit this.
What happens
`fromOpenAICompletion` (and the streaming chunk handler) accesses `completion.choices[0]` before checking that `choices` exists:
```ts
// src/llm/openai-common.ts:189
export function fromOpenAICompletion(completion: ChatCompletion, knownToolNames?: string[]): LLMResponse {
  const choice = completion.choices[0] // throws TypeError when `choices` is undefined
  if (choice === undefined) {
    throw new Error('OpenAI returned a completion with no choices')
  }
  // ...
}
```
The intended `Error('OpenAI returned a completion with no choices')` is never reached; a raw `TypeError: Cannot read properties of undefined (reading '0')` is thrown first. `Agent.run()`'s catch then swallows the TypeError into `result.output`, so downstream callers only see `success: false, output: "Cannot read properties of undefined (reading '0')"` with no context.
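A rough caller-side view of that symptom (the `Agent` construction and `run()` arguments below are placeholders, not the package's documented API):

```ts
// Hypothetical caller; Agent construction details are elided/assumed.
import { Agent } from '@jackchen_me/open-multi-agent'

const agent = new Agent(/* usual model/provider config */)
const result = await agent.run('any prompt that hits the degenerate upstream response')

if (!result.success) {
  // Today:  "Cannot read properties of undefined (reading '0')"
  // Wanted: "OpenAI returned a completion with no choices"
  console.error(result.output)
}
```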
The same unguarded access exists in the streaming path (`src/llm/openai.ts:197`, `src/llm/azure-openai.ts:205`, `src/llm/copilot.ts:367`).
Minimal probe
Calling `fromOpenAICompletion({ id: '1', model: 'g', usage: { prompt_tokens: 1, completion_tokens: 1 } })` reproduces it (no `choices` key).
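A self-contained version of that probe (the import path is an assumption based on the file mentioned above; adjust it to wherever `fromOpenAICompletion` is actually exported from):

```ts
// The import path is a guess pointing at src/llm/openai-common.ts; adjust as needed.
import { fromOpenAICompletion } from './src/llm/openai-common'

// A 200-style body with usage but no `choices` key, mirroring the degenerate upstream response.
const body = { id: '1', model: 'g', usage: { prompt_tokens: 1, completion_tokens: 1 } }

try {
  fromOpenAICompletion(body as any)
} catch (err) {
  // Currently thrown: TypeError: Cannot read properties of undefined (reading '0')
  // Expected:         Error: OpenAI returned a completion with no choices
  console.error(err)
}
```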
Suggested fix
```ts
const choice = completion.choices?.[0]
if (choice === undefined) {
  throw new Error('OpenAI returned a completion with no choices')
}
```
…and the analogous `?.` guard in the streaming chunk handlers, roughly as sketched below.
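The sketch only shows the shape of the change; the real chunk handlers (`src/llm/openai.ts:197`, `src/llm/azure-openai.ts:205`, `src/llm/copilot.ts:367`) are structured differently, and the parameter type here is a structural stand-in rather than the SDK type.

```ts
// Sketch of the streaming-side guard; the parameter type is a stand-in for
// whatever chunk type the handlers actually use.
function handleChunk(chunk: { choices?: Array<{ delta?: unknown }> }): void {
  const choice = chunk.choices?.[0] // same optional chaining as the non-streaming fix
  if (choice === undefined) return  // skip (or throw descriptively) instead of crashing with a TypeError
  // ...existing delta handling on choice.delta...
}
```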
Environment
- `@jackchen_me/open-multi-agent` v1.3.1
- model `google/gemma-4-31b-it` via OpenRouter (`provider: 'openai'` + `baseURL: 'https://openrouter.ai/api/v1'`)
- `provider.quantizations: ['bf16', 'fp16']` pinned (the response was still missing `choices` with this filter applied, so it is not an artifact of fp4 quantization)
- non-streaming chat path (`stream: false`)
- 4/16 workers hit this in a single 16-worker batch run
Happy to PR the `?.` fix if useful.