Tool use does not work with OpenAI compatible endpoint running Qwen3-coder #8557
Hi, I have a self-hosted OpenAI-compatible server (tabbyAPI) that runs Qwen3-coder. I set the provider to `openai`. Unfortunately, tool calling does not work with this setup. I suspect the cause is that OpenAI's tool-call template differs from Qwen's. Is there a way to tell Continue that my endpoint is running a specific supported model (as described in the docs)? Tool calling works with the real OpenAI endpoint and with Anthropic.
Replies: 1 comment 1 reply
**The Issue**

The issue you're experiencing with Qwen3-Coder:30B Instruct and tool calling through a self-hosted OpenAI-compatible endpoint is related to how Continue determines which tool-calling format to use.

**How Continue Handles Tool Calling**

According to the Agent Mode Model Setup documentation, Continue uses two approaches for tool calling:

1. **Native tool calling**: the OpenAI-style `tools` parameter, where the model returns structured `tool_calls` in its response.
2. **System message tools**: tool definitions are injected into the system prompt, and the model is instructed to emit structured XML output.

Continue automatically detects which approach to use based on the model's capabilities. However, when you configure `provider: openai` for your tabbyAPI endpoint, Continue assumes it's communicating with an actual OpenAI model and uses native tool calling.

**The Problem**

Qwen3-Coder:30B Instruct uses a different tool-calling template than OpenAI. When Continue sends requests in OpenAI's native format to your endpoint, Qwen3-Coder:30B Instruct doesn't understand the structure, causing tool calls to fail.

**Recommended Solution**

Configure your model with an explicitly empty capabilities list:

```yaml
models:
  - name: Qwen3-Coder
    provider: openai
    model: Qwen3-Coder:30B Instruct
    apiBase: http://your-tabbyapi-endpoint
    apiKey: your-key
    capabilities: [] # Explicitly remove the tool_use capability
```

Or alternatively, don't specify any capabilities at all, and Continue should fall back to system message tools:
```yaml
models:
  - name: Qwen3-Coder
    provider: openai
    model: Qwen3-Coder:30B Instruct
    apiBase: http://your-tabbyapi-endpoint
    apiKey: your-key
    # No capabilities specified - Continue will use system message tools
```

This configuration tells Continue to use the XML-based system message tools approach, which should work with Qwen3-Coder:30B Instruct, since it can follow instructions to generate structured XML output.

**Alternative Approach**

If the above doesn't work, you might also try setting the provider to …

**Note on Configuration**

The valid model capabilities in Continue are `tool_use` and `image_input`.
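To make the difference concrete: Qwen's chat template trains the model to emit tool invocations as plain text wrapped in `<tool_call>` tags containing JSON, rather than as the structured `tool_calls` field an OpenAI client expects. The exact format Continue's system-message-tools mode uses internally isn't shown here; the sketch below just illustrates, under that assumption about Qwen's output convention, how such text-embedded calls can be recovered (`extract_tool_calls` and the sample text are hypothetical):

```python
import json
import re

# Qwen-family models wrap tool invocations in <tool_call> tags containing a
# JSON object, emitted as plain text in the completion. This extracts those
# calls from raw model output; the tag name and JSON shape follow Qwen's
# documented chat-template convention, not Continue's internals.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(completion: str) -> list[dict]:
    """Return each {"name": ..., "arguments": {...}} object found in the text."""
    calls = []
    for match in TOOL_CALL_RE.finditer(completion):
        try:
            calls.append(json.loads(match.group(1)))
        except json.JSONDecodeError:
            continue  # skip malformed blocks rather than failing the whole parse
    return calls

sample = (
    "I'll read that file for you.\n"
    '<tool_call>\n{"name": "read_file", "arguments": {"path": "src/main.rs"}}\n</tool_call>'
)
print(extract_tool_calls(sample))
# [{'name': 'read_file', 'arguments': {'path': 'src/main.rs'}}]
```

A client doing this kind of parsing doesn't need the server to support the `tools` parameter at all, which is why the system-message approach is the safer fallback here.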
For more details, see the Model Capabilities documentation.

Would you be able to test this configuration and report back whether it resolves the tool-calling issue?
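For reference, this is roughly the shape of request a client makes when it uses *native* tool calling: a standard OpenAI Chat Completions payload with a `tools` array of JSON Schema function definitions. If the endpoint's chat template doesn't translate this into the model's own format, the model never sees the tool definitions in a form it was trained on. The helper and tool definition below are illustrative, not Continue's actual tool set:

```python
import json

# A minimal native tool-calling request in the OpenAI Chat Completions shape.
# "read_file" here is an example tool definition, not a real Continue tool.
def build_native_request(model: str, user_message: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "read_file",
                    "description": "Read a file from the workspace",
                    "parameters": {
                        "type": "object",
                        "properties": {"path": {"type": "string"}},
                        "required": ["path"],
                    },
                },
            }
        ],
    }

payload = build_native_request("Qwen3-Coder", "Open src/main.rs")
print(json.dumps(payload, indent=2))
```

Sending a payload like this directly to your tabbyAPI endpoint (e.g. with curl) and checking whether the response contains a structured `tool_calls` field is a quick way to confirm whether native tool calling is supported at all.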