feat: Load dynamic model list from LM Studio #9558
ricardofiorani wants to merge 7 commits into Kilo-Org:main from
Conversation
sync to main
}
try {
    const lmstudioModels = await ModelCache.fetch("lmstudio", lmstudioFetchOptions).catch(() => ({}))
Should we log errors?
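One way to do that (a hypothetical sketch; the wrapper name and the use of `console.warn` rather than kilocode's real logger are assumptions) is to route every discovery call through a helper that logs the failure once but keeps the same empty-record fallback shape:

```typescript
// Hypothetical sketch: log model-discovery failures instead of silently
// swallowing them with .catch(() => ({})). The logger call is a placeholder.
type Model = { id: string }

async function fetchModelsLogged(
  provider: string,
  fetcher: () => Promise<Record<string, Model>>,
): Promise<Record<string, Model>> {
  try {
    return await fetcher()
  } catch (err) {
    // Same {} fallback as before, but with a trace for debugging.
    console.warn(`[models] discovery failed for ${provider}:`, err)
    return {}
  }
}
```

Callers keep their current shape (`Record<string, Model>`), so the change is drop-in at each `.catch(() => ({}))` site.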
if (Object.keys(apertisModels).length === 0) {
    ModelCache.refresh("apertis", apertisFetchOptions).catch(() => {})
}
We can't accept these changes without kilocode change markers and a changelog entry.
// LM Studio dynamic model discovery
if (lmstudioAllowed) {
    const lmstudioConfig = config.provider?.lmstudio?.options
This logic looks largely duplicated below; please generalize it if possible.
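One possible generalization (a sketch only; the helper name, the `Model` shape, and the replace-or-fallback policy are assumptions inferred from the PR description) is a single function that each provider block calls with its own allow flag and discovery function:

```typescript
// Hypothetical sketch of factoring out the duplicated per-provider blocks.
type Model = { id: string; name: string }

async function withDynamicModels(
  snapshot: Record<string, Model>,
  allowed: boolean,
  discover: () => Promise<Record<string, Model>>,
): Promise<Record<string, Model>> {
  if (!allowed) return snapshot
  try {
    const live = await discover()
    // Replace the snapshot only when discovery actually returned models.
    return Object.keys(live).length > 0 ? live : snapshot
  } catch {
    return snapshot // graceful fallback, mirroring the current behavior
  }
}
```

Each provider (LM Studio, Apertis, ...) would then differ only in the `discover` closure it passes in, so the replace/fallback policy lives in one place.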
return models

async function fetchApertisModels(options: any): Promise<Record<string, any>> {
    return fetchOpenAICompatibleModels({
WARNING: Refactor removes the Apertis API-key guard
Before this helper was generalized, fetchApertisModels() returned early when apiKey was missing. After this change we still call https://api.apertis.ai/v1/models without auth, and models.ts immediately schedules a second refresh() because the cached result is empty. That turns an unconfigured provider into repeated failing network traffic on every startup.
// LM Studio dynamic model discovery
if (lmstudioAllowed) {
    const lmstudioConfig = config.provider?.lmstudio?.options
    const lmstudioBaseURL = lmstudioConfig?.baseURL ?? "http://127.0.0.1:1234/v1"
WARNING: LMSTUDIO_BASE_URL can fetch models from one endpoint and send requests to another
ModelCache.fetch("lmstudio", ...) resolves baseURL from config, auth, and LMSTUDIO_BASE_URL, but lmstudioBaseURL here only reads config.provider?.lmstudio?.options. If a user sets only LMSTUDIO_BASE_URL, discovery will hit that URL while provider.api still stays on http://127.0.0.1:1234/v1, so the selected model can be listed from one server and invoked against another.
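A sketch of one fix (the helper name is hypothetical, and the config > env > default precedence is an assumption; what matters is that `ModelCache.fetch` and the provider wiring call the same resolver):

```typescript
// Sketch: resolve the LM Studio base URL in one shared helper so model
// discovery and request routing cannot diverge.
const DEFAULT_LMSTUDIO_BASE_URL = "http://127.0.0.1:1234/v1"

function resolveLmStudioBaseURL(
  configBaseURL: string | undefined,
  env: Record<string, string | undefined> = process.env,
): string {
  // Explicit config wins, then the LMSTUDIO_BASE_URL env var, then default.
  return configBaseURL ?? env.LMSTUDIO_BASE_URL ?? DEFAULT_LMSTUDIO_BASE_URL
}
```

With this, a user who sets only `LMSTUDIO_BASE_URL` gets the same endpoint for both listing and invoking models.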
Code Review Summary
Status: 2 issues found | Recommendation: address before merge
Other observations (not in diff): issues were also found in unchanged code that cannot receive inline comments.
Files reviewed: 3
Reviewed by gpt-5.4-2026-03-05 · 758,991 tokens
@marius-kilocode thank you for the review! (I'm also sorry it's a bit sloppy; in the rush to use this myself I completely disregarded coding styles and everything else in the rulebook.) I'm AFK due to sickness this week, so I'll probably update my PR next week. Sorry about that!

Best regards,
Context
LM Studio appeared as a provider in the models list but only showed static placeholder models from a snapshot instead of the user's actual local models. Users expected the model picker to reflect what was loaded in their local LM Studio instance.
Implementation
Added dynamic model discovery for LM Studio by:
- New `fetchOpenAICompatibleModels()` helper in `model-cache.ts` that queries any OpenAI-compatible `/models` endpoint. Uses `Bun.fetch` (global `fetch` was unreliable in the bundled binary context). Returns a `Record<string, Model>` with sensible defaults.
- Refactored `fetchApertisModels()` to reuse the new helper, reducing code duplication.
- Added `lmstudio` case to the `fetchModels()` dispatcher and auth resolution in `getAuthOptions()` supporting config, auth, and env sources (`LMSTUDIO_API_KEY`, `LMSTUDIO_BASE_URL`).
- Dynamic injection in `ModelsDev.get()`: when LM Studio is allowed by `enabled_providers`/`disabled_providers` config, attempts to fetch models from `http://127.0.0.1:1234/v1` (or a custom `baseURL`). On success, replaces the snapshot models; on failure, gracefully falls back to the snapshot.

Also fixed a pre-existing typecheck failure in `packages/app/src/custom-elements.d.ts` (broken symlink → reference directive).

Screenshots
The model picker now lists the locally loaded LM Studio models (`openai/gpt-oss-20b`, `qwen/qwen3-30b-a3b-2507`, `qwen/qwen3-coder-30b`).
How to Test
1. Build: `bun run --cwd packages/opencode build`
2. Run: `.\packages\opencode\dist\@kilocode\cli-windows-x64\bin\kilo.exe models lmstudio`
3. Custom `baseURL` in config: set `provider.lmstudio.options.baseURL` in `.kilocode/config.json`

Get in Touch
@ricardofiorani on the Kilo Code Discord