@@ -32,7 +32,7 @@ To enable CodeGate, enable **Use custom base URL** and enter
You need an [OpenAI API](https://openai.com/api/) account to use this provider.
To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
- [configuration parameter](../how-to/configure.md).
+ [configuration parameter](../how-to/configure.md) when you launch CodeGate.
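As a sketch of what "when you launch CodeGate" could look like, the parameter might be passed as an environment variable at container start. The image tag, port mapping, and endpoint URL below are assumptions for illustration, not taken from this page; check the CodeGate configuration docs for the exact invocation.

```shell
# Hypothetical example: point CodeGate at a custom OpenAI-compatible endpoint.
# Image name, ports, and URL are assumptions -- verify against the install docs.
docker run -d -p 8989:8989 \
  -e CODEGATE_OPENAI_URL=https://my-openai-compatible-host/v1 \
  ghcr.io/stacklok/codegate:latest
```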

In the Cline settings, choose **OpenAI Compatible** as your provider, enter your
OpenAI API key, and set your preferred model (example: `gpt-4o-mini`).
@@ -80,9 +80,14 @@ locally using `ollama pull`.
<TabItem value="lmstudio" label="LM Studio">

You need LM Studio installed on your local system with a server running from LM
- Studio's Developer tab to use this provider. See the
+ Studio's **Developer** tab to use this provider. See the
[LM Studio docs](https://lmstudio.ai/docs/api/server) for more information.

+ Cline uses large prompts, so you will likely need to increase the context length
+ for the model you've loaded in LM Studio. In the **Developer** tab, select the model
+ you'll use with CodeGate, open the **Load** tab on the right and increase the
+ **Context Length** to _at least_ 18k (18,432) tokens, then reload the model.
+
<ThemedImage
  alt='LM Studio dev server'
  sources={{
@@ -96,7 +101,8 @@ In the Cline settings, choose LM Studio as your provider and set the **Base
URL** to `http://localhost:8989/openai`.

Set the **Model ID** to `lm_studio/<MODEL_NAME>`, where `<MODEL_NAME>` is the
- name of the model you're serving through LM Studio (shown in the Developer tab).
+ name of the model you're serving through LM Studio (shown in the Developer tab),
+ for example `lm_studio/qwen2.5-coder-7b-instruct`.

<LocalModelRecommendation />