Cannot set up an Azure OpenAI model #6998
-
Relevant environment info
- OS: Windows
- Continue: latest
- IDE: VSCode
- Model: gpt-4o
- config.json:

Description
How do I set up an Azure OpenAI model? What is Engine, and where can I grab this value in the Azure portal / AI Studio? What is API Base, and where can I find that value in the Azure portal / AI Studio? Please create documentation that covers not only what must be inserted into the config file but WHERE to find the required values. Thanks.

To reproduce

Log output
No response
Replies: 9 comments
-
I agree that the documentation for the Azure OpenAI service could be clearer. Here's how I configured mine:

Configuration in config.json

```json
{
  "models": [
    {
      "title": "Azure OpenAI",
      "provider": "azure",
      "model": "<model>",
      "engine": "<deployment_id>",
      "apiBase": "<azure_open_ai_endpoint>",
      "apiType": "azure",
      "apiKey": "<azure_open_ai_key>"
    }
  ]
}
```

Explanation of Configuration Parameters
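In case it helps, here is where each value typically lives on the Azure side (portal labels shift over time, so treat this as a best-effort sketch):

- apiBase: the endpoint URL of your Azure OpenAI resource, shown under "Keys and Endpoint" in the Azure portal, e.g. https://<resource-name>.openai.azure.com
- engine: the deployment name you chose when deploying the model under "Deployments" in Azure OpenAI Studio (your name for the deployment, not the model name)
- apiKey: KEY 1 or KEY 2 from the same "Keys and Endpoint" page
- model: the underlying model of that deployment, e.g. gpt-4o

Filled in with hypothetical values (my-resource and my-gpt4o-deployment are placeholders, not real names):

```json
{
  "models": [
    {
      "title": "Azure OpenAI",
      "provider": "azure",
      "model": "gpt-4o",
      "engine": "my-gpt4o-deployment",
      "apiBase": "https://my-resource.openai.azure.com",
      "apiType": "azure",
      "apiKey": "<azure_open_ai_key>"
    }
  ]
}
```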
-
I agree with @geekloper. That's also how I configured it. However, I looked up the current ModelDescription schema, and I think "engine", "apiVersion", and "apiType" may be missing from it. If I follow the model description alone, I cannot come up with a working Azure model. See: Lines 714 to 726 in 0ef443c
-
Thanks @geekloper. BTW: after setting up tab autocompletion with llama3.1 running on local ollama, I have a nice, smart, and cheap Copilot :-) See the sketch below.
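For anyone who wants to replicate that setup, a minimal sketch (this assumes Continue's tabAutocompleteModel option and a model already pulled locally with `ollama pull llama3.1`; adjust names to your environment):

```json
{
  "tabAutocompleteModel": {
    "title": "Llama 3.1",
    "provider": "ollama",
    "model": "llama3.1"
  }
}
```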
-
Is it possible to use a self-hosted LLM? Let's say I have a GPT-4o model hosted in my own cloud service (Azure)?
-
Yes. Until today's stable release, we ran it with our API base in [...]

Update: That was fixed within a couple of weeks, so that the config correctly used [...]
-
Additional quirk: if you configure anything in the "model" field - like gpt-3-turbo, gpt4, gpt4o, gpt-4, gpt-4o, o1-mini - the request is processed fine, even if your deployment only has a gpt-3 model. If you put nothing in "model" or leave the field out entirely, it just hangs when attempting to get a response. So I think the "model" field is used internally to define the model's capabilities rather than being a requirement for the web query.
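To make the quirk concrete, here is a hypothetical config (my-gpt35-deployment and my-resource are made-up names): the request is routed by "engine" to a gpt-3.5 deployment, while the mismatched "model" value appears to only inform Continue's internal capability detection:

```json
{
  "models": [
    {
      "title": "Azure (deliberately mismatched)",
      "provider": "azure",
      "model": "gpt-4o",
      "engine": "my-gpt35-deployment",
      "apiBase": "https://my-resource.openai.azure.com",
      "apiType": "azure",
      "apiKey": "<azure_open_ai_key>"
    }
  ]
}
```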
-
I'm trying to set up Azure OpenAI o3-mini with that same payload. It works just fine with gpt-4 but not with o3-mini. Any comment on how to fix this?
-
To make Azure OpenAI o3-mini work, I guess someone needs to update [...]
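If this is the same issue the o-series models hit elsewhere (an assumption, not confirmed in this thread), two things likely have to line up: the provider code would need to send max_completion_tokens instead of max_tokens for o3-mini, and the config would need a recent enough apiVersion. A hypothetical entry for testing the latter (the deployment name and apiVersion value are guesses to verify against Azure's docs):

```json
{
  "title": "Azure o3-mini",
  "provider": "azure",
  "model": "o3-mini",
  "engine": "<o3-mini-deployment-name>",
  "apiBase": "https://<resource-name>.openai.azure.com",
  "apiType": "azure",
  "apiVersion": "2024-12-01-preview",
  "apiKey": "<azure_open_ai_key>"
}
```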
-
@d4nielmeyer thanks for linking that PR, I believe that is what's needed to get o3 working for Azure. I'm not sure how soon we'll be able to prioritize this, so if anyone has interest in opening up a PR, that would be much appreciated!