Using llama.cpp not working #8601
My config is set up as described in https://docs.continue.dev/customize/model-providers/more/llamacpp, but no request is made to the server. Anyone got an idea how to fix this?
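For reference, that docs page describes a setup along these lines (sketch only; the model name and port are placeholders, and the exact field names should be checked against the docs):

```yaml
# Sketch of a Continue config.yaml model entry for a local llama.cpp server,
# following the pattern from the docs linked above (placeholder values).
models:
  - name: Llama CPP
    provider: llama.cpp
    model: MODEL_NAME                 # placeholder for the loaded model
    apiBase: http://localhost:8080    # default llama.cpp server port
```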
Answered by rtpt-romankarwacik, Nov 6, 2025:
Ok, using `provider: openai` instead solved this. This should probably be updated in the documentation.
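In other words, a model entry roughly like the following, pointing the generic OpenAI-compatible provider at the local llama.cpp server (a sketch; the model name and apiBase are assumptions and may differ per setup):

```yaml
# Sketch of the workaround: use the OpenAI-compatible provider and point it
# at the local llama.cpp server's OpenAI-style endpoint (values are assumptions).
models:
  - name: Llama CPP (OpenAI-compatible)
    provider: openai
    model: MODEL_NAME                   # placeholder
    apiBase: http://localhost:8080/v1   # llama.cpp's OpenAI-compatible endpoint
```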