Replies: 1 comment
Seems to be fixed, but not sure? I still have these issues with various tool-capable models.
I'm using the Continue extension with a locally served Ollama model (qwen3:14b) for code assistance.
Ever since the most recent update, I've been experiencing an issue: when I accept a code suggestion from the assistant (e.g., remove all comments from code), the LLM's "think" process text is being injected directly into my editor along with the code.
In some cases, this action also causes existing, unrelated code to be deleted.
It seems like the extension might be incorrectly parsing the model's full stream output, including metadata or the chain-of-thought, instead of just the final code block.
Has anyone else encountered this behavior?
Any insights on what might be causing this or a potential fix would be greatly appreciated.
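If it helps illustrate what I mean by "parsing the full stream output": here is a minimal sketch (my own function name and regexes, not Continue's actual internals) of the post-processing I'd expect before an edit is applied. It assumes the model wraps its reasoning in `<think>...</think>` tags, as qwen3 does by default, and keeps only the final fenced code block.

```typescript
// Sketch only: filter a model's raw output before applying it as an edit.
// Assumes reasoning is wrapped in <think>...</think> tags (qwen3 default).

function extractFinalCode(raw: string): string {
  // Drop chain-of-thought segments; [\s\S] lets the match span newlines.
  const withoutThink = raw.replace(/<think>[\s\S]*?<\/think>/g, "");

  // Collect fenced code blocks (``` with an optional language tag).
  const blocks = [...withoutThink.matchAll(/```[\w-]*\n([\s\S]*?)```/g)];

  // Prefer the last code block; fall back to the stripped text.
  return blocks.length > 0
    ? blocks[blocks.length - 1][1].trimEnd()
    : withoutThink.trim();
}

// Example: the reasoning text currently leaking into my editor
// would be removed, leaving only the code.
const sample =
  '<think>The user wants comments removed...</think>\n```python\nprint("hi")\n```';
console.log(extractFinalCode(sample)); // -> print("hi")
```

This is just to show the behavior I expected; I don't know where in Continue the equivalent step lives, or whether the recent update changed it.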