Cursor-level AI assistance for Sublime Text. I mean it.
Works with any OpenAI-like API: llama.cpp server, Ollama, or any other third-party LLM hosting. Claude API support is coming soon.
Note
The 5.0.0 release is around the corner! Check out the release notes for details.
- Chat mode powered by whatever model you'd like.
- OpenAI o3-mini and o1 support.
- llama.cpp server, Ollama, and any other OpenAI-compatible API.
- Dedicated chat history and assistant settings per project.
- Ability to send whole files, or parts of them, as additional context.
- Phantoms: get non-disruptive inline answers from the model right in the view.
- Markdown syntax with syntax highlighting for code blocks (Chat mode only).
- Server-Sent Events (SSE) streaming support.
- Various status bar info: model name, mode, sent/received tokens.
- Proxy support.
- Sublime Text 4
- llama.cpp or Ollama installed, OR
- An API key for a remote LLM provider, e.g. OpenAI
- Anthropic API key [coming soon].
- Install the Package Control plugin if you haven't done so before.
- Open the command palette and type `Package Control: Install Package`.
- Type `OpenAI` and press `Enter`.
Note
Highly recommended complementary packages:
ChatGPT mode works the following way:
- Select some text, or even whole tabs, to include it in the request.
- Run either the `OpenAI: Chat Model Select` or the `OpenAI: Chat Model Select With Tabs` command.
- Input a request in the input window, if any.
- The model prints its response in the output panel by default, but you can switch to a separate tab with `OpenAI: Open in Tab`.
- To get an existing chat in a new window, run `OpenAI: Refresh Chat`.
- To reset the history, the `OpenAI: Reset Chat History` command comes to the rescue.
Note
For convenience, it is suggested that you bind at least `OpenAI: New Message`, `OpenAI: Chat Model Select` and `OpenAI: Show output panel`; you can do that in the plugin settings.
You can keep a separate chat history and assistant settings for a given project by appending the following snippet to its settings:
```json
{
    "settings": {
        "ai_assistant": {
            "cache_prefix": "/absolute/path/to/project/"
        }
    }
}
```
You can add a few things to your request:
- multi-line selection within a single file
- multiple files within a single View Group
To do the former, just select something within the active view and initiate the request without switching to another tab; the selection will be added to the request as a preceding message (each selection chunk is separated by a new line).
To append whole file(s) to the request, `super+click` their tabs so they all become visible in a single view group, then run the `OpenAI: Add Sheets to Context` command. Sheets can be deselected with the same command.
You can check the number of added sheets in the status bar and in the preview section when calling the `OpenAI: Chat Model Select` command.
Image handling can be invoked with the `OpenAI: Handle Image` command.
It expects an absolute path to an image to be selected in a buffer or stored in the clipboard when the command is called (something like `/Users/username/Documents/Project/image.png`). In addition, a prompt can be passed via the input panel to process the image with special treatment. Only `png` and `jpg` images are supported.
Note
Currently the plugin expects only a link, or a list of links separated by new lines, to be selected in the buffer or stored in the clipboard.
Phantom is an overlay UI placed inline in the editor view (see the picture below). It doesn't affect the content of the view.
- [optional] Select some text to pass as context to work with.
- Pick `Phantom` as the output mode in the `OpenAI: Chat Model Select` quick panel.
- You can apply actions to the LLM response; they're quite self-descriptive and follow the behavior of the deprecated buffer commands.
- You can hit `ctrl+c` to stop the prompt, same as in `panel` mode.
- Replace the `"url"` setting of a given model to point to whatever host your server is running on (e.g. `http://localhost:8080/v1/chat/completions`).
- Provide a `"token"` if your provider requires one.
- Tweak `"chat_model"` to a model of your choice and you're set.
Note
You can set both `url` and `token` either globally or per assistant instance, so you can freely switch between closed-source and open-source models within a single session.
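As an illustration, a mixed global/per-assistant setup might look like the sketch below. The exact key names (`assistants`, `name`, etc.) and model names are assumptions here — check them against the default settings file shipped with your installed version:

```json
{
    // Global token, used unless an assistant overrides it.
    "token": "sk-your-token",
    "assistants": [
        {
            "name": "Remote OpenAI",
            "chat_model": "gpt-4o-mini",
            "url": "https://api.openai.com/v1/chat/completions"
        },
        {
            // Local llama.cpp server; typically no token is required.
            "name": "Local llama.cpp",
            "chat_model": "any-model-you-loaded",
            "url": "http://localhost:8080/v1/chat/completions",
            "token": ""
        }
    ]
}
```

With two such entries in place, switching models mid-session is just a matter of picking the other assistant in `OpenAI: Chat Model Select`.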
The OpenAI Completion plugin has a settings file where you can set your OpenAI API key. Most providers require this to work. To set your API key, open the settings via `Preferences` -> `Package Settings` -> `OpenAI` -> `Settings` and paste your API key into the `token` property, as follows:
```json
{
    "token": "sk-your-token",
}
```
To disable advertisements, add an `"advertisement": false` line to each assistant setting where you want them disabled.
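For example, a single assistant entry with ads disabled might look like this (all keys other than `advertisement` are illustrative assumptions):

```json
{
    "name": "My quiet assistant",
    "chat_model": "gpt-4o-mini",
    "advertisement": false
}
```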
You can bind keys for a given plugin command under `Preferences` -> `Package Settings` -> `OpenAI` -> `Key Bindings`. For example, you can bind the "New Message" command, including active tabs as context, like this:
```json
{
    "keys": [ "super+k", "super+'" ],
    "command": "openai", // or "openai_panel"
    "args": { "files_included": true }
},
```
You can set it up by overriding the `proxy` property in the `OpenAI completion` settings as follows:
"proxy": {
"address": "127.0.0.1", // required
"port": 9898, // required
"username": "account",
"password": "sOmEpAsSwOrD"
}
Warning
All selected code will be sent to the OpenAI servers (unless you are using a custom API provider) for processing, so make sure you have all the necessary permissions to do so.
Note
Dedicated to GPT-3.5, the one who initially wrote about 80% of this back then. It felt like pure magic!