Sublime Text first-class AI assistance plugin

Hey folks, I’d like to introduce the thing I’ve been working on for the last two years. As you might have guessed, it’s a Sublime Text AI assistance plugin (which, to be fair, is not that hard to achieve), and I claim that it’s not just the best-made and most feature-rich AI assistant in the ST ecosystem, but that it’s almost on par with Zed’s or even Cursor’s capabilities, all with zero funding so far (I’m aware that’s a disadvantage for now).

So, thanks to the o1 release, we now know that at least some people at OpenAI have good taste, because they use Sublime Text in their work. That was the final push that made me decide to promote my work. It’s not just me: judging by the 4.6k installs to date, plenty of people now consider it an indispensable feature, and I think it would be a real shame if the folks at OpenAI who actually develop in Sublime Text weren’t aware of the OpenAI Completion plugin for it.

So let me give a brief overview of the plugin’s most useful features (you can read more details in the readme, which I can’t attach to this post yet):

  • The first goal I’ve tried to achieve with this plugin is to make the AI assistant as deeply and seamlessly integrated into ST as possible. The philosophy behind this is: “If two great tools already exist, just integrate them well rather than reinventing the wheel.” Here’s what that means:

    • The chat with the assistant benefits from everything ST provides:
      • Full-text search, symbol list navigation, first-class markdown with support for injected code snippets, and much more.
    • Users can select any text or tab (referred to as “Sheet” in ST) and pass its content to the assistant as additional context.
    • Users can pass images to the assistant.
    • For any request, users can add an additional command or just skip it if the provided content is self-explanatory.
    • Users can use in-buffer overlays as a response UI (referred to as phantom mode) to manipulate code safely within a kind of sandbox.
  • The second goal was to provide as much flexibility as possible. Again, the philosophy behind it was: “a professional tool should not limit professionals in how they do their work; it should support them wherever they go.” So here we have:

    • a completely modifiable AI assistant configuration, with every useful setting you could imagine (response streaming toggle, different output modes, custom server URL support, customizable UI for assistant details in the status bar, chat layout customization, and much more); a trimmed example of an assistant entry follows this list
    • connection proxying (I believe I’ve been honored with a medal in a few authoritarian countries for this particular feature)
    • a UX that doesn’t force the user to stick with a single assistant throughout their session.
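Here’s that trimmed sketch of a single assistant entry from the settings (only a handful of the available options are shown; the readme covers the rest, and the role text is just an example):

{
    // one entry of the assistants list in the plugin settings
    "name": "Generate Code",    // label shown when picking an assistant
    "prompt_mode": "phantom",   // "phantom" for in-buffer overlays, "panel" for the output panel
    "chat_model": "gpt-4o",     // any model your API key or custom server exposes
    "assistant_role": "You are a senior PHP developer.", // system prompt
    "max_tokens": 4000,         // cap on the response size
    "stream": true              // the response streaming toggle
}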

I think I should stop here, because even now it can barely be considered a brief overview, and I have so much more to say. I hope I’ve convinced you to give this plugin a try if you’re an ST user. I truly believe that once you try it, you’ll never want to go back to the plain old ST experience.

PS: This plugin only came to life because of you folks, who released GPT-3.5 almost two years ago. Within just two days, despite never having written an ST plugin before, I managed to implement the first MVP.


It’s nice… Great work!

I’m mostly working in phantom mode.

[!NOTE] You suggested binding at least OpenAI: New Message, OpenAI: Chat Model Select, and OpenAI: Show output panel for the sake of convenience; you can do that in the plugin settings.

It would be great to expand the Howto with a description of the actions (or to provide a sample sublime-keymap file; I’ve sketched my guess below).

From my observations:
openai → OpenAI: New Message
openai_panel → OpenAI: Chat Model Select
show_panel → OpenAI: Show output panel
openai {"files_included": true} → OpenAI: New Message With Sheets
openai_panel {"files_included": true} → OpenAI: Chat Model Select With Tabs
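Based on those observations, a sample keymap might look something like this (the key combinations are arbitrary examples, and the args for show_panel are only my guess):

[
    // OpenAI: New Message
    { "keys": ["super+k", "super+m"], "command": "openai" },
    // OpenAI: New Message With Sheets
    { "keys": ["super+k", "super+i"], "command": "openai", "args": { "files_included": true } },
    // OpenAI: Chat Model Select
    { "keys": ["super+k", "super+s"], "command": "openai_panel" },
    // OpenAI: Chat Model Select With Tabs
    { "keys": ["super+k", "super+t"], "command": "openai_panel", "args": { "files_included": true } },
    // OpenAI: Show output panel (the panel name here is an assumption on my side)
    { "keys": ["super+k", "super+o"], "command": "show_panel", "args": { "panel": "output.AI Chat" } }
]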

My question: is there a way to send just a selected part of a file plus a few tabs? I think this mode always sends the whole tab rather than just the selected text, correct?

A few ideas:

Would it be possible to implement a stop (cancel/close) for streaming? Sometimes you make an error and then have to wait until all the output is generated by OpenAI.

It would be great if the inline text created by phantom mode allowed text selection. That way you could just select the needed part, copy it to the clipboard, and close the phantom ‘modal’.


Thank you for such nice feedback.

Honestly, I’ve mostly stuck with phantom mode myself since implementing it. But it leaves me with dozens of open draft buffers, which I haven’t figured out how to resolve yet. Have you faced that too? Any ideas on how to organise them better?

I certainly have to update the sublime-keymap file. On top of covering the basics, I’ll add some chained commands that work for me, just to kick off users’ imaginations. I’ll get to it within a few weeks, I believe.

Speaking of the suggestion: I thought about that, but I personally find it confusing. The reason is that I occasionally caught myself passing selected but out-of-sight chunks of text from the active buffer, text I had selected five minutes earlier and forgotten to deselect. So I suspect the suggested user flow would only increase such false positives, with forgotten selections being passed in from all the inactive view groups.

I mean, to implement this, the plugin would have to check all the active views in all the view groups and pass them in. I often have up to four such groups in a single window during my usual work, so I’d have to pay attention not just to a single group or view and its current selection but to all of them, which I’d find annoying.

However, feel free to share your vision if you have a workaround for that in mind.


Hello Yaroslav!

Thanks for the response!

Yes, it’s cumbersome. I just do “close all files” at the end of the day :slight_smile:

:+1:

I completely understand; it would be confusing. That was not a suggestion for a workflow but rather a request for clarification :slight_smile:. I’m trying to understand what’s going on under the hood.

I noticed that you can use a separate history context for a project by setting it in a .sublime-project configuration.
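For reference, here’s roughly what that looks like in a .sublime-project file (the key names are from memory, so they may be inexact; check the readme):

{
    "settings": {
        "ai_assistant": {
            // gives this project its own separate chat history/context
            "cache_prefix": "my_project_name"
        }
    }
}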

If I use “Chat Model Select” and then continue with “New Message”, is the history kept, and is the assistant aware of the chat history?

Does a new “Chat Model Select” command reset this history?

My suggestions are:

  1. implementing a Stop for streaming (if I make a mistake, I wouldn’t have to wait until the output finishes before working in the tab)

  2. allowing text selection from a phantom (not sure if this is possible). Sometimes I just need a small snippet from the AI’s response.

Thanks a lot!

So as not to keep you waiting long on those, the answers are:

  1. New Message just initiates a new message action, nothing more, nothing less, so no, it doesn’t reset the history. To reset it, you can use "command": "openai", "args": {"mode": "reset_chat_history" } or search for OpenAI: Reset… in the command palette (a sample binding is sketched after this list).
  2. So the answer to the second, related question is also no: the Chat Model Select action just initiates model selection followed by the new message action (from point 1 of this list), so everything is stored within the same history you’re using at the moment you fire the action. To kick off your imagination, I often use the following pipeline: select a few sheets and pass them in panel mode to o1 to get some persistent ground context, then switch to gpt-4o in phantom mode and elaborate the answer with it bit by bit. This flow lowers the number of tokens that have to be sent to the model.
  3. It seems you’re struggling to keep your history persistent, though. My guess is that this is exactly because you’re using phantoms. To date, phantoms don’t store their data to the history, although they do read from it on every request sent. I’m currently working on adding a new action to the phantom menu to apply the content of a phantom to a given history, but it will take a few more weeks.
  4. Stop should just work on hitting ctrl+c while the response is streaming. If it doesn’t work for you, you most likely have an overlapping binding, so please check the [plugin] keymap file and bind a single active binding to whatever keys you’d like to make it work.
  5. [selectable text within phantom] Unfortunately, that’s a Sublime Text limitation that is unlikely ever to be resolved, so I can’t help here except to say that I constantly suffer from it myself.
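For point 1, here’s what such a binding could look like (the key combination is just an example):

[
    // same as running "OpenAI: Reset…" from the command palette
    { "keys": ["super+k", "super+r"], "command": "openai", "args": { "mode": "reset_chat_history" } }
]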

Thanks for the reply!

Ctrl+C works great, thanks!

Thanks a lot, that’s valuable information!

That’s exactly what I’m trying to do. I’d like to pass two sheets first: one for coding standards and another for the database structure. Then, I want to ask specific questions about particular parts of the code to improve them.

But I’m failing miserably (probably because I’m only using phantom mode). Now I’ve tried to mimic your approach and have configured two assistants:

{
    "name": "Generate Code (4o)",
    "prompt_mode": "phantom",
    "chat_model": "gpt-4o",
    "assistant_role": "Insert code or whatever the user requests, respecting senior-level knowledge in PHP 8.1 and MySQL.",
    "max_tokens": 4000,
},
{
    "name": "Ground Context (o1)",
    "prompt_mode": "panel",
    "chat_model": "o1-preview",
    "assistant_role": "",
    "max_completion_tokens": 4000,
    "stream": false,
}

The o1-preview has a lot of limitations, so this is the configuration I’ve managed to use it with.

I used Chat Model Select With Sheets and employed Ground Context (o1) to send just the sheets. Then I changed the model using Chat Model Select and asked it to fill in some missing code using Generate Code (4o).

However, it seems to ignore the coding standards entirely (e.g., using a different MySQL driver than the required one). Any idea what I’m doing wrong?

If I stay with the prompt_mode “panel” and ask follow-up questions, it seems to work, and I can ask questions about sheets sent in previous commands.

Btw, how did you switch the output from the AI Chat panel at the bottom to a separate tab?