o1-Preview is the way to go for coding!

It works like a charm and is very powerful for coding. I managed to solve complex problems that other models couldn’t; they would just go in circles.

Here are some tips:

  • File uploads are not available yet. Don’t let that discourage you. Just copy and paste your code and drill down on your prompting (a rough sketch of one way to structure that follows this list).
  • It maintains context much better than other models, especially on complex multilayered coding tasks.
  • It doesn’t get confused and significantly rewrite your code the way other models do.
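
For what it’s worth, here is a rough sketch of one way to structure a pasted-code prompt (the helper, delimiters, and closing instruction are just my own convention, nothing official):

```python
# A minimal sketch, assuming your sources are plain text files; the
# delimiters and the closing instruction are my own convention.
def build_prompt(task: str, files: dict[str, str]) -> str:
    parts = [task, "Here is the relevant code:"]
    for path, source in files.items():
        parts.append(f"--- {path} ---\n{source}")
    parts.append("Return complete files, not fragments.")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Refactor the parser to stream input instead of loading it all.",
    {"parser.py": open("parser.py").read()},
)
print(prompt)  # paste this into the chat
```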

It is worth sticking with. I won’t use anything else for coding going forward. The only challenge is overcoming the inability to upload files. That will take some patience, but it is worth it because of the quality of the responses you get.

On complex coding, o1-Preview is the way to go. I have not used o1-mini, so not sure how that one measures up.

8 Likes

I have a regular paid ChatGPT plan and I fully used my credits (80k tokens) in one evening. Now, two days later, I can use it again. So yes, there is always more and better. But it is also a preview.
Also, at this point it is less suitable for the API because of its speed (at least for things like coding).
So enjoy this amazing stuff, the pace of progress!

1 Like

The speed is what I’m used to, as I run some models locally for certain tasks, and for some use cases, even coding, I create agents. It all depends on what I’m trying to do and what problem I’m trying to solve. I even have models check another model’s work. It may be a “preview”, BUT access is not equal for everyone like it should be.

I have had some good luck using it to create some SQL queries. It has done a great job and is much quicker than searching for examples myself…

1 Like

It’s good on coding tasks when you ask it to write code based on a prompt or a problem you provide.

Based on my experience, though, it is equivalent or worse when you attempt code completion with it; it makes some silly mistakes sometimes.

I’ve been working on a project to simplify my own coding workflow with LLMs (having to copy paste code into the chat). I’ve been planning to release it publicly in the near future, but you can get it now.

I basically paste my entire codebase into “Knowledge” in one operation and work on each task in a single chat. It’s shocking how effective it is. I haven’t really worked on very large repos with it yet (where the codebase is larger than the context window), but I’m encouraged by the results so far.
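
Roughly, the “paste the whole codebase” step looks like this (a simplified sketch, not the actual tool; the file filter and delimiters are assumptions on my part):

```python
# A minimal sketch: walk the repo, keep the source files, and emit one
# big labeled dump that can be pasted into a chat in a single operation.
from pathlib import Path

INCLUDE = {".py", ".md", ".toml"}  # assumption: which extensions matter

def dump_codebase(root: str = ".") -> str:
    chunks = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in INCLUDE:
            chunks.append(f"--- {path.relative_to(root)} ---\n{path.read_text()}")
    return "\n\n".join(chunks)

if __name__ == "__main__":
    # e.g. `python dump.py | pbcopy` on macOS to land it on the clipboard
    print(dump_codebase())
```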

Happy to get your feedback if you wind up using it.

2 Likes

I agree, o1 is amazing. And presenting 700 lines of code in one go is not a problem :smiley:

Haha, I’ve actually created a similar tool for myself. It’s a great idea.

1 Like

Yeah. I finally created the tool after being dissatisfied with all the IDE integrations. I tried Aider (not an IDE integration, but automated), Cursor, Continue, and Rubberduck (I contributed code to the latter two). Not knocking those projects; they’re all fantastic (and likely much better now than when I was trying them out)!

I found that the value I get from a great chat experience is greater than the value of automatic diff application. If anything, having to copy and paste code from the chat to the IDE forces me to look at it more carefully than if it were applied automatically.

And going by the results I’ve been getting, most of the value is coming from the model itself. When you put your entire codebase into the context, it’s possible to give very high level instructions and quite quickly converge to full solutions.

1 Like

@combyses Based on the need to use o1 without file support, I added this feature to GitHub - cyberchitta/llm-context.py: A command-line tool for copying code context to clipboard for use in LLM chats and got v0.1.0 out the door as the first public release.

So now you can include the prompt in the context. In practice, this means you can send the prompt and all your code to o1 in the first message, and work with it in subsequent messages.
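
To give a sense of the shape of that first message (an illustration only, not llm-context.py’s actual output format), everything rides in the first user message, since, as I understand it, the o1 models don’t accept a system role yet:

```python
# Illustration only, not llm-context.py's actual format. The style
# instructions, task, and code context travel together in one user
# message because (as I understand it) o1 doesn't take a system role yet.
style_prompt = "You are a careful senior engineer. Return complete files."
task = "## Task\nAdd retry logic to the HTTP client."
codebase_dump = "## Codebase\n--- client.py ---\nimport requests\n# ..."

first_message = "\n\n".join([style_prompt, task, codebase_dump])
print(first_message)
```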

Happy to hear feedback on the tool.

1 Like

o1-Mini has been quite good too, and it costs much less. I’ve been giving the more complex stuff to o1-Preview and using o1-Mini for its output-length advantage. Combining the two has given me great results. One thing that helps, too, is having gpt-4o do the tool calling and passing the tool results from gpt-4o into o1-Preview or o1-Mini.
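
For anyone curious, that split looks roughly like this with the standard OpenAI Python SDK (a sketch only; the get_schema tool and the prompts are hypothetical, and I’m assuming the o1 models still don’t accept tool definitions directly):

```python
# Sketch of the split: gpt-4o handles the tool call, then an o1 model
# does the heavy reasoning. The get_schema tool is hypothetical, and
# you execute it yourself between the two calls.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_schema",  # hypothetical tool
        "description": "Return the database schema as JSON.",
        "parameters": {"type": "object", "properties": {}},
    },
}]

# 1) gpt-4o decides on and issues the tool call.
first = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Optimize my slowest query."}],
    tools=tools,
)
call = first.choices[0].message.tool_calls[0]  # assume it chose the tool
tool_result = json.dumps({"tables": ["orders", "users"]})  # run it yourself

# 2) Fold the tool output into a plain user message for o1-Mini, since
#    the o1 models don't take tool calls directly.
answer = client.chat.completions.create(
    model="o1-mini",
    messages=[{
        "role": "user",
        "content": f"Tool {call.function.name} returned: {tool_result}\n"
                   f"Using this schema, optimize my slowest query.",
    }],
)
print(answer.choices[0].message.content)
```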