Since o1 and o1-pro have been removed (I used them primarily for coding), I'm already having issues with o3 "pro". When using o1, it often and easily gave me the full code for the multiple files I was working on. Now o3 "pro" explicitly responds with "I can't do that, because it's too big; I can give you chunks over messages." And even then, it gives me malformed versions. So, essentially, the only useful thing about Pro is now gone.
Please drop your recommendations for providers/models that can code like o1/o1-pro.
Are you using the ChatGPT web interface or the API? That's a bummer to hear. I don't know whether the web chat offers a "max output tokens" option such as is available in the API…
Back in the day, when I was using the web portal for similar activities, I would just ask the LLM to break the code file up into multiple segments (say, 500 lines each?) and then provide them in subsequent chunks. What length of code file are you expecting to receive in a single shot? Through the API, I've only been successful up to around 1000-1500 lines for a single file.
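For anyone who'd rather do the splitting locally instead of asking the model to chunk its own output, here's a minimal sketch in plain Python (no API calls; the 500-line segment size is just the example figure from above, and the helper name is my own):

```python
def chunk_lines(text: str, lines_per_chunk: int = 500) -> list[str]:
    """Split a source file into segments of at most `lines_per_chunk` lines.

    Handy for pasting a large file into a chat in pieces, or for
    stitching chunked model output back together in order.
    """
    lines = text.splitlines(keepends=True)
    return [
        "".join(lines[i:i + lines_per_chunk])
        for i in range(0, len(lines), lines_per_chunk)
    ]

# Example: a 1200-line file becomes three chunks of 500/500/200 lines.
source = "\n".join(f"line {n}" for n in range(1200)) + "\n"
chunks = chunk_lines(source, 500)
print(len(chunks))                 # 3
print("".join(chunks) == source)   # True: chunks reassemble losslessly
```

Because the chunks keep their line endings, concatenating them in order reproduces the original file exactly, which makes it easy to verify nothing got mangled in transit.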
No specific length. I've used o1 daily (since it became available) for complex-ish code. This is the first time the model (o3) has said something like "I can't do that," and it wasn't even a very big change, so it's really got nothing to do with size. It's about the nerfing and removal of useful models, without any real explanation, and their replacement with suboptimal ones. Obviously, if that trend continues with OpenAI, it's a bit concerning. Just looking for something like o1 without having to use an API. o3 really isn't good.
But have you tried GPT-4.1? I use it for almost all coding and find it much preferable to o3, though that was before the most recent update; I haven't tried o3 since then. I switched over to 4.1 as soon as it was available. I DID like o1 for everything up until that point, but I used o3-mini whenever possible to save cost. At this point, I reach for 4.1 for almost everything except the most highly complex and difficult tasks.
Oh, if you're using free ChatGPT, then of course you're going to run into output limits for o3, even if they give that tier access to the model. Yes, definitely upgrade to Plus if you want to use the real models for real work. The free version is always going to get more and more limited as models get more and more powerful, because the free tier costs money for those running the system!
If you're using the ChatGPT Pro web interface, then you obviously have access to GPT-4.1, so why would you ask whether that's a "Plus" model? All models available on Plus are available on Pro. 4.1 is the flagship model previously recommended for all coding tasks before the new release of the "cheaper" o3.
I personally still think 4.1 is the go-to for any coding task that doesn't require massive reasoning capacity. o3 is like dropping a nuke on a small town: way overkill. While 4.1 is more generalized and therefore won't give output as highly focused as o3's, I actually think 4.1 is far more directly comparable to my own previous use of o1 than o3 is. I honestly haven't liked the way o3 outputs, but I love 4.1, and, like you, I previously used o1 for all coding tasks before 4.1 became available a few weeks ago.
If you are paying for Pro but o3 is giving you limited output, this is LIKELY the classic situation of OpenAI tiering their rollout, because otherwise their webapp servers get overloaded with everyone using the new models at once. Though I would expect Pro users to have gotten access first, and unlimited at that… so that's pretty strange. You might want to clarify:
Platform you are using (ChatGPT PRO account)
Model you are selecting (o3 pro?)
What is the total token count of the input you are presenting in the prompt? (Use tiktokenizer if the webapp doesn't show you that data elsewhere.)
Do you have “documents” mode turned on or are you just getting the result purely in the chat?
Have you tried modifying your prompt slightly or asking for chunked output of the file and seeing what the maximum output tokens you can get is?
@edwinarbus if it’s a real issue that’s known, one of the staff will know and likely mention they are working to fix it.
Are you a Plus user? All your suggestions basically try to nullify what my thread is about: o1-pro being removed and the replacement model being subpar. None of your suggestions are meaningful, because I never had to consider those things in my normal workflow. How can you argue about Pro models if you aren't even a Pro subscriber? The service is supposed to improve, right? And you're suggesting I now have to do more to get the same result?
Anyway, I've been using o3-pro all day today, and while it initially said it didn't want to give me the code for multiple files, in subsequent responses it did give larger outputs. However, it seems less intelligent than o1 in general. The reason I highlighted 4.1 being a Plus model is that, as a Pro user, it makes sense to use a Pro model for coding. That's what I've been doing for months. I would have kept using o1, which was less reasoning-heavy, but it was removed before o1-pro was. It's still vastly superior to any other model I've tried.
No, I'm an API user. I'm not trying to nullify what your thread is about; I'm trying to suggest a path forward for using the system as it now stands, because unfortunately complaining about it doesn't do anything. Best of luck.
Well, since users are where OpenAI gets its money, if they repeatedly do things users don't like, they lose money. So I'd say complaining is the only thing that works for services and products as an end user. But thanks for your suggestion; I might try 4.1. Primarily looking for an o1 equivalent, though.