O3-pro (o1-pro) response length is too short. Please increase it

I know that o1-pro is now o3-pro (even though the interface still shows it as o1-pro). Its responses have become very limited compared to the previous (legacy) version of o1-pro.

Please include in its instructions that if a user directly requests a longer answer (for example, ***** symbols or words), the length limit should be increased accordingly.

I understand it’s not a bug itself, but for users who became accustomed to longer answers from this model, the current limitation is quite frustrating. Longer responses are now available only in the 4.1 model, but unfortunately, it is less powerful.

3 Likes

The length of its answers was the reason I signed up for the Pro plan in the first place. If OpenAI doesn’t fix that, they lose me as a customer.

4 Likes

o1 / o1 pro were much better models than o3 and o3 pro.

I signed up to Pro to access these better models, and now they’ve been removed.

ChatGPT is now useless to me and my team; we may as well use Gemini, which is free in Workspace.

I was a huge advocate of this brand from the beginning, but now I’m at the point of leaving.

4 Likes

I’m also frustrated by the response limit since the change from o1-pro to o3-pro. I really want to keep using the o1-pro model and its output limit. I am genuinely thinking of cancelling the service if the output limit can’t be increased. It was the main reason I paid for Pro in the first place, and it’s been extremely frustrating having o3-pro give worse output when updating longer versions of my code. This feels like using the regular Plus version last year all over again.

Please, anyone at OpenAI, can you give me a reasonable solution for this issue? We’re paying so much to be on the cutting edge, not to go backwards. I love the product and would hate to find its functionality reduced so significantly that I have to try other solutions.

2 Likes

Canceling my account today. There’s no advantage to using Pro anymore. It takes way too long, and I’ve found no evidence that it’s better than other models anyway. I can’t find a single reason why anyone would keep using Pro over the other models.

1 Like

Yes, I’m absolutely letting my subscription run out. A couple of months ago, with the o1 models, I could easily code with inputs and outputs of 1000+ lines. Then, about two months ago, they scaled down the output limit: not a single version could handle even 600+ lines before it would cut off the output or tell me “(rest of the code)”. I can’t even ask GPT to fix something in my whole script anymore. I searched the web for the output token limits and got an email from OpenAI’s dev team saying that, for my use case, Teams would be a good option since it had o1 in the package. I’m a solo developer and needed to buy two accounts at 70 euros; I thought maybe then it could handle some larger code, but to my surprise, nothing. 70 euros plus 25 for the current, supposedly better GPT, at least 100 euros down the drain. Now it can’t even handle simple Python scripts of 250+ lines without cutting off the output. What is OpenAI thinking? Sorry, but I’m going to Claude; Claude easily puts out 2k+ lines of code, even with the changes I ask for. I really liked ChatGPT, but this killed it for me, and ChatGPT is literally useless to me now. Even the o3-pro model, as I’m writing this, is taking 15+ minutes to answer. The output? Cut off at 250 lines of code. What a joke.

2 Likes

Completely agree with everything said here. I also use it for coding, and with o3/o3-pro I’m noticing shorter code outputs (functions becoming simplified, etc.) compared to the longer outputs from the previous o1-pro. This change impacts not only content creation but coding as well. I hope OpenAI will tune their default settings back to something more practical.

I also tried reaching out via the live chat at help.openai.com. After requesting a live operator through their chatbot and explaining the issue, the operator unfortunately responded with irrelevant suggestions—like my account possibly being restricted for sharing content—which clearly isn’t the case. After this unhelpful reply, they stopped responding altogether. The real issue is simply the default settings of the customer version of o3/o3-pro. I’d encourage everyone experiencing this issue to voice their concerns through the help.openai.com live chat—perhaps together we can get OpenAI’s attention.