Impossible to Work with o1 Pro — Is Anyone Else Experiencing This?

A few days ago, I decided to upgrade to the Pro plan, and since then, my experience has been significantly worse compared to the Plus plan, to the point of being almost unusable. Below, I detail the main issues I have encountered:

  1. Reduced Context Limit:
  • Unlike the o1 and o1-mini models in Plus, the context limit in Pro is significantly lower.
  • This causes the chat to lose track of the conversation every few messages, making it necessary to repeatedly explain the same things and losing the flow of the discussion.
  • This problem persists even in new conversations, after very few interactions.
  2. Errors with Long Messages and “Finished thinking”:
  • When attempting to send messages with more than 500-600 lines of code, I often receive an error indicating that the message is too long and to try again.
  • When this error does not occur, the phrase “Finished thinking” frequently appears instead, halting the conversation without providing a useful response; retrying several times produces the same result.
  • Occasionally, instead of “Finished thinking,” I receive a direct error message asking me to try again.
  3. Instability Across Different Modes:
  • I’ve tried the o1, o1-mini, and o1 Pro modes, both with the “reasoning” option enabled and disabled, but the problem persists in all of them.
  • Switching models within the same conversation does not resolve the issue.
  4. Persistent Issues in New Conversations:
  • Even when starting new conversations, errors appear quickly, regardless of the accumulated size of the messages.
  • I have tried recommended solutions from other posts, such as clearing the cache, without success.
  5. Comparison with the Plus Plan:
  • These issues did not occur for me with the Plus plan and the o1 limited models.
  • The user experience has drastically worsened after upgrading to the Pro plan.

The most frustrating aspect is that I’ve paid ten times more for a service that, instead of enhancing my experience, has made it significantly worse. I spent four days trying to regain a workflow similar to the one I had with Plus, but it’s impossible to achieve. This not only impacts my productivity, it also makes the additional investment feel completely wasted.

References to Similar Cases:

Beyond the fact that I can’t even get it to work with o1 (not Pro), I understand that o1 Pro works differently and that o1 should be used for some tasks. However, if o1 Pro is designed to tackle complex problems, how is it supposed to work if we don’t have the minimal context needed to explain the issue?

I am surprised by the lack of similar complaints and am unsure if this problem affects a minority of subscribers or is more widespread. It is unbelievable that, by paying more, the service has actually worsened.

Is anyone else experiencing these same issues?

5 Likes

I have the same problem. It is degrading day by day; since the release of DeepSeek R1 its capabilities have significantly degraded. It keeps throwing errors and not responding to queries.

4 Likes

I have never had this problem with the $200 USD o1 Pro, as seen in these images:

And the code, by the way, is 800 lines of Python. Really? $200 a month for this.

I kept at it, trying many things, but I never managed to get it to work properly.

In the end, I think it is a problem that occurs on some accounts, something particular to those accounts specifically. Since it is not widespread, I suppose it does not escalate enough in support for them to fix it, which is strange.

In my case, to be fair, they eventually refunded the prorated amount I had paid for the service that did not work, and I simply started using other alternatives.

Now I have gone back to Plus, mostly to keep an eye on ChatGPT’s progress since I use it only for secondary things, very little or not at all, and what did not work in Pro works there. It does not make sense, but in my particular case that is how it was.

2 Likes

The quality of answers has decreased significantly! And this is the second time during my subscription period; what I paid $200 for is not clear at all!
If the first time I accepted it, the second time it is simply unacceptable!!!
How can we demand that OpenAI promptly restore the models’ performance?

I guess they are more interested in offering the new o3-mini to the general public at launch and allocating more computing resources there, but I’m just speculating.

I also never understood how they gave so little importance to users paying the $200 subscription; neither support chat nor email was interested, though at least they accepted the refund. In 20 years of working in this sector, I have never had such a bad support experience with any software or service at that price.

2 Likes

Yes, I have been dealing with this same thing as well. I’m having to use stupid workarounds to get it to output any code over 200 lines.

I’ve paid $200 a month since it came out, and it really seems that nobody from OpenAI is listening to their customers’ concerns.

Yes, it is really a shame, because they have not yet commented on the issue. Although I have unsubscribed from the $200 plan, I would be interested in trying it again for deep research; but seeing that the problem still persists, I would not risk it, and instead I am looking at free alternatives that already do this :man_facepalming: