$200 GPT Pro - o1 / o1 pro ("The message you submitted was too long")

Hello. I subscribed for $200. Sending long code used to be no problem, but now the o1 model has stopped thinking the way it used to, and so has o1 pro. When I send a 1,171-line piece of code in a new o1 chat, the response is: "The message you submitted was too long, please reload the conversation and submit something shorter." It feels like the $200 subscription is simply being limited and no longer works. This started after I had worked for 3 hours straight and received a message saying the o1 / o1 pro models were currently being limited pending a check for abuse.

Since the account was unblocked, the models have been behaving strangely: they no longer generate long output. Before, o1 could easily generate 1,300 lines of code, and accept that much as input too, and pro mode would show the loading bar indicating it had started thinking; now that doesn't happen. Very often my requests just get the answer: "Something went wrong. If this issue persists please contact us through our help center at help.openai.com." Or the model replies with a canned "Got it! Let me know if you need any further adjustments." instead of actually responding to the request. Please help.

Please fix this. status.openai.com shows no incidents, yet the model doesn't remember context and sometimes generates nonsense in response. I use the official macOS app as well as Chrome and Safari in the browser.

o1 pro now has the same problem: when I send the same code, it says "The message you submitted was too long, please reload the conversation and submit something shorter."



Looks like the issue is resolved. All models are working, and large code blocks (over 1,000 lines) send fine in the iOS app and the browser. Not sure whether it was posting on the forum or emailing support multiple times that did it, but it's fixed now.


This doesn't seem to be the case for me; I'm still being limited. When I choose o1-pro (I'm on the $200 plan), it continually drops me down to the GPT-4o model. It's a real shame to spend so much money per month only to get bumped down to what is essentially the free model.


I have a similar problem! The $200 plan produces results 100 times worse than 4o, and I'm also limited in the number of lines I can send; experimentally, I've determined that only ~600 lines go through!
I've been having this problem for two days now, and it was fine on Thursday!
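For anyone else trying to pin down where the cutoff is: the limit is almost certainly enforced in tokens, not lines, so line counts will vary with line length. A minimal sketch for estimating a prompt's token count before sending it; the ~4 characters-per-token figure is OpenAI's published rule of thumb for English text (an assumption here, since the UI's exact per-message cap isn't documented), and the tiktoken library gives an exact count if you need one:

```python
# Rough token estimate for a prompt before pasting it into ChatGPT.
# Assumption: ~4 characters per token, OpenAI's rule of thumb for
# English text. For exact counts, use the tiktoken library instead.
def estimate_tokens(text: str) -> int:
    """Return an approximate token count for `text`."""
    return max(1, len(text) // 4)

# Example: a 600-line code file with ~40-character lines.
code = "\n".join("x = some_function(argument, other)   " for _ in range(600))
print(f"~{estimate_tokens(code)} tokens for 600 lines")
```

This would put a ~600-line file in the low thousands of tokens, which is why a hard cap that varies by model shows up as a different line count for everyone.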

Same issue here. It's kind of useless if the context window is going to be limited to this extent. I hope it's a temporary issue.


Same problem in o1-mini, o1, and o1-pro. This never happened to me on Plus; it seems like the $200 plan is increasingly limited. I don't understand it.

Same issue. Now the regular GPT-4 model accepts prompt sizes that o1 can't! A week ago o1 could handle much larger prompts than the regular GPT-4 model. Seems like the OpenAI servers got a little hot.

I've been dealing with the same issue for over 4 days now (even with messages that are only 10-12K tokens), and it's making o1 pro and o3-mini-high feel pretty useless (fun fact: only GPT-4 Legacy works, not even 4o), especially since I rarely use image generation or Sora. Right now my subscription is just sitting there, unused…

I thought it was because of rush hour, so I kept trying at different times throughout the day, but it doesn't really seem to be connected to that.