Note this was green while I was seeing the 4 tps issue. Now it’s yellow, but 14 tps for me.
The ChatGPT “issues” - or issues finally being paid attention to - span only the last 2.5 hours, and are already marked “resolved”.
Can also confirm that on similar types of tasks, thinking time is down from 30 minutes to 10-15 on “Heavy Thinking” - so either ChatGPT 5.2 Thinking has been sped up or it is reasoning less. We’ll see if it keeps dying on “stopped” thinking, or if it is a nerfing of max tool calls to push you onto Codex.
I’m still waiting on Thinking, while Pro started and ended something different in 25 minutes. And after more sitting around, another bunch of reasoning tossed in the dumpster that the AI can’t continue with:
If an API model thought this long, the code interpreter container might expire before it was even used…
I can post “Pro” subscription failures on “ChatGPT 5.2 Thinking” all day, it seems…
The IQ Test | Tracking AI site (you need to scroll down and select GPT 5.2 Thinking) is interesting, imho. They seem to be doing a pretty good job of constant tracking. It caught these outages, so that is some proof of effectiveness.
Let’s see if it recovers after the outage is resolved.
Yep, I also saw this yesterday (slow output token generation) and it seems to be resolved now. Still unclear to me whether this has something to do with today’s outage.
Huh. I would have made it 40% smarter, but that’s just me. I guess some people were complaining about latency versus Gemini.
Well, at release, they did make gpt-5.2 40% more expensive per token on the API than GPT-5.1, so maybe they just turned off the 40% extra computation and charge the same.
Make the logit dictionary opt-in; choose your subset of world language tokens.
Well, GPT-5.2 Thinking recovered, but Pro is still low. Give it a day or so though; it might come back like 5.2 Thinking did. IQ Test | Tracking AI
I am still experiencing bizarre slowness when interacting with gpt-5.2 in thinking mode, both the standard and extended modes. Same phenomenon on my MacBook Pro and iPhone.
Normal mode (fast) is just fine, but the thinking modes feel laggy to an extent that is really annoying.
MacBook Pro - OS: Tahoe, version 26.2
iOS - version 26.3
If you can time token generation after start of first token (so not including thinking time, which is a different kettle of fish), that would help give some specifics on the problem.
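For anyone who wants to share numbers, a minimal sketch of the measurement being asked for (the function name is hypothetical; it assumes you can note the wall-clock time of the first streamed token and count the tokens generated, or approximate them with a word count):

```python
def tokens_per_second(first_token_time: float, end_time: float, token_count: int) -> float:
    """Generation rate measured from the first streamed token onward,
    deliberately excluding the thinking/latency phase before output starts."""
    elapsed = end_time - first_token_time
    if elapsed <= 0:
        raise ValueError("end_time must be after first_token_time")
    return token_count / elapsed

# Example: 420 tokens streamed over 30 seconds after the first token appeared.
rate = tokens_per_second(first_token_time=0.0, end_time=30.0, token_count=420)
print(f"{rate:.1f} tps")  # 14.0 tps
```

Reporting the rate this way keeps the slow-generation complaint separate from the (different) question of how long the model spends thinking before it starts answering.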
The modes are Auto, Instant, Thinking, and Extended Thinking on my web client. When I saw the problem, it only occurred on Thinking and Extended Thinking.
Yep, same experience: Thinking and Extended Thinking.
That’s alright. ChatGPT is not the only choice I have. Just wanted to know whether I’ve got company 0.0
Fwiw, the issue seems to have gone away for me. It lasted from about Jan 30th to Feb 3rd.
Good to hear it’s resolved for you.
On my side, GPT-5/5.2 Thinking is still very slow (token generation feels unchanged).
Seems like this might not be resolved uniformly for all users.
There are people here experiencing the same issue. I have been dealing with it from January 31 up to now. OpenAI Support replied to my emails the first few times, but now they are not even reading them. My subscription billing date is approaching, and since this issue may not be resolved anytime soon and support is not responding, I have decided not to continue this account’s subscription. I have two accounts, and because this issue occurs on only one of them, it seems to be a problem with specific accounts rather than with the user environment. OpenAI should conduct a thorough root-cause analysis and implement measures to prevent this from happening again.
Yeah, it’s strange. I am pretty surprised how much hate there is for OpenAI in subs like this. It’s very hard to find anyone who wants to defend the company versus Anthropic these days.
Codex was released today, and the mods deleted the announcement - https://www.reddit.com/r/singularity/s/YNLILZlWlr
Normally you’d think there would be an outcry about something like this. Nobody seems to care.
10 of the last 11 or so posts are all about Opus 4.6.
If you compare the sentiment expressed on r/OpenAI versus r/ClaudeAI (similar number of weekly visitors), it’s quite stark.
Which saddens me, because OpenAI is the only major AI company 100% controlled by a non-profit. That is a very big deal.
I am a bit worried that the ad push will deprioritize compute for things like the Plus subscription. It could be that OpenAI will be more ‘for the masses’ and Anthropic will be more for AI-first users - and I will have to move to Anthropic or Gemini, both of which I find somewhat disagreeable for various reasons.
It’s sort of reasonable, considering the non-profit mission of OpenAI is to bring AI to everyone; catering to a technical elite might not be their priority.
We may also find the market bifurcates into ad-supported or API-based pricing. The Plus sub is problematic because there is no direct connection between usage and income.
Hopefully they fix this bug for everyone.

Sharing a short screen recording of GPT-5.2 Thinking response speed on my end.
Token generation still feels very slow.
For others using Thinking mode right now — does this match your experience, or are you seeing normal speeds?
Thanks, I wanted to do that as well. That was exactly what I saw.
Some questions - are you technical? Did you ask for a lot of code to be generated? I wasn’t using it that much, though I did do a surge of about 20 or so image generations.
I also asked it to do some code to generate a fuzzer. The idea I had was fairly advanced and successful, and I wonder if they rate limited me because of it.
I was also asking about trends in using LLMs to replace clouds like AWS, wondering if Europe could potentially use them to build their sovereign cloud, and what types of issues there are with data labelling and leaking intellectual property abroad.
I really wish we could get to the bottom of this. This type of rate limiting is quite dark in what it portends for the future, and very worrisome.
If I had to guess though, I think it’s just a traffic shaping rate limit and we’re not seen as ‘profitable’ subscribers. They should document this, however, so we know what to avoid.
Also, have you tried clearing cookies and logging in and out?
I realize now it went away after I did that, but not immediately when I tested it right after - so I didn’t mention it. Maybe it is just a weird bug after all.
Thanks - I tried the same (cleared cookies/cache + signed out/in), including signing out of all sessions across devices. So far, the speed has still been slow on my end (token generation in GPT-5.2 Thinking seems slightly faster than before, though still slow).
One extra data point: I noticed the ChatGPT release notes mention changes to GPT-5.2 Thinking time settings and a restoration on Feb 4 (after an inadvertent change in January). It makes me wonder if the rollout isn’t uniform yet, or if my slowdown is a separate issue.
Any resolution to the issue?
I have been suffering from this problem too. What will help?
Hey, did it eventually get solved for you? If it did, please also let me know how. Would appreciate a reply.
Yes, it did. Wonder if it was resolved for anyone else?