Is the issue fixed for you now? I still have it. I’ve noticed that INSTANT mode generates output tokens really quickly, but THINKING modes (both 5.1 and 5.2) are really slow. The output appears letter by letter.
Is it fixed for you now? Auto/Instant modes work fine for me, but Thinking mode is just as slow as the one you showed in the video. Should I just move on to Gemini? I liked ChatGPT more than Gemini because I felt its answers were more accurate, but now it is too slow to use.
I’ve had exactly the same issue for about 5–7 days already. In “thinking” modes it types very slowly, letter by letter. I cleared the cache, checked on different devices and browsers, and logged out and logged back in; the result doesn’t change.
I hope the problem will be resolved soon.
Quick update: still slow on my side.
Tried cache clears, full logout/login, multiple devices and browsers. Auto/Instant modes are fine, but GPT-5.2 Thinking remains slow, similar to the video. There are minor time-of-day fluctuations, but no real improvement overall. Seems like it’s still affecting some users.
As a Pro plan user, I have had exactly the same experience with the 5.2 Thinking model (every mode) for 1.5 weeks, and I think I am switching back to the Gemini AI Ultra plan.
How many days did the problem last?
From around Jan 28th to Feb 3rd.
I’ve moved on to OSS models for now. My concern about relying on OpenAI is that they can shut this off again in the future. I don’t like being in that situation.
I’m curious — what specifically are you concerned about?
When you say they could “shut this off again,” do you mean performance variability, access restrictions, pricing changes, or something else?
Would appreciate if you could elaborate a bit.
I suppose it depends on why this issue occurred. What settings did you choose in Personalization? I listed mine above.
Ah, I see what you mean now.
To be honest, I’m just a regular user and I use ChatGPT in a fairly standard way for work. My favorite feature is actually its search capability through GPT-5.2 Thinking — that’s where I find it most useful.
Even though the response speed hasn’t really improved for me, I’ve gradually adapted to it.
Personalization was something I considered as well. I’ve turned off “Improve the model for everyone,” disabled features I don’t use (like advanced voice mode and audio history references), and removed unnecessary connections.
At this point I’m mostly observing and waiting to see whether the situation stabilizes or improves over time.
Did you try turning this on? It was off for me as well. Maybe that’s why I was rate limited.
Me too. I started a new thread. It’s a little faster, but then I started getting error messages again.
I’ve noticed it running slow for about two weeks, though only anecdotally. Gemini has been slow as well.
I do use it for code most of the time, but it’s very basic stuff. One thing that has helped me when the slowness was caused by a lengthy thread (rather than a busy server) is to ask for a summary of the thread, with emphasis on tech/info-architecture details, and use it as the intro to a new thread. That way you can hit the ground running and, hopefully, not trip over problems that were already solved.