O1 model not answering questions (pro sub)

For almost a week, the o1 model has not been answering my questions at all. It pretends to think, then responds with: "Gotcha! Feel free to ask me anything whenever you're ready." on the PC. I was able to get it to answer questions on the Android app (logged into the same account), but it keeps throwing a "ChatGPT isn't responding" dialog with "Close app" and "Wait" buttons.

When I click Wait, the o1 model actually completes the response, but the "isn't responding" error window keeps reappearing the whole time.

On the PC (Firefox browser), the error does not even appear; the request just fails. The 4o model works as it should, but I need o1 for the more difficult questions.

Also, there is no actual support from OpenAI, which is both bewildering and upsetting. I am paying for a service that is broken, and you would think they would want to know about it.

What am I supposed to do here?


nothing
Tons of us are in the same boat. My account won't even try to engage the o1/o1 pro models; it simply defaults to 4o. Support has been putting me through the wringer over and over, giving me script after script, to no avail. People on Reddit report it, people on Twitter report it, and OpenAI doesn't care.

I've seen a few people who got messages like "your access to o1 pro is temporarily limited while we investigate suspicious activity," but that is the exception. More often than not, nobody knows what's going on and o1 access just goes haywire.


interesting.

Suspicious activity? I don't even know how to contact OpenAI. My access is broken, but not completely; it seems to be afflicted by a bug of some kind. I am paying for this service, so I should be able to report an issue to a human being.

If there were something suspicious, I'd imagine my entire account would be blocked, or is this some kind of shadow ban? If it's going to stay broken, I should get a refund until it is working properly.


It doesn't seem to be related to shadow bans or anything similar, at least judging by the OpenAI support responses, which appear to be automatic even when you talk to an operator, or at least predefined pasted messages. I'm not even sure they actually check the status of our accounts.

So far the most substantive response I received was: "Retry the request again later, visit our status page to check for any active outages, try another browser, check your internet connection, use incognito mode…"

I hope that if this gets resolved, and I suppose it will be, OpenAI will show some kind of consideration, since paying $200 a month for a service that has been unusable for so long, with no official response of any kind, is quite unusual.

I agree, it is at the least unusual; more like inconsiderate and cheap if there's no actual way to resolve real issues. The OpenAI status page shows that everything is up and running, but clearly that is not the case. I just want honest communication.

The positive aspect is that it gives us time to look for alternatives. I have found that DeepSeek R1 is roughly on par with what o1 Pro (supposedly) should be, at a fraction of the $200 we lose. I hope OpenAI considers refunds or something, because it really is unbelievable to pay for $200/month software that doesn't work.

Agreed. Have you tried API access? I might do that and see if that solves it, while I can only hope OpenAI knows about this and is working on a fix, taking the optimistic view over the obvious pessimistic one.
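For anyone wanting to try the API route as a workaround, here is a minimal stdlib-only sketch of sending a question to the o1 model through the Chat Completions endpoint. The prompt is just a placeholder, and `SEND_REQUEST` is left off so nothing is sent until you set an API key and flip it; check the official API docs for current model names and pricing.

```python
# Sketch: querying the o1 model via the Chat Completions REST API instead of
# the web UI. Stdlib only; flip SEND_REQUEST to True (with OPENAI_API_KEY set)
# to actually send the request.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"
SEND_REQUEST = False  # set True to actually call the API

def build_request(prompt: str, model: str = "o1") -> urllib.request.Request:
    """Build (but do not send) a chat-completion request for `model`."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
        method="POST",
    )

req = build_request("Explain the tradeoffs of optimistic concurrency control.")

if SEND_REQUEST:
    with urllib.request.urlopen(req) as resp:
        # The assistant's reply lives in choices[0].message.content.
        print(json.load(resp)["choices"][0]["message"]["content"])
```

API access is billed per token and separate from the Pro subscription, so it sidesteps whatever account-level issue the web UI is hitting, but it is not a free substitute.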

As for DeepSeek R1, I've only tested the chat with rather complex abstract questions so far, with VERY good results, but I still need to integrate it into my workflow with Cursor AI via API. They've already added DeepSeek V3, which also works decently from what I see, although it doesn't have this "reasoning" capability.

In my case, regarding the current problem with o1 Pro: it's no longer giving me the "Finished thinking" errors, but it's practically unusable since it forgets the context very, very quickly. Even in Pro mode with "reasoning" activated, it generally takes only a couple of seconds to analyze things that used to take minutes (although it never got results before, because it failed).

I asked it some basic questions about its context and limits, and its answer really caught my attention, since I remember that with Plus and o1 or o1-mini I could do things like that without problems, with much longer files, and it kept the context in a very decent way. This is what it answered me:

From what I've seen in reviews, videos, etc., this shouldn't be the case. I mean, how can we expect it to solve complex things if we can't even carry that little context between messages?

I understand its answer says I shouldn't send everything at once, which is understandable since it would be 4,000 lines of code. But what I'm actually doing is simply sending 4 or 5 files of about 800 lines each, one after another, explicitly stating that we should analyze them together to determine the logical or conceptual problem. Immediately after I send a piece of code it does a basic analysis in a few seconds; I repeat this with the next file and the same thing happens; and by the third message it's not even able to maintain the flow of what we were talking about.
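As a rough sanity check on whether this workload should fit, here is a back-of-envelope token estimate for those files. The ~4 characters-per-token rule and the assumed 40-character average line length are both approximations, not measured values:

```python
# Rough estimate of how many tokens 4-5 code files of ~800 lines consume,
# using the common ~4 characters-per-token rule of thumb (approximate).
def estimate_tokens(num_files: int, lines_per_file: int,
                    avg_chars_per_line: int = 40) -> int:
    """Approximate token count for a batch of plain-text code files."""
    total_chars = num_files * lines_per_file * avg_chars_per_line
    return total_chars // 4

print(estimate_tokens(5, 800))  # -> 40000
```

Around 40,000 tokens of code alone (before any conversation history) is well within the context window advertised for the o1-family models, which supports the impression that something was broken rather than a genuine limit being hit.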

This is something I don't think happened even with the most basic models, and it's something you can do with Claude Sonnet, DeepSeek, or Gemini.

There is clearly a problem here.

It now seems to be functional again in the OpenAI interface. As for running 5 simultaneous queries, that seems interesting but also uncommon; I'd consider it a bit extreme, and better done on a local system if you can afford one powerful enough, or by setting up a GPT that has all the data to look at at once. Memory is truly the largest roadblock to most efforts, but I'm exploring that solution myself with GPTs. I'm no expert, learning as I go.

Having o1 down is rough, but it shows how helpful it is to have. I don't need it as a chatbot; that seems unreasonable. It's like a pocket expert I can get help from once in a while. As for DeepSeek R1, I've been considering it, but I'd have to spend a bunch of time re-tuning it to get rid of any CCP influence. I'll re-assess local open-source options in about six months' time.

So thank you to whoever at OpenAI fixed the issue (assuming it stays functional).

I’m glad you were able to fix it in your case.

Maybe I misspoke; it's not really 5 simultaneous queries but rather trying to give it enough basic context to analyze, just as I was doing with o1 before upgrading to Pro. That is, I never used o1 as the base of my workflow, but I did use it for reviews or more abstract considerations, and that's what I totally lost when upgrading from Plus to Pro.

A small update, sorry for the off-topic intrusion: since we mentioned DeepSeek R1, Cursor AI has just added it to its $20/mo plan in the latest version of Cursor (0.45.1), at least temporarily. Its interaction with complex projects of thousands of lines of code across different files, and its ability to understand the overall context, is really surprising.

Watching review videos comparing complex prompts between o1 Pro and DeepSeek R1, I also see that it is on par with it, and often surpasses it, in both quality and processing time.

For my part, I am done trying to get o1 Pro to work; I have already wasted too much time. I will now try to get a refund for Pro and switch back to Plus for my wife's daily tasks, or possibly drop it entirely, but I will no longer use it for logic and programming tasks, since the combination of Sonnet 3.5 and DeepSeek R1 is superior for me, and $200 a month is definitely not worth it at all.

At least it helped me learn how OpenAI acts when a user paying $200 a month has a problem that has made the service unusable for even basic things since upgrading, and how total their lack of interest in customer support is: they keep suggesting that I change my DNS, check my internet connection, and similar. I never got a response via email, and the support chat answers every few hours; I've been at it for days with these kinds of suggestions.

Absolutely the same for me. The problem happens suddenly, and after that, using the o1 model in the same chat returns nothing, no matter how many times you retry or how long you wait. If I switch the response to o1-mini or any of the 4o models, the AI responds fine.

The problem is: if I pay money, I expect it to work, or at least have the bot say "limit reached, try again in X hours." Not the mysterious "Thinking."

I still have my subscription active until next month but I have completely cancelled the renewal, as I still haven’t managed to get support to look into the case. I won’t even pay for plus anymore.

I recommend trying DeepSeek R1, but the 671B-parameter model. If you have concerns about privacy or Chinese interference, you can access it through US companies like fireworks.ai that run it on their own servers in the US, and there will be many alternatives soon, as everything is speeding up rapidly.
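Since these US-hosted providers typically expose an OpenAI-compatible endpoint, switching mostly means changing the base URL and model name. A stdlib-only sketch, where the base URL and model identifier below are assumptions to be checked against the provider's documentation:

```python
# Sketch: the same Chat Completions request shape pointed at a US-hosted
# DeepSeek R1 provider, assuming an OpenAI-compatible endpoint. The base URL
# and model identifier are assumptions; verify them in the provider's docs.
import json
import os
import urllib.request

BASE_URL = "https://api.fireworks.ai/inference/v1"       # assumed endpoint
MODEL = "accounts/fireworks/models/deepseek-r1"          # assumed model name
SEND_REQUEST = False  # set True (with a key configured) to actually call it

def build_r1_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completion request for hosted R1."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('FIREWORKS_API_KEY', '')}",
        },
        method="POST",
    )

req = build_r1_request("Summarize the CAP theorem in three sentences.")

if SEND_REQUEST:
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Because only the URL, key, and model string change, existing OpenAI-based tooling and scripts can usually be pointed at such a provider with minimal edits.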

Integrated with Cline or Cursor, or any of the many other alternatives already out there, it's a game changer, at least for everything related to code and technical work. Perhaps ChatGPT/OpenAI is still superior for writing or language-related tasks, but DeepSeek is on another level technically.

This, added to my terrible experience with OpenAI, means it's no longer worth giving it a chance.
