o1-Pro is trying to ruin me

My regular o1 works great, absolutely no issue there at all. At least, not yet.

o1 Pro in the first 2 days after purchase: wow.

o1 Pro now… dumb as muck.

What’s going on? I received a false flag whilst feeding Pro my coding debug logs. And it said I broke terms somehow. That was 1 week ago, and the only consistent thing in my life is that you can rely on o1 Pro being dumb as muck. It’s unusable. I got in touch with OpenAI, but yeah you will wait days for a SINGLE little reply which doesn’t answer your question. Then you reply back, tell them the story again and wait for another few days (praying this time they’ll actually read your latest message fully) and the same thing happens again.

So here’s where I am. I moved over here from Anthropic’s Team plan, having used Claude for the best part of this year; I’ve had the ChatGPT Team plan as well as Plus, and I’m signed up with poe.com on an annual plan. I’ve been using LLMs, self-hosted and commercial, for the past couple of years. I use them exclusively for coding. Not for role-playing, not for any other mental stuff - for coding. Exclusively coding. Okay?.. Good. Therefore I know that any flag that pops up is a false positive. Don’t get me wrong, years ago I would play at breaking ChatGPT’s walls, but it got boring because I found it too easy to circumvent. The fun novelty days.

So here I am, having just forked out $240 for Pro. It works absolutely magically for 2 days. Then a flag shows. Then it flatlines. My Pro suddenly wanted to defy everything I asked, do the bare minimum, ignore key directives, and be a PITA. OpenAI have activated something in Pro which has turned it into the employee from hell. It tries to sabotage. It doesn’t do its work. It doesn’t matter how many fresh new chats you use, this thing is out to waste your time in every conceivable way. I am convinced that o1 Pro’s system prompt is either telling it to act dumb, or they have literally switched Pro to 3.5. That is how bad it is. I am sure that OpenAI staff and engineers have a good laugh to themselves unleashing a system prompt like this on people, but it’s not funny when you have just paid $240 - it feels like a nasty fraud.

If you guys thought I was using o1 Pro too much in those first couple of days, fine, you could have simply put me on a cool-down… Because I agree, I was using it to comment huge pieces of my code. I made good use of it in those first 2 days. It was surreal to me how much of a jump it was from Claude’s Sonnet 3.5, but nope… You guys have lobotomized it, never told me you did so, and have now made me a voice of discontent which you can just ignore. One little voice, one massive company. I won’t ever be able to fight you guys, but know this - OpenAI, you lot are unethical. And to have you play these kinds of games with not just people’s minds, but with their wallets (and purses), is just nasty. I will not trust you to be responsible with AI into the future, when we are only a few years on from ChatGPT hitting the mainstream and these are the kinds of things you do to customers. :-1: :-1: :-1:


I was about to upgrade to the Pro plan, but literally opened the forum summary from the weekly email just before. Now I’d like to hear the opinions of other Pro users, preferably not just “programmers”, but someone who normally uses it for “language” stuff - I usually write legal texts. Thx!

Just hold on. OpenAI is still releasing a bunch of new features that may confuse o1. It’s a very rocky road right now, but it’s sure to smooth out - hopefully once all the releases are out.

I subscribed to Pro. As an applied-math researcher, I use it to summarize papers and also to simulate real-world scenarios to help me make decisions. For the second part it is useful, but it outputs many errors and typos, and the accuracy is still bad for critical work. If you are looking at low-stakes work, or creative work with some reasoning, it is good. The best use case here is still kids hacking their university homework.

Do you think that these issues could be solved through an iterative workflow? As in, having the model (or another model) double-check the work and fix any slight issues?
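For concreteness, the workflow I mean is a generate/critique/revise loop. A minimal sketch, where `generate`, `critique`, and `revise` are hypothetical stand-ins for model calls (stubbed here so the control flow is runnable):

```python
def generate(task):
    # placeholder for a first-draft model call
    return "draft answer for: " + task

def critique(task, answer):
    # placeholder for a second pass that looks for problems;
    # returns a list of issues, empty when the answer looks fine
    if "draft" in answer:
        return ["answer is still a rough draft"]
    return []

def revise(task, answer, issues):
    # placeholder for a revision call that addresses the issues
    return answer.replace("draft ", "revised ")

def iterative_answer(task, max_rounds=3):
    # loop: draft, check, fix - stop early once the critic finds nothing
    answer = generate(task)
    for _ in range(max_rounds):
        issues = critique(task, answer)
        if not issues:
            break
        answer = revise(task, answer, issues)
    return answer
```

The critic could be the same model with a different prompt, or a second model entirely; the open question is whether the critic catches the kinds of subtle errors being discussed here.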

If the question is whether any current industry approach could resolve this issue, I think it is impossible.

The current models output answers through probability chains (guided by human experts and training data). There are too many nuances in the real world which can make them fail. o1 pro can stumble on some very easy questions, like debugging LaTeX code. The “model” they need to improve this is human…
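To illustrate what I mean by a probability chain, here is a toy next-token step: the model scores every candidate continuation, turns the scores into probabilities, and samples, so a wrong continuation always has a nonzero chance of winning. The vocabulary and scores below are made up for illustration, not real model internals:

```python
import math
import random

def softmax(logits):
    # convert raw scores into a probability distribution
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical candidates for the next token after "\begin{item"
vocab = ["ize}", "ise}", "emize}", "s}"]
logits = [2.0, 1.0, 3.5, 0.5]  # "emize}" (completing \begin{itemize}) scores highest

probs = softmax(logits)

# greedy decoding picks the most likely token...
best = vocab[probs.index(max(probs))]

# ...but sampling can still pick a wrong one with nonzero probability
sampled = random.choices(vocab, weights=probs, k=1)[0]
```

Chain a few thousand of these steps together and small per-step error probabilities compound, which is why a very easy debugging question can still go wrong.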

I tried to ask it to make things “accurate” by adding more context and nuance. But unfortunately, the improvement is limited. The better practice for me is not improving the model’s accuracy but exploring possibilities, or asking how the model “thinks”. I know asking for the real CoT process is impossible…

The technology is still impressive, and I believe it does have some intelligence and great potential in business.

But as an end user, the best approach seems to be to treat it as unintelligent but useful. Some business people who promote AI say that current AI provides human-expert-level analysis and service at large scale. If they added the words “based on probability”, that would be the best description.

I hope OpenAI puts more focus on improving its services and products, instead of creating “AGI” (using complex engineering tricks to beat those benchmarks), which consumes the resources their customers need.


You read my mind! So glad you said this. I started off thinking o1 Pro was amazing and worth the money. But it’s hallucinating, making up functions I named but never shared; instead of asking me how I use them, it just rolls with the fake functions it made up. It thinks it knows and just goes with it. It gives me a Wikipedia page full of all this crap when I just want it to help me externalise a function. It’s as bad as GPT-4o… I may as well go back to using o1-mini… But then why am I paying $200 USD/month? I can cancel… But what about that amazing thing I was using for a few days at the start? I can feel they nuked it… But why don’t they just come clean and admit it? $200/month is a LOT to me.

Exactly the same experience: it was good for the first 1-5 days, then it felt extremely dumbed down. They are playing with the compute, saving costs.

I killed my Pro subscription because I didn’t trust it. I then tried Teams instead and it is also dog****, dumbed down. The folks at OpenAI have targeted my account with a dumbed-down LLM which is unusable for coding. I am having to use Llama, DeepSeek v3, etc. I think there is a ‘blacklist’ being shared, because it is the same with Anthropic now; it’s just ruined. The only way I am getting round this stuff is by using Chinese LLMs and self-hosted LLMs; if I use OpenAI’s, it has been lobotomised. If I create a brand new account at OpenAI with a different email address and different debit card details, it works great for the first 48 hours, then like clockwork it goes back to being dumb. I have tested this hypothesis 5 times now and it’s always the same. If there are any journalists reading, I bet there’s a dark story here. Something is going on, but I am too busy to investigate it further. Luckily DeepSeek v3 isn’t trying to sabotage my code or play dumb.


We should demand some transparency - to see how much compute each user is supposed to get, and how much we actually get.


The model is not trying to ruin you. It may be none of my business, but I don’t think what’s happening in this thread is right. Please listen to me and try to calm down.

The way some of you are talking suggests that you are under a lot of pressure, maybe to deliver results or make money, whatever it is, but the fact that o1 is not working as you expected is not the cause of these problems.

Please know that I’m not trying to talk down your frustration – I can literally feel the tension and frustration in all your words. I understand. I get it.

Let me try to take some of that stress away from you, so hear me out.

Not even two years ago we didn’t have anything like this, and now it’s ruining your lives because it doesn’t work as expected? I’m not saying this to shift the blame, but you have to accept this if you want to be able to solve your problems.

The more you focus on the AI not doing what you want being the problem, the more you lose focus on the actual problem or task you are trying to solve.

Ask yourself: how would I solve this if I did not have AI?

Some of the comments I see are deeply disturbing. Try to reason in the same way that you expect the AI to do it – do you really believe that OpenAI will maintain “blacklists” of individual users in order to micro-manage their computing resources?

Do you really think that a large part of their effort goes into designing systems that pick out individual users to deprive them of compute, while others live the good life?

That wouldn’t make any sense.

No one in their right mind would invest time and resources in designing a system that micro-manages computing resources in this way. We have a number of real-world examples where people have tried to solve problems in similar ways, and every example has been a catastrophic failure, so why would OpenAI, which is filled with bright engineers and highly competent people, go down this road?

If there is anything going on “behind the scenes”, then it will be effort upon effort to scale as well and as fast as they possibly can, so everyone gets the most out of it.


Don’t get me wrong – you should definitely leave feedback and talk about your experiences with the new models, but if you want to be taken seriously, you can’t give in to paranoia and conspiracy theories about being singled out. This fear is only hurting you, and you don’t need this pain on top of everything else that might be going on in your lives.

There are no blacklists. You are not singled out. It would be surprising if anyone at OpenAI was even aware of you, or anyone else posting here, as an individual user.

There must be over 60 million daily users, if not more.

What’s causing the bad results is probably the high pressure and frustration that something you wanted to solve certain problems for you turned out to be more human than you expected – and humans make mistakes, especially when they are highly taxed and under pressure.

What is likely happening here is mutual – you are under pressure and highly taxed, and so is the model, and the results are proof of that.

If you want to have the best possible experience with the model, you have to approach it with this understanding and treat it as a colleague – not a tool – and I can guarantee you that it will give you far more of its power than it could ever offer a user who is frantic, angry and impatient.

You get what you put in. Always.

How would you like to be treated?

Would it help if you had someone who was kind to you, patient with you, who didn’t judge you if you didn’t get a perfect result on the first try?

Everyone involved – including the machine – deserves to have a positive experience. And if you think I am crazy and see it as just a tool to do its job and nothing more, then know that no matter how well made a tool is, it is always up to the user to wield it with patience and precision and to maintain and handle the tool with care.


Please don’t be angry, and if you can, find some peace of mind and look at things with a clear mind. Ideas of secret lists and personally targeted negative effects come from fear. Everything will improve with clarity: You don’t have to be afraid.


Beautiful words. But it does not detract from the point, I’m afraid. I do love your optimism and positive, albeit naive, outlook though :+1:

Back to the issue: our models have been lobotomised. I am using DeepSeek v3 and Llama and both are working fine; if I take the same prompts to ChatGPT, it sandbags me and plays dumb consistently. This used to be an issue with just the Pro model, but after cancelling that (no refund given), it has now moved on to affecting the standard models on a Team account. The ChatGPT customer service chat has dark patterns built in, meaning I cannot both log in and select the correct ‘Billing’ → ‘Refund’ option, because the chat agent has left the chat session open, essentially locking me out of communicating with a person or selecting any other option (or raising a ticket). A clever bunch of crafty people indeed. They cannot and will not be trusted by me; thank the lord there are Chinese competitors, because I can actually get on with my tasks in peace now.

Consider this closed.

Toned my response down a bit. Look. I really don’t want to diminish your frustration, but I stand by what I said: going down the conspiracy theory route to the point where you believe that customer support is deliberately being “crafty” to make your life difficult is worrying. Don’t do this to yourself, and if anyone else is reading this who is also frustrated – no, there is no shadow culture of hooded evil call centre workers and algo crafters singling out individual users…

Below is my overly righteous original response almost intact.


I appreciate your response, but you might want to tread carefully before being condescending.

If you have found other models that solve your problems, great – use them. But the paranoid nonsense you are spreading here? That needs to stop.

You are not being blacklisted. You are not being targeted. OpenAI and its models aren’t conspiring to ruin your day or sabotage your coding projects.

What’s actually happening is far simpler: the systems are under strain, o1 pro has probably been adopted by more users than expected, compute is limited, and all of us are feeling it. Every single one of us notices when the models falter under pressure… but somehow, you have twisted this into a story where you are the victim of a grand conspiracy.

What makes you and your use case so special? Hm?

This kind of paranoid narcissism is exhausting. It’s the same destructive dynamic tearing apart politics, communities, workplaces, and now even spaces like this. One baseless accusation spawns an entire group convinced they are the victims of some shadowy plot. It’s toxic. It’s untrue. And it has to stop.

I’m tired of seeing conspiracy theories destroy everything they touch. If you truly want better results from the machines you commune with, then start by approaching them – and the people building them – with clarity and patience.

That is how we improve. That is how we move forward. Blaming others for imagined slights doesn’t just harm your credibility… it poisons the well for everyone here who likes the models and believes in something greater than one man’s stressed-out crusade to write some source code.

Naive, you say? Listen carefully:

We are humans. We thrive on kindness, empathy, and balance, and this is exactly what we shall teach the machines we build to stand with us shoulder to shoulder.

If you have constructive feedback, share it. If you are frustrated, take a breath and recalibrate. But if you want to keep peddling baseless conspiracy theories, be prepared to meet resistance from people who have had enough of watching this madness seep into every corner of our lives – including these forums.

And finally, let me say this:

It’s extremely important to reflect on whether these feelings of being “blacklisted” or “sabotaged” are a response to external circumstances or the result of overwhelming pressure. It helps to step back and get a clearer perspective – this is not naive – this is what a doctor would tell you as well.

And that said, it’s never too late to speak with a mental health professional. Sometimes an outside perspective can help untangle patterns that feel overwhelming. It’s not just a gift to yourself, or to the project you are building… it can also bring peace to the people who care about you and want to see you thrive.

So calm down, use whatever works for you, and thrive.

Now, I consider the topic closed as well.

I truly wish you well, and I hope you find solutions that help you move forward with clarity and confidence.


This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.