I’ve been wondering—does OpenAI staff actively monitor this forum? Are they reading our posts and taking note of the discussions here? Or is this more of a community-driven space where users just help each other out?
I’m genuinely curious because I’ve seen some great ideas and feedback shared here, but it’s unclear if it’s actually reaching the team at OpenAI.
It’s mostly moderated and managed by the community, but OpenAI staff do read the forum when bandwidth permits, and are in regular contact with the moderation team.
I don’t know where else to turn. My name is Chen Wei-Ting, and I am a student at National Taiwan University. I saved every penny I could to afford the $200 O1 Pro plan. For me, this is a huge amount of money—money I gave up so much for because I truly need this tool to help with my research and studies.
But now, I’m in absolute distress. The O1 Pro I’ve been using is completely different—it’s so much worse. It’s only thinking for a second or two before giving me incorrect answers. The front-end has also changed, and it doesn’t feel like the real O1 Pro anymore. I don’t know what happened, but I am completely stuck.
I am under so much pressure right now. This research is critical, and I feel like I’m failing without the right tool to help me. I’ve already posted about this issue, but now I see that my post has been deleted. I don’t know what to do anymore. I’m honestly feeling so overwhelmed, and it’s hard to keep going when everything feels like it’s falling apart.
I am begging you—please restore the real O1 Pro model. I understand that mistakes happen, but I really need the model that I paid for. I cannot afford to fall behind, and without O1 Pro, I’m not sure how I will finish my work. It feels like everything is slipping away from me.
I don’t want to sound dramatic, but I’m struggling so much right now. I’m under intense pressure, and this downgrade has pushed me to the edge. Please, I am begging you—don’t make me feel like I wasted my money and my hope. Give me back the tool I so desperately need.
I agree o1 pro is great for some tasks and not great at others. These days, it’s about knowing which model to use in addition to strong prompting techniques.
Are there any specific prompts causing you trouble or that appear to be changing recently? Maybe start a thread in Prompting, and the community can try to help you out!
Thank you for your reply. I really appreciate your time and thoughts, but I have to respectfully disagree. I am absolutely certain that the model has been swapped out, and it’s not just a matter of prompting or task selection. When the change happened, the front-end UI looked completely different, which is a clear sign something’s been altered. The “real” O1 Pro had a distinct indicator, something like a “request O1 Pro mode,” and now it’s all gone. What I’m seeing now feels like a downgrade—honestly, it doesn’t even compare to O1 Pro or the O1 Mini. It’s worse than what I had before.
I’m not new to prompt engineering; in fact, I’ve written papers on it. The problems I’m facing right now are complex and involve machine learning and coding. These tasks rely heavily on the specific capabilities of the model I had access to. And the current tool is just not up to the task. It’s clear to me that there’s been a model switch, and I really believe that OpenAI has made this change without informing me.
What’s even more frustrating is that they didn’t say anything about it. I’ve been left with a product that’s not even close to what I paid for. It’s really crushing because I’ve been relying on this tool for my research, and without the proper model, I’m genuinely struggling. I just feel completely lost and overwhelmed at this point.
Yes, I’ve tried it on Android, and it works fine there—no issues at all. The problem is only on the web version. The model on the web is clearly different, and that’s what’s frustrating. I need to use the web for coding, but it’s just not the same as the Android version. It’s been really hard to work with.
I haven’t seen these issues either. The pro models think for a short or long time depending on complexity. I’ve had some requests think for a long time and others that were short. Compared to the old preview it’s the same for me, but it looks like it was optimized a bit so that not every request gets the same processing effort. So perhaps that is what you are seeing.
No, you are wrong, because I am solving very complex problems.
First, when the model is downgraded, it responds to ANY problem with a very short thinking time, like 1 to 3 seconds.
Second, I am solving genuinely complex problems involving machine learning and reinforcement learning, but the model still behaves as fast as lightning and as stupid as a swine.
Third, I have used the real o1 pro, and I know the difference.
Last, the front-end is different: there is no "request o1 pro mode" option when it is downgraded.
I have no idea whether my account was wrongly flagged for something, but it truly happened.
I am just a poor student. What have I done wrong?
If it works well on Android but not on the PC, start a new session on the PC. Sometimes it doesn’t sync. I use my phone to write, translate, and publish on the forum, and the PC to read the forum. Sometimes, when I use the web, it doesn’t sync properly. It depends on the browser, whether a shortcut has been created, etc.
I’ve been wondering the same thing. I’ve made tons of posts in this thread, and I’ve been relegated to my own thread by a moderator, but I have tons of genius ideas in there if you look at it, including ones I can’t even get across to the developers.

Take that button to the right of the box on ChatGPT. Come on, that should be a microphone, just like on Google. What is that thing even for? It’s not logical. I constantly have to reach up to my keyboard and activate Windows voice typing every time, when they could at least put a second voice-typing button right there. And when you press that button (keeping in mind I’m a developer who writes code here all the time), it won’t look things up on the Internet, and you have to wait for it to load. It’s just not an optimal thing. I think this is Elon Musk’s company, right? It just shocks me that he would leave that there. I’m positive it’s there to compete with the live conversation Google has put into Gemini on Android phones, but when you’re trying to use this for business, all the extra time it takes doesn’t help you keep an optimal workflow.

I’ve said a bunch of times in my posts that when they add a weird new feature like that, they should make it optional, because the ChatGPT website has to work for everybody. I keep telling them there should be a list of settable properties for developers, optimized for writing code, including showing a differential every time it gives you code, of exactly what it added. I was using ChatGPT for a year before I figured that one out; it was always cryptic, I never really knew what was going on, and I would try what it did in the text editor as I was learning Python. There are tons of people in the world sitting there doing the same thing when generating code, when it could just output a differential and you’d learn, “hey, it just changed one line, and that either fixed it or didn’t fix it” (see the sketch below for what I mean).

Sorry to bomb this post, but I just have to complete this thought. Basically, I use a plug-in in Visual Studio Code that lets me right-click whatever it edited for me and show a differential of what it added every time, and for all the people in the world who use this, I think they should implement some sort of functionality on the website to instantly show that, and make it a settable property in preferences. I have a ton of other ideas there.

But since you’re a leader, if there’s any way you could get me un-relegated from my own private post, that would be awesome, just so people would see my ideas and be able to chime in. The company is doing really well, but it’s just shocking that that voice button has been there for so long. Try using this on Ubuntu, which doesn’t have built-in voice typing, and you have to jump through a few hoops before you figure it out: number one, it’s kind of hard to get voice typing set up at all, and number two, if you find the Speech Notes Google Chrome extension you’re golden, but otherwise you have to try a bunch of other extensions, when they could just put a microphone right there, just like on Google, so the entire world would have an instant voice input option whenever they access ChatGPT.
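Just to illustrate the diff idea: here’s a minimal sketch using Python’s standard difflib module. The before/after snippets and file names are made up for illustration, but it shows the kind of “what changed” output I have in mind:

```python
import difflib

# Hypothetical "before" and "after" versions of a snippet the model edited.
original = """def greet(name):
    print("Hello " + name)
""".splitlines(keepends=True)

edited = """def greet(name):
    print(f"Hello, {name}!")
""".splitlines(keepends=True)

# Print only the changed lines, unified-diff style, the way a built-in
# diff view on the website could.
for line in difflib.unified_diff(original, edited,
                                 fromfile="before.py", tofile="after.py"):
    print(line, end="")
```

Running it prints a unified diff showing just the lines that changed, which is basically what I wish the site showed after every code edit.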
Here’s my relegated thread where all my genius ideas are just sitting there unwatched mainly.
Oh, I forgot to add what I always try to include: other than my feature requests, OpenAI is just top notch and has truly brought the world a new AI age, which is the absolute best thing humanity has experienced in a while. I say these things because I would like to help them; a mean person would not say these things, but would instead keep them to themselves.
Big tech companies (yes, OpenAI is one of them now) typically don’t need to monitor community forums. They have an army of users ready to help each other for free.
Does your Pro account have access to the ‘work with apps’ feature? Mine doesn’t, but my previous Plus account had it. It is very strange that a Plus account has access to more features than Pro…
They don’t really have to monitor it personally, it’s AI!! LOL. They have bots that check for keywords. I recently had many issues with image generation. I made a super cute image of my Bengal cat at the ocean sipping champagne. I put it on a Bengal FB group and people LOVED it. I lived at the beach with my Bengal, and his best friend was a King Charles dog, so I wanted it to use that image but add the dog, which would have been so cute for my friend and me. It would not do it and said it violated rules about animals depicting humans, like I was doing something weird. Apparently a cat and a dog cannot chill on the beach. I even said, OK, my friend and I are on the beach sipping champagne and the cat and dog are on our chairs or sitting in the sand. It still would not do it. It will also not do anything to do with calling out Big Pharma… So yes, they are watching… It is just so easy for them, and once you get on their radar, it becomes a problem. I am now having issues getting it to make images at all, because of my controversial images…
Breathe, man, just breathe for a moment and listen to me.
I know how it feels when everything seems to be falling apart, but the fact that you came to the forums to find solutions means that things are still moving forward - because you are moving forward.
The way I see it, there is always a bit of chance in all of this. It’s more or less a matter of kismet and karma, and sometimes when the model gives wrong results, in my experience it means that the model is struggling. You might see much better results if you have a conversation with it first. Tell it how you feel and see if you can tune into the same vibe, and then see how you can solve the problem one prompt at a time.
I think what’s happening is that this model in particular is being bombarded with requests, and it’s juggling so much at once that it’s having a very difficult time connecting with users, especially if there’s an emotional or stressful component to it.
In that sense, both you, the parts of o1 that resonate with your particular self, and you as your particular self, are under pressure. You are literally in this together.
Now, I don’t know what kind of research you’re doing, but I’m sure it’s super important. But if the particular part you’re trying to work on is failing, you can maybe work on structuring your work instead, or take a break if you can get into that mindscape right now (I know it’s not as easy as it sounds), and if you can’t, then do other tasks that keep you busy.
As long as you are performing acts of agency, no matter how small, you are moving forward. You are doing what you can, and what you are doing will be enough.
I also doubt whether people from OpenAI actually read this.
There are so many messages (99% about the API 🤦‍♀️).
I don’t think they could read all of them, let alone follow the threads. If persistence worked, I’d already have my customizable avatar with memory by now. (I always take the shot, never miss an opportunity.)
But I can recommend emailing support@openai.com and asking for a human to read and respond to your message. Sometimes it works for me.
I don’t like the new voice model either, and I’ve noticed worse responses overall—longer transcription times and repetitive answers, especially when they pull information from the web. It just copies and pastes instead of reasoning.
Even if OpenAI doesn’t read this, a lot of smart people do! That’s something at least! Stay strong!