Is OpenAI actively lobotomizing GPT-4?

OAI is in a similar position with ChatGPT-4 users: they don't benefit from us using more tokens, so I don't think the issue is about wanting us to pay more. I usually avoid jumping to a conspiracy theory as my first explanation; if they're doing something wrong, it's more likely because they believe it's the right thing, or because of unforeseen consequences. Deliberately complicating things to make people pay for more tokens seems unlikely anyway: compute resources are limited, so they would probably just raise the price per token instead, I guess.

I am not as impatient as I was at the beginning of February, when I was losing my mind because I kept going over my 40 messages per 3 hours. So don't think I can't relate to you; it's pretty similar, in fact. I do remember being pretty upset one day.

The Most Likely Hypothesis

I believe they've implemented cost-saving measures, like quantization or other optimizations that are supposed to reduce costs without hurting performance. They also want to ensure safety, so their internal tests probably looked fine from their perspective. My guess is that this focus on efficiency (which I'll loosely call quantization, as someone who doesn't fully grasp the concept), combined with their priority on safety, may have produced what I'm calling Dissonant Synergy. I'm not completely sure how this played out, so these are just guesses, but I'm fairly convinced it's something like that.

The Less Likely :smirk:

It might sound funny (and I don't think it logically makes sense), but others have suggested before that this could be seasonal behavior :salt: (take this with a grain of salt). It's interesting that this theory comes up all over the internet.

I do think competition from the other two models (one of which seems to have quite a few problems, to put it mildly) will make OAI think twice. I'd love to have access to a tool that's less conservative about things that aren't harmful.

Many issues I've complained about in the past have been resolved, so I guess I wasn't the only one noticing them. We need to be patient, but since OpenAI is somewhat opaque, it's hard to tell whether they have a clear iteration plan.

I'll say it again: they can keep as many things private as they wish (I'm not Elon Musk), but there are plenty of other areas where they could be more open. Maybe they're just not great at communicating; I'm not sure.

I hope your situation improves soon. I'm not certain our problems are exactly the same, but I believe they may have similar causes.