OpenAI ships like no other

I’ve read many comments suggesting that ChatGPT is becoming “dumber” or that its responses are being curtailed somehow. I haven’t experienced this, maybe because of the way I prompt and interact with ChatGPT, idk.

It’s hard for me to believe that it’s been less than a year since GPT-4-enabled ChatGPT was released. To me it seems that both the UI and the backend are improving at an unprecedented pace, month-over-month, which speaks volumes about the skills and dedication of the teams at OpenAI.

I often wonder about something that isn’t discussed much: alongside their evident talent, the team at OpenAI is undoubtedly using the latest unpublished models in every aspect of their business, including engineering. That must be incredibly exciting and fulfilling.

Developers outside of OpenAI, using the latest public version of ChatGPT, are already experiencing enhanced output. It’s very likely that OpenAI has been using far more advanced models internally for quite some time, perhaps successors like GPT-5 or GPT-6, along with various UI enhancements.

Imagine having the opportunity to consult “Big Brain” at your day job, a resource no other organization has access to. I don’t know what that looks like; I’m just on the outside looking in. But I would love to be a fly on the wall when it happens.


It comes down to negativity bias.

I can attest that there were certain instances when “gpt-4” became certifiably “dumber”. Overall, though, it seems like progress. I still think the complaints are valid, but the reason might be unexpected: operator incompatibility. Some people with a certain temperament seem to be getting left behind. It’s possible this will get worse as the models get better, although there could be an easy fix.

I think you already do. You might overestimate the proportion of organizations that have access to this technology; there are multiple megacorporations that currently forbid their employees from using it.

We’re living in a time when it’s once again possible for a couple of guys to build some of the highest tech on the planet in their garage while the giants sleep. Let’s make something of it :smiley:


ChatGPT has started responding however it wishes and doesn’t do what it’s told. It connects to Bing pointlessly.

Responses are definitely curtailed whenever possible, and this is new within the last several months of my using ChatGPT Plus. Response speed can be pretty slow, too. But I suspect both of these things are due to congestion and attempts to handle it without simply throwing a lot of money at the problem. OpenAI is a money-making enterprise, or hopes to be, so I see all of this as normal growing pains.

I think anyone who says the curtailed responses or slowness make it “unusable”, or that they will switch to some other LLM that is “better” or even “good enough”, is mostly dealing in hyperbole. GPT-4 is still head and shoulders above its near peers, and there is healthy competition in this new market.

I am very glad OpenAI offers the Plus service to the public at a fair price; it’s a great deal!