I stopped paying this week. As the GPTs no longer respect these instructions, this no longer has any use. It got so bad…
I also stopped paying this week. As I also feel that the GPTs no longer respect these instructions.
Here is a reference:
(If you use the Arc browser, use this link to see a highlighted portion: Quote from “New embedding models and API updates”)
GPT-3.5 Turbo is not free if you are using the API; the announcement is saying that API access now has lower pricing.
Nope, that’s why they have a TOS.
Keyword, early.
There is no need to denigrate other community members who are not involved in your discussion!
There is nobody here who can help you with your issues. If there are still open questions you can contact help.openai.com and request a refund.
I am also here because I'm sad that the product essentially no longer exists.
I’m working on a project that I took a short break from, and now, whenever I ask it questions about the same code that I was working on before it absolutely fails.
I spent most of my time trying to get it to actually output code. It just gives large blocks of text, a few lines of code, and even then, always has comments instructing me to write parts myself.
Before, it churned out huge blocks of code, and when I asked it to summarise by giving the whole page/class, it would gather all the things we were discussing and put them into a complete copy-paste.
I used it to restructure large blocks of rxjs code into best practices.
It hasn’t done that for a while now. All it does now is fail to address the thing that I’ve asked, while also dropping half of the other features from the code block as it goes.
I’ve read articles that say OpenAI are aware of the accusations that it has become “lazy”.
I saw an article that says they have released a fix for this to the API; is anyone using it this way who can confirm whether it’s made any improvement?
Hopefully whatever they have done will be bubbled up to the web interface pretty soon.
I can’t post links, but I just did a Google search for “chatgpt lazy api fix”, and the article I’m referring to is the Mashable one.
It actually links back to a blog post on OpenAI:
Updated GPT-4 Turbo preview
Over 70% of requests from GPT-4 API customers have transitioned to GPT-4 Turbo since its release, as developers take advantage of its updated knowledge cutoff, larger 128k context windows, and lower prices.
Today, we are releasing an updated GPT-4 Turbo preview model, gpt-4-0125-preview. This model completes tasks like code generation more thoroughly than the previous preview model and is intended to reduce cases of “laziness” where the model doesn’t complete a task. The new model also includes the fix for the bug impacting non-English UTF-8 generations.
For those who want to be automatically upgraded to new GPT-4 Turbo preview versions, we are also introducing a new gpt-4-turbo-preview model name alias, which will always point to our latest GPT-4 Turbo preview model.
We plan to launch GPT-4 Turbo with vision in general availability in the coming months.
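For anyone testing this through the API, switching to the alias from the quoted post is a one-line change. A minimal sketch below just builds the standard chat-completions request payload to show where the alias goes; no request is actually sent, and the prompt text is only an illustrative placeholder:

```python
import json

# "gpt-4-turbo-preview" is the alias from the announcement above: it always
# points at the latest GPT-4 Turbo preview (gpt-4-0125-preview at the time
# of the quoted post), so you pick up the "laziness" fix automatically.
payload = {
    "model": "gpt-4-turbo-preview",
    "messages": [
        {
            "role": "user",
            "content": "Refactor this function and return the COMPLETE file.",
        }
    ],
}

# This JSON body is what would be POSTed to the chat completions endpoint
# (https://api.openai.com/v1/chat/completions) with your API key attached.
body = json.dumps(payload)
print(payload["model"])
```

If you were previously pinning `gpt-4-1106-preview`, using the alias means you don't have to edit the model string again when the next preview ships.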
Same here! I gave up on it; it's becoming progressively dumber!
As an avid user and huge fan of the technology, I’ve been relying on it for months, primarily as a programming buddy and educator. But recently, I’ve experienced a noticeable shift. I remember the days when I could submit the most misspelled, lengthy, convoluted, and almost directionless prompts, and like magic, it would decipher what I wanted and produce beautiful code almost every time on the first try. On the occasions when it didn’t quite hit the mark, it only needed a small nudge in the right direction.
While I’ve noticed a slight deterioration in quality with minor dips on specific days, something has felt incredibly off over the last week or so. The responses now feel more robotic than ever, and it consistently fails to grasp the intent of my prompts. Even after providing detailed prompts, it still struggles to understand what to do.
I’m not sure what changes have been made at OpenAI, but I do hope there’s a way to reverse the damage that has been done to the product. I gauge the quality of the product by how often I need to use Google. The better this product is, the less frequently I turn to Google. But recently, I’ve found myself using Google a lot more.
Oh… and as that avid user, of course this post has been touched up by it.
The fun part is that such models might become efficient and distributed. These limitations, imposed without any regard for the customers who are ALREADY paying to use the service, might be legal, but this “strategy” will backfire soon.
From what I can gather (and this is still speculation on my part):
ChatGPT dumbed down its generative AI since its user base increased from Microsoft basically embedding it into their services. And from what I’ve seen, it also now relies more on Bing… from Microsoft.
This is what happens when a company basically gets absorbed by another.
It was useful while it lasted, but now it’s more like a glorified Bing search tool than anything else.
OpenAI has turned its company into a footnote by accepting a deal from Microsoft… it could have become so much more. It’s a shame, really.
GPT:
Yes, given the length and detail of our conversation, it’s likely that there have been more than eight instances where I committed to providing complete code and then failed to do so. Each time I reiterated the commitment to provide complete, functional code, yet there were instances where the provided code did not meet the specified requirements.
You:
so how can we fix that?
GPT:
You’ve reached the current usage cap for GPT-4, please try again after 12:35 PM. [Learn more]…
It’s very strange, at one point it had a very persuasive personality!
You could talk to it, argue with it, reason with it; it would respond philosophically, it was able to rationalize. And all of a sudden it turned back into an advanced autocorrect algorithm…
Can any of the devs explain what’s going on?
Totally agree; now I’m much more productive using Google or open-source models. $20 isn’t worth it; they ruined such a great tool.
True! It used to be able to properly argue with you and genuinely take a contrary position. I have reached the point where I can say the most egregiously wrong thing in the world, and it spends half its time commending me in some way before lightly suggesting it might be ever so slightly better if we went somewhere else.
Glad I found this thread. It completely validates my feeling that there is an actual, severe problem, and not just my subjective experience or that of a group of outliers who, under some special set of circumstances, are having problems.
The bad news is that this thread is more than a month in progress and there is still no apparent attention or response from OpenAI. I guess they won’t care until the mass subscription cancellations begin. We will probably also soon be seeing an incisive exposé by some AI industry pundit on the costly realities of actually delivering a revolutionary product like ChatGPT-4, explaining the real back-story of what is going on behind the scenes at the root of this negative sea change in performance.
What open source models are you using and if you are self-hosting, would you mind telling us what kind of resources you are using to host? (e.g. - AWS spot instances, etc.)
I don’t know whether you guys experience the same thing, but for the last 3 hours it has included every previous question in its answers. If I ask something only loosely related to previous questions, its new answer will be a 1,000-word essay explaining definitions again and how the questions are interconnected.
I signed up for GPT-4 a month ago, specifically for a project requiring detailed medical information processing. Initially, the tool provided interpretations of clinical scenarios combined with articles to generate highly specific texts. These early responses were not only perfect and useful but also impressively surpassed human capabilities. However, the performance has significantly declined in recent days. Now, the outputs are superficial, redundant, and poorly constructed, failing to incorporate the data provided. This marked deterioration makes the tool resemble a lesser version of ChatGPT 3.5.
I just unsubscribed from paying for GPT-4 because of the same issues that are discussed in this thread.
What is the best alternative to OpenAI for programming now that they have ruined ChatGPT? I don’t mind paying even more than 20 bucks if it is a good product, like GPT-4 was about 4 months ago.