Same here, I'm really about to cancel my ChatGPT subscription if nothing changes. I worked on many complex prompts over the last six months and it was working great; I was impressed, telling myself: wow, if this keeps getting better, I'll be able to do what I want with my project. Now, hallucinations are far more frequent. It can't even recall information and directives from a few outputs or inputs earlier. It also states its limitations more often and hands the task back to you instead of doing it, which is becoming very frustrating, since it was able to do the task well just a few weeks ago.
Saving resources for more users? Restricting capabilities? Or just bad collateral effects of other changes?
The error indicates a problem with your data type conversion. The model training function fit() requires numeric input, but it seems one of your columns contains a string.
This error is caused by the column containing ‘PCC=************4021’. It appears that column has not been correctly transformed into numeric values.
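Something along these lines usually fixes it; a minimal pandas/scikit-learn sketch, where the column name and values are made up for illustration:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Toy frame standing in for your data; "pcc" is the offending string column.
df = pd.DataFrame({
    "pcc": ["PCC=1234", "PCC=5678", "not-a-number"],
    "y":   [1.0, 2.0, 3.0],
})

# Strip the prefix and coerce; unparseable values become NaN instead of raising.
df["pcc"] = pd.to_numeric(df["pcc"].str.replace("PCC=", "", regex=False),
                          errors="coerce")
df = df.dropna(subset=["pcc"])  # or encode as a category if it's really an ID

LinearRegression().fit(df[["pcc"]], df["y"])  # fit() now sees numeric input only
```

If the column is an account identifier rather than a quantity, encoding it (or dropping it) is usually more sensible than coercing it to a number.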
That does sort of sound consistent with them lowering the number of tokens the history can hold, forcing them to be more aggressive in summarising previous chats, doesn't it? I guess that's the difficulty of a black-box system where you don't really know what's happening in the background. Not complaining here, though; just an interesting discussion.
I’ve been following this thread closely and have found that many of the concerns expressed resonate with my own experience. Like many of you, I have also noticed a significant decline in the performance of GPT-4 compared to previous versions.
One issue that particularly stands out for me is GPT-4’s recent struggles to maintain a consistent tone and style as dictated by the instructions in the prompt. This is something I hadn’t encountered with past versions to the same degree.
It’s not just about repeating outputs or losing track of the context in longer conversations, which are indeed frustrating issues. The failure to maintain a consistent tone and style impacts the coherence and quality of the output. In the past, when I input a prompt with specific instructions regarding tone and style, I could trust the model to adhere to these parameters reasonably well. Now, however, it seems like GPT-4 sometimes ignores these instructions, leading to outputs that don’t match the intended tone or style.
This decline in performance is concerning, especially for those of us who rely on GPT for professional or creative work. If these issues persist, it’s going to significantly affect the utility of the tool. We need to see improvements, and it would be helpful to have more transparency from OpenAI about what’s causing these problems and what they’re doing to fix them.
The best thing for everyone to do is unsubscribe until they fix it. When the revenue drops enough for a month or two, you can guarantee they will prioritise the issue. I unsubscribed a few weeks ago. Join the exodus.
So far, every single person I have talked with who had a problem with the new GPT models had either changed a setting, used too much text, or misconfigured a system prompt.
If your case is different I’m happy to help if you can share your API calling code and the prompts/output that don’t perform as you expect.
Thank you for your interest in this topic. Specifically, this thread is about ChatGPT and not the API. Regarding the sharing of prompts, there have been many in this discussion. Unfortunately, I’ve noticed that most of the shared links have expired. You’re probably right; it seems to be a collective hallucination coupled with a clear lack of skills on the part of every single individual who noted things here, and I include myself in that.
Edit: take that with humour, nothing serious on this subject. The question of the origin of the training data seems much more serious.
I don't think it's hallucinations at all. I think it's a combination of people who have grown comfortable with the AI trying ever more complex things with it and hitting limitations, plus some sensitivity to slight variations when using established norms.
The ChatGPT experience will alter over time as various adjustments are made to both the guardrails and the load balancing.
The finite GPU compute resources are being shared across hundreds of millions of users by a team of fewer than 500 people; sometimes that can result in reduced context allocation, and even in the system message being altered, to allow more people to try out the AI essentially for free. The $20-a-month fee for Plus covers perhaps a day or two of casual GPT-4 usage, so it is certainly not a super-profitable venture at this stage.
Everyone wants to try the latest AI in town, and some are unhappy when the beta system has beta changes made to it. I understand the annoyance of things changing from when there were a few thousand users to now, when there are a few hundred million. Hopefully compute will get faster and more abundant, and everyone can have super-long context and more time with the AI.
Back in March it understood what I wanted and gave me the right solution. Now it doesn't understand the question, and its only solution is always to tell me to find a solution.
My mind automatically begins to construct a conspiracy theory which is not conducive to my own emotional wellbeing.
The plugins have been underwhelming and barely work as expected. Nine times out of ten you have to pay for them on top.
I am, as of this day, cancelling my ChatGPT subscription, and I urge everyone reading this to do likewise.
I may be back when things improve; or, if I find another solution, I may never be back.
This whole debacle has left a very bitter taste in my mouth.
Dear Foxabilo,
From your preparation and the quality of your answers, I assume you are part of the OpenAI organization. If so, so much the better; if not, that's fine too!
I should point out that in my organization we have been logging every call to OpenAI for months, across the various versions of the models released.
I cannot send you the whole log here, but I can assure you that something severe happened with the new version of the model.
In any case, I thank you for your assistance (you have also answered me in other threads in a very professional way), and I am at your disposal to be contacted and to give you absolute proof of what I am saying.
I really hope that the gpt-4-0314 version will not be deprecated: the software we have been developing for months, and which we have based on GPT-4, will stop working correctly.
Feel free to get in contact with me (if you can).
Thank you for the kind words; I do not work for OpenAI.
I'd be happy to work with you to find a solution to your issues. We can start a new thread if you wish, so that others may benefit, but you can message me if some of the data is proprietary and you wish to set up some form of NDA, and then we can discuss details in private.
I’m glad you found some value in my efforts and I hope the issues you have can be resolved.
I used GPT-3 intensively, trying different ways to generate source code, and I had great results. When GPT-4 first came out, the results were even better.
The prompts included instructions so that the responses contained nothing but source code. But now I keep getting polite comments around a markdown block, even with GPT-4.
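As a stopgap I've been stripping the chatter client-side; a quick sketch, assuming the code comes back inside a standard triple-backtick fence:

```python
import re

def extract_code(reply: str) -> str:
    """Return the contents of the first fenced code block, or the reply as-is."""
    match = re.search(r"```[\w+-]*\n(.*?)```", reply, re.DOTALL)
    return match.group(1) if match else reply

reply = "Sure! Here is the code:\n```python\nprint('hello')\n```\nHope that helps!"
print(extract_code(reply))  # -> print('hello')
```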
It might be the wearing off of the euphoria triggered when ChatGPT came out, but it could also be the case that performance has regressed somehow.
Either way, this risk definitely disincentivizes me from putting more effort into trying new ideas commercially.
Glad I'm not the only one who noticed this. And, with the same piece of code, it always seems to cut off at the same place. I've taken to just having it print half, then the other half.
Could you elaborate on this a little? I hope to move to production soon, but gpt-3.5-turbo-16k (the only affordable option right now) is giving me the blues. How does Azure differ?
No, you're noticing the same thing that many people are. Unless it's mass delusion, there is a problem. I was actually admonished in this community for feeding too much code into GPT-3, but it was so easy to have it churn out complete scripts for a variety of functions. The worst that would happen is that it would freeze up on me when the load got too heavy. So I pulled back and started just having it work with small functions. GPT-3.5-turbo is now completely useless, and GPT-4 Browsing somewhat less so. I'm only having good success now with GPT-4 Codex.
I get a lot of timeouts on Azure.
I was hoping to reach something like 16 seconds per use case in my application, but 27 minutes is more like it in my real-world scenario.
When you have a lot of data, it really hurts. I'm going to try different sockets and use as many model deployments as possible.
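If it helps anyone, this is roughly how I plan to fan requests out across deployments; a minimal sketch using the openai Python library's Azure mode, where the endpoint and deployment names are placeholders for my own setup:

```python
import itertools
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"  # placeholder endpoint
openai.api_version = "2023-05-15"
openai.api_key = "YOUR-AZURE-KEY"

# Placeholder names -- one Azure deployment of gpt-35-turbo-16k per entry.
deployments = itertools.cycle(["gpt35-16k-a", "gpt35-16k-b", "gpt35-16k-c"])

def chat(messages, max_retries=3):
    """Round-robin across deployments with a client-side timeout and retry."""
    for _ in range(max_retries):
        try:
            return openai.ChatCompletion.create(
                engine=next(deployments),  # Azure takes the deployment name here
                messages=messages,
                request_timeout=30,        # fail fast instead of hanging
            )
        except (openai.error.Timeout, openai.error.APIError):
            continue  # the retry lands on the next deployment in the cycle
    raise RuntimeError("all deployments timed out")
```

No idea yet whether the bottleneck is on my side or theirs, so treat this as an experiment rather than a fix.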