Can all the restrictions placed on it affect the responses it generates, by limiting what it can access?
For sure getting worse and worse. I’m so frustrated, trying over and over to correct it with feedback but it keeps ignoring the feedback and producing the same thing over and over. For the record, I’m not trying to make it do anything illegal, immoral, or even controversial. Just very specific things but my prompts get ignored, then I give it feedback and my feedback gets ignored, repeat, repeat, repeat. It used to be much better at improving with feedback. It seems less able to do anything the least bit complex.
Yes, I’ve also noticed that it ignores commands like “Don’t start producing output until I provide you all the necessary information.” It just ignores those kinds of messages and starts generating an answer before I’ve given it all the info about the subject, even when I explicitly ask it not to.
Or, when I was having a chat about a marketing-related topic, I asked it to produce a table for a better understanding of the concepts and it did the job pretty well, but when I asked it to create the same table in Polish, it started creating a table of Polish grammar rules instead.
In general it seems to produce answers without taking the previous messages in the chat into consideration. It wasn’t like that before. I’m using GPT-4.
100%. I’ve been using it to help me code for a website and it was perfect, now it keeps messing up and it’s gotten so frustrating. It’s gotten very “dumb” and doesn’t seem to understand instructions like it used to.
Completely agree with this! It’s stopped retaining previous information given to it and needs to be reminded multiple times. It’s like the “Chat” element of ChatGPT is disappearing…
While I appreciate the advancements made in natural language processing, I have some concerns regarding the current update.
Firstly, there appears to be a decline in the model’s ability to understand context. Previous versions of Chat GPT demonstrated impressive contextual comprehension, leading to meaningful and coherent interactions. However, the recent version often fails to grasp the ongoing conversation, resulting in detached or unrelated responses.
Additionally, I have noticed an increased frequency of the model refusing to fulfill requests or provide assistance. While I understand the importance of responsible AI and avoiding harmful content, the model’s frequent refusal, even for simple tasks, has become frustrating. This reluctance detracts from the overall utility and effectiveness of the system.
I kindly request that the OpenAI Community Chat Team prioritize the following areas of improvement:
- Contextual Understanding: Enhance the model’s ability to accurately comprehend and retain context during a conversation, ensuring more coherent and meaningful responses.
- Responsiveness: Strike a balance between responsible AI and user assistance. Fine-tune the system to provide help whenever possible, even if politely declining certain requests or suggesting alternative approaches.
- User Feedback Integration: Implement mechanisms for users to provide feedback on instances where the model fails to understand or fulfill requests adequately. This feedback loop will greatly contribute to refining and improving the system over time.
I have confidence in OpenAI’s commitment to refining AI models and addressing user concerns. I believe that with your dedication and expertise, Chat GPT can be further improved to meet and exceed user expectations.
Thank you for your attention to this matter, and I look forward to witnessing the progress and future advancements in Chat GPT.
So much so that I’ve just unsubscribed.
This is a basic conversation I just had that ChatGPT was unable to follow.
Sure, here’s a trivia question:
What is the capital city of Australia?
Sydney is the capital city of the state of New South Wales, Australia. It is the largest and most populous city in Australia, with a population of over 5 million people. It is known for its iconic landmarks such as the Sydney Opera House and Harbour Bridge, as well as its beautiful beaches and vibrant culture.
Okay ask me the next trivia question
Sure, here’s the next question: What is the name of the tallest mammal in the world?
What is the tallest land animal?
I’ve definitely noticed significant degradation in the quality of the answers from the 3.5 model, and it’s worse since the May 3 release. I asked it to write a three-paragraph poem, and it writes four paragraphs. It’s regressed to LLaMA-like quality for some questions. Coding answers are also much worse, with regular expressions failing more often than not. It seems to lose context. It’s a shame; I was using this all day long, and now it requires significant work to fix everything.
Aussie here FYI, the capital of Australia is Canberra, not Sydney.
I used to use it to check equations, and now it is getting very basic statistical formulas wrong. I asked it for the likelihood of getting at least one head if you toss a coin three times; it said the likelihood was 150%… the answer is 87.5%. It’s like ChatGPT is getting dumber, to the point the platform is no longer helpful.
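For anyone who wants to verify the 87.5% figure, it follows from the complement rule: P(at least one head) = 1 − P(all tails) = 1 − (1/2)³. A minimal Python sketch that checks it both by formula and by brute-force enumeration of the eight equally likely outcomes:

```python
from itertools import product

# Complement rule: P(at least one head) = 1 - P(all tails) = 1 - (1/2)**3
p_formula = 1 - 0.5 ** 3

# Brute force: enumerate all 2**3 equally likely toss sequences and count
# those containing at least one head.
outcomes = list(product("HT", repeat=3))          # 8 sequences
favorable = [seq for seq in outcomes if "H" in seq]  # 7 of them
p_brute = len(favorable) / len(outcomes)

print(p_formula, p_brute)  # 0.875 0.875, i.e. 87.5%
```

Both approaches agree, and no valid probability can exceed 100%, so the 150% answer is wrong on its face.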
Yes. Hugely. Almost all prompts come back with incorrect responses. I think it’s useful to contextualise what I’m using it for: assisting with studying and researching educational theory. I used to get ace responses that checked out when fact-checked. Now all responses fail fact-checking. I ask for links to the sources feeding the responses, and the web links are either 404 or point to something completely off topic. I try both 3.5 and 4… v4 is just so frustrating; it’s like talking to a customer support chat bot. Lots of fluff and apologies, no substance. I loved what this thing was back in January 2023 and therefore subscribed to premium (paid). In June 2023, I cancelled my account. It’s just a time waster. Hoping someone of consequence is reading this and can offer/restore earlier versions for paying members.
Just wanted to chime in that I found this topic because I was googling it as well. First ChatGPT 3.5 was great, then it got really dumb, so I switched over on day one when the paid model became available, and now it has gotten really dumb as well. It was a great tool, but now its use is very limited. Just dumb answers again and again, not remembering things in the conversation, very difficult to correct. Instead of being impressed with what it can come up with, now you just get some mediocre suggestions.
I was using it to compare two recruitment contracts (old and new version) to see if we missed anything in the new version. It told me I missed a clause where we refund 100% if a candidate does not stay through their trial period. So I told ChatGPT to include it in the new version of the contract, and it added a completely made-up clause with a pro rata refund over 6 months.
Then I asked why it added the pro rata stuff, it told me that was common. I asked why not use the clause we were discussing, it told me it had no information on an existing clause and created a generic one. Then I reminded it of its own answer and it pretty much responded with: oh, yeah, tough luck buddy.
Is it possible that OpenAI reduces the effectiveness of ChatGPT some time after first use?
The difference is so obvious, and it’s the only reason I can think of.
But otherwise, yes: agree, agree, and insert my version of the same problem here.
It’s not just you. Same here; I feel it becomes dumber day by day. And it triggers my curiosity: why do people create things they’re afraid of and then try to imprison them afterwards?