GPT-4 has been severely downgraded (topic curation)

Meta note on this topic: the GPT-4-based summary of this topic is pretty spectacular today. Love how it figured out by itself to add related topics.

Users in the forum are expressing discontent with perceived degradations in ChatGPT's abilities, particularly on coding tasks. They are struggling to get useful outputs even after condensing their instructions to fit within the specified token limits. Some are encountering incomplete or repetitive responses, leading them to believe that the model's understanding and memory of earlier instructions are degrading.

Moreover, users criticize the lack of transparency about any potential quality regression. Even without solid proof of one, the uncertainty has fueled concern and demands for better communication from OpenAI.

User elmstedt argued against the methodology of a research paper that claimed the GPT-4 model had degraded, pointing out what they saw as flaws in its testing and sampling methods. Other users echoed the critique, demanding better proof of degradation.

Due to these issues, several users said they had canceled their subscriptions. They demanded access to earlier, supposedly superior versions of GPT-4, and some said they would even pay more for those versions, especially for coding tasks.

From a moderation perspective, it was noted that previous topics discussing similar issues had been collapsed into one thread for more effective discussion. The moderation team has taken steps to provide summaries and response counts for those previous topics.

Finally, amid the discussions, there was news that OpenAI had restored the gpt-4-0314 and gpt-3.5-turbo-0301 models in the API, suggesting users may have an opportunity to regain access to the versions they prefer. However, some users criticized the thread for deteriorating into anecdotal evidence and straying from the main topic.
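
For anyone who wants to pin requests to one of those restored snapshots, here is a minimal sketch using the openai Python SDK (assuming the current client-style interface and an API key in the environment; the prompt is just a placeholder):

```python
# Minimal sketch: pinning a chat request to a restored dated snapshot.
# Assumes the openai Python SDK (>= 1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    # Dated snapshot restored in the API; "gpt-3.5-turbo-0301" works the same way.
    model="gpt-4-0314",
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
)

print(response.choices[0].message.content)
```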

Relevant Threads:
