Summary created by AI.
Users have reported a noticeable decline in GPT-4's performance after recent updates. They observed that while response times are faster, the model makes far more reasoning errors, quickly forgets instructions, and struggles with tasks it previously handled well. Users also highlighted inconsistency, suggesting the model's performance ebbs and flows. Reported problems range from language errors, faulty reasoning, and failed comprehension tests to responses that seem less 'intelligent' than those of earlier versions. Some users noted that the model occasionally repeated outputs or failed to follow provided instructions.
Not all users agreed that quality had declined; some argued that the observed errors were inherent to the model's design. Most, however, were dissatisfied and felt that modifications made to improve response speed had compromised response quality. Citing errors from earlier conversations became a common way for users to substantiate their concerns. One suggested explanation was that OpenAI's GPU limitations, combined with the influx of new users following the iOS app's release, necessitated computational cutbacks that reduced performance.
In another test, users compared GPT-4's performance with and without plugins and concluded that GPT-4 without plugins was better at instruction following and reasoning, sharing links to sample conversations as evidence. Despite these complaints about degraded performance, OpenAI has yet to respond.