Since the beginning of this year, I have noticed significant changes in ChatGPT's functionality and performance, especially in its intelligence and depth of reasoning. ChatGPT used to leave a strong impression on me, whether as a work assistant or as a daily conversational partner. But now, when I use the GPT-4o and o1 models, I can't help feeling that their performance falls far short of earlier versions. This article discusses that change from several perspectives.
1. The Loss of Image Generation in the GPT-4o Model
Previously, the GPT-4o model offered an image-generation feature that helped me with visual creation in a variety of scenarios. Now, however, it has lost the ability to create images: whenever I ask it to generate one, the request fails with an error saying it cannot be processed. This change significantly limits the model's versatility and hurts the user experience. I used to rely on this feature for creative expression, and although the model still performs well at text generation, the missing image generation makes its "intelligence" feel noticeably restricted.
2. o1-mini's Lack of Deep Thinking
Another change that puzzles me is the performance of o1-mini. Previously, no matter how I used it, it would take a moment to think before giving relatively deep answers to complex questions. Now, when faced with questions that require real thought, o1-mini responds almost instantly, and the quality of its answers is far from what it used to be. Sometimes its responses even come across as superficial and mechanical, lacking the detailed analysis and careful reasoning it once provided. This absence of thinking time makes it feel like it is no longer the "intelligent assistant" I was familiar with.
3. The Shortened "Thinking Time" in o1
In the past, o1 gave me the impression that, although it responded reasonably quickly, it would still spend a few seconds or even minutes reasoning before answering. That visible delay made me feel it was carefully processing my question and trying to give as accurate and well-reasoned an analysis as possible. Now, o1's thinking time has become much shorter, and it answers almost instantly.
So, has GPT quietly had its capabilities reduced after recent updates, or have individual users' accounts been silently downgraded? I'm not sure whether I'm the only one experiencing this, but if anyone else is facing the same problem, feel free to join the discussion here.