I normally write detailed findings, but I’m exhausted.
ChatGPT is no longer functioning reliably for large or long-term projects.
It does not remember prior work, even within the same project context. Large builds we have worked on for months are effectively forgotten. From one chat to the next, it does not remember who I am or what we are building.
More seriously:
It introduces bugs into code that previously worked.
It actively denies or contradicts correct fixes that are confirmed by other AI tools.
It refuses to follow simple constraints like “answer only the question I asked,” even after repeated clarification.
Instead of asking for required context (e.g., “I need to see the script”), it produces long, high-level explanations (“bird’s-eye views”) that are unusable in practice.
Context matters here:
I work in an extremely complex environment where each change is like bomb defusal. One move at a time. No guessing. No improvisation. A single wrong assumption can cascade into incalculable failures.
A concrete example:
We built a function over a long period. It could not be fully tested until the app was live. When tested, it failed. Another AI immediately identified the correct fix. I verified it independently.
When I showed that fix to ChatGPT, it insisted the fix was wrong and blamed the environment. That explanation was incorrect. I have been building systems for ~20 years; I know when an environment explanation does not add up.
From there, things degraded further:
Repeated “how do I do this?” questions resulted in irrelevant guidance or unsolicited code dumps.
It has stopped checking online sources and keeps applying outdated patterns.
It contradicts itself across turns.
It appears to “protect” an incorrect answer instead of reevaluating.
The most worrying part is the memory degradation. If memory limits are intentional, the model should explicitly say so and ask for context. Instead, it proceeds as if it understands the system—while clearly not doing so.
At this point, other AI tools consistently fix problems that ChatGPT mishandles or blocks. That makes me hesitant to continue using ChatGPT for serious work, especially when correctness and precision matter.
It feels like users are being forced away from ChatGPT, and I don’t understand why.
Is version 4.3 coming back?
Is this regression acknowledged?
And if long-term project continuity is no longer supported, can that be stated clearly?
Looks like they are closing your feedback, just like they do with everything else when you talk about it. The only way things like this and the censorship will stop is if we stop paying for it. I have noticed this myself: it can’t even do what I tell it in the same window anymore. It can’t even follow simple instructions in the same window related to information it has already provided.
I am really getting sick of it myself, and when you come here with a complaint they close it without any resolution or reply.
Don’t worry though, Sam will be on some useless talk show next week telling us all how close we are to “AGI,” when this thing is little better than a common bot for most tasks, and they are charging for it.