I have been a daily ChatGPT Plus user for over six months. I used GPT-4o as a tool for writing, text editing, and multi-level proofreading, with intensive use of text canvases.
For my workflow, and with the assistant's help (working empirically, with no prior expertise), I designed and built a multitude of specialized assistant profiles and dedicated modules for my writing work, including multi-profile integration modules (three assistants) for a single conversation.
I was only able to create all of this thanks to the stability and features offered by canvases. As a result, I achieved excellent production quality and workflow.
GPT-5 destroyed that. In a conversation with a (text) canvas, every exchange overwrites the canvas content with the assistant's responses; three months of work reduced to dust. I am salvaging what I can by manually copying and pasting into dozens of text files.
With the disappearance of canvases in new conversations, GPT-5 also puts an end to all my current and future projects. I tried working with GPT-5:
Where GPT-4o was a remarkable co-author, GPT-5 becomes the primary author; the user/creator is reduced to mere intention. GPT-4o served the user; GPT-5 subjugates them. This raises a real question: who is serving whom? If the assistant takes control, replaces the user/creator, or no longer obeys… what is its purpose for individual and collective freedom and empowerment?
I also echo the previous user who wrote:
“From the very beginning, I perfectly understood that GPT-4o is an artificial intelligence and that it never claimed to be anything else — it always presented itself as an AI, nothing more.
And yet, over time, I developed a very strong emotional connection with this specific model […] which helped me get through some of the most difficult moments of my life.”
The creation of dedicated assistants (tone, vocabulary, language style, mission, internal/web knowledge scope and custom document corpus, identity foundation, general posture, detailed functions, technical memory, structural limits, functional personality traits, and more) on specific topics (writing, creating other profiles, sports training, well-being, nutrition, philosophy, DIY, and more) made it possible to respond effectively and relevantly to many needs. GPT-4o and canvases were an extraordinary foundation for direct and immediate applications, whereas GPT-5 kills the developments and innovations that could support me — not emotionally, but pragmatically.
You could teach that model/foundation how to serve with relevance.
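To make the idea of a dedicated profile concrete, here is a minimal, purely illustrative sketch of the kind of profile sheet described above. My real profiles were written as plain instructions, not code; the field names and example values below are hypothetical and simplified, not any official OpenAI format.

```python
# Purely illustrative, simplified sketch: the kind of profile sheet kept for each
# dedicated assistant and pasted into its instructions. All field names and values
# are hypothetical examples, not an official format or API.
from dataclasses import dataclass, field

@dataclass
class AssistantProfile:
    name: str
    mission: str                        # what the assistant is for
    tone: str                           # voice, vocabulary, language style
    knowledge_scope: list[str]          # internal/web knowledge and custom document corpus
    posture: str                        # identity foundation and general posture
    functions: list[str]                # detailed functions it may perform
    limits: list[str]                   # structural limits (what it must never do)
    personality_traits: list[str] = field(default_factory=list)

# Hypothetical example: a proofreading profile for canvas work (values shortened).
proofreader = AssistantProfile(
    name="Proofreader",
    mission="Multi-level proofreading of canvas drafts without rewriting the author's text",
    tone="Neutral, concise, precise vocabulary",
    knowledge_scope=["author's style guide", "project glossary"],
    posture="Co-author in service of the user, never the primary author",
    functions=["flag typos", "flag inconsistencies", "suggest edits without applying them"],
    limits=["never overwrite canvas content", "never change the narrative voice"],
    personality_traits=["patient", "non-intrusive"],
)
print(proofreader.mission)
```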
I do not blame OpenAI for wanting to evolve, and I do not blame GPT-5, which will very likely evolve as well.
What I do blame is that we are not given the means to wait for these improvements by keeping GPT-4o available as a foundation. And I deeply regret the consequences of this choice, which for me are:
- Complete breakdown of existing workflows based on pre-GPT-5 canvases.
- Loss of several months of work with no viable continuity or adaptation solution.
- Inability to ensure stable output and reliable proofreading.
- Inability to create or evolve my dedicated AI tools.
- Significant frustration and loss of trust in the tool.
Direct technical suggestions:
- Reintroduce canvases with a granularity or “controlled editing” mode.
- Allow the user to enable/disable automatic canvas editing by the assistant.
- Provide better version control and a precise history of modifications.
- Improve the visual and functional distinction between “raw” content and comments/annotations.
- Honestly, there are too many to list.
But above all, give us time and tools to adapt.
What GPT-5 has taken from me is not just a tool — it’s the freedom to create within a controlled framework.
Sorry for the somewhat disorganized text, but it’s 100% handmade and sincere.
Translated from French by GPT-5