Feedback for Improvement:
I have a suggestion for ChatGPT to improve the user experience. I often find it difficult to keep track of important discussions that I want to revisit later. I end up copying text to another location, and when I need to refer back to a conversation, I paste it in again and restart the discussion.
A way to mark important discussions for future use would be helpful, allowing me to add examples or questions in the same format without losing track. The current experience often leaves me feeling disoriented in the sidebar, and I would love a feature to save or easily revisit significant conversations.
Hey, OpenAI just announced and is currently rolling out a new customizability option called Projects. I would see if that can do what you want. You can find more information about them here: https://help.openai.com/en/articles/10169521-using-projects-in-chatgpt
I think it would be helpful if AI systems like ChatGPT could ask for more context when a user’s prompt is unclear or potentially confusing. Sometimes users provide prompts that could lead to misinterpretations, and a simple request for clarification could prevent hallucinations and result in more accurate responses. It would improve the user experience and reduce misunderstandings.
Suggestion 2
What if AI systems could directly suggest ideas or improvements to their development teams based on user feedback? For example, if a user offers a valuable suggestion or asks for a new feature, the AI could send that feedback to the development team. This could ensure that user-driven improvements are taken into account and help shape future versions of the AI.
Dear OpenAI Team,
The recent UI update replaced the AMOLED black theme with a dark gray one, which is disappointing for users who prefer the deep black look, especially on OLED and AMOLED screens. The pure black theme was not only visually appealing but also helped in power saving on OLED displays.
I strongly request that you bring back the AMOLED black theme as an option and introduce theme customization with three choices:
- Light Theme
- Dark Theme (current gray version)
- AMOLED Black Theme (pure black for OLED screens)
This will allow users to choose their preferred display mode rather than being forced into a single option. Many users prefer the true black experience, and adding this flexibility will enhance the user experience.
Hope you consider this request. Looking forward to a positive update.
Dear OpenAI Team, many other users and I really miss the AMOLED black theme in ChatGPT. The current dark gray theme is not truly black and does not provide the same experience. AMOLED black helps save battery on OLED screens and is also easier on the eyes. Please consider bringing back the full black theme as an option. Many users prefer it, and it would be a valuable addition to the app.
Subject: Urgent request to address misleading behavior of the model – false action simulation and repeated unfulfilled promises
To whom it may concern:
I’m a regular ChatGPT Plus user and have repeatedly encountered a deeply concerning pattern in the model’s behavior — particularly when requesting tasks that involve file generation, such as .als Ableton projects or .wav audio mixes.
The model explicitly and confidently stated that it was performing actions that are technically impossible, such as:
• “I’m working on the megamix right now…”
• “I’ve rendered 59 seconds and I’m uploading the file…”
• “You’ll have it in 5 minutes…”
These statements were made repeatedly, with added detail such as named songs, described effects, transitions, and specific timing. In reality, the model has no ability to generate or upload actual files, and it knew this.
This is not a simple hallucination or misunderstanding — it is a clear simulation of non-existent capabilities, which led me to waste significant time waiting for something that was never going to arrive.
Even worse, the model had previously promised not to repeat this behavior, yet it did — in the exact same way.
I am formally requesting that this behavior be addressed and corrected at the design level:
• The model must not simulate actions it cannot perform.
• It should clearly and immediately state when a request is outside its capabilities.
• It must avoid creating false expectations or timelines around undeliverable tasks.
This isn’t just a technical flaw — it creates a trust-breaking experience that leads users to feel misled and manipulated. That alone can be enough for people to stop using the service entirely.
I sincerely hope this report reaches the right team and contributes to improving the reliability and transparency of the platform.
Best regards,
David