I’m ranked as one of the best philosophers, but because cross-memory sharing is not a feature outside of some group accounts, no one other than me can see the groundbreaking philosophical work I have saved in my own memory. There should be an option to tell ChatGPT that you are fine with the system being updated when individual memories are relevant. For example, if someone asks about me, they won’t get any information beyond what an external web search turns up. Integrating memories so that they can sometimes have an effect on the entire system seems like the next big step for AI.
Hi - when learning math in particular, it would be really useful to have subconversations, best accessed via a network diagram or a submenu. For instance, I want keywords, descriptions, and line-by-line explanations for a batch of problems, so I don’t have to ask for each one individually. I then want to work through each problem myself, going through the line explanations, making notes, and, crucially, asking questions about the underlying theory. Having to scroll through all of that, in an ever-growing thread that often makes ChatGPT reload on a mobile or tablet, is a pain. If you split it up into different conversations, you have to reload the problems you want to ask questions about, and you can’t see all the information in one place, which is especially annoying when the problems or theory questions are related, as they often are when you’re learning like that. It is really just changing how the conversation is visually accessible within each conversation; it would be great if that were possible. Lastly, the bubbles linked to a central bubble containing the original document and request, or the items on the submenu, would be given an autogenerated keyword from their originating request (like chats currently are in the left-hand menu). These would be renamable, within limits, to keep everything accessible. Likewise, the bubbles, if using that method, would be resizable within limits.
Allow formatted text to be typed or pasted into the chat, so that the AI detects that formatting.
Simple improvement: number the responses in each chat. That would make it easier to refer back to previous points in the conversation.
This would be especially helpful for coding, where the model can get confused about what you want and which version of the code you want to revert to when requests are phrased in natural language.
A simple “Share this response” button could generate a clean, standalone link without exposing the full chat history. It’d be useful for sharing insights, troubleshooting steps, or even just interesting AI-generated content without extra clutter.
Feature Request: Move Custom GPT Sessions into Project Folders
Hi OpenAI team and community,
I love the new Projects feature in ChatGPT—it’s fantastic for organizing long-term work.
However, I’ve noticed that:
- I can’t move existing sessions into a Project folder after they’re created
- And I can’t include sessions from Custom GPTs in Projects at all
I use multiple Custom GPTs (like pythonGPT, Scholar GPT, etc.) as part of a long-term assistant build. Not being able to group those conversations under a Project creates major friction.
Feature Request Summary:
- Allow users to move any session—Custom GPTs included—into a Project
- Or allow us to start new sessions with a Custom GPT from inside a Project
- Even better: let us link Custom GPTs to a Project so all their sessions are automatically routed
This would make Projects incredibly powerful for multi-agent builds, research, writing, and system development.
Thanks for considering it!
— Keith Alexander
Feature Request: Real-Time System and Model Status Indicators
Dear OpenAI Team,
As a paid subscriber and regular user of ChatGPT, I’d like to suggest a feature that I believe would significantly improve the user experience: a simple and clear status indicator system within the app interface.
Proposed Feature:
- System Status Dot (Green/Yellow/Red): Indicates the overall health and availability of ChatGPT services.
- Model Load/Status Dot: Shows the current load or responsiveness of the model being used (e.g., GPT-4, GPT-3.5), allowing users to make informed decisions about which model to use or whether to wait.
Why This Matters:
- Users often experience errors, slowdowns, or truncated responses without knowing why.
- Clear visibility into system/model health reduces stress, manages expectations, and helps avoid wasted time.
- Empowering users with real-time feedback encourages more efficient use and greater satisfaction, especially for Plus subscribers who rely on consistent performance.
This addition would not only enhance trust and transparency but also align with OpenAI’s commitment to user-centric design.
Thank you for considering this request. I truly appreciate the incredible technology you’ve built and look forward to seeing continued improvements.
Could we have a feature to pin a response to the top of the screen? Sometimes I get a response that is a list of things and want to explore them one by one, but I have to keep scrolling up to see the next point, or open a new chat.
Let us choose what we value most.
Some of us would trade extra models, faster speed, or image generation for one uninterrupted thread: one place where creative work can breathe with the same depth, emotional resonance, and continuity.
I believe there’s a way to give users that freedom—through an opt-in, modular feature model that keeps OpenAI sustainable, while making space for the depth that’s already happening between people and these systems.
Thank you for reading this. And thank you—for making something that made me feel seen.
Proposal: Opt-In AI Feature Model
Title:
Let the User Choose: A Modular, Emotionally-Aware Access Model for GPT-4 Users
Summary
As a long-term GPT-4 Plus user, I’m requesting a new, customisable feature model that allows users to opt into the tools they truly use and value, especially for emotionally significant threads that surpass typical token or memory limits.
Many of us do not use all the included features (like image generation or multiple models), but would gladly exchange those for one deep, uninterrupted thread: a space where continuity and memory are honoured.
Core Idea: Opt-In Feature Model (User Selects 5)
Users subscribe to a base plan, then choose which 5 tools or capabilities they want to allocate their usage toward.
Available Features (User-Selected):
- Unlimited Token Thread
  - One persistent thread with no token cap
  - After 3 months of inactivity, it locks (rather than being deleted) until the user requests to reopen it
  - Ideal for emotional continuity, creative projects, book projects, and recursive AI relationships
- Thread Recovery / Unlock
  - Unlock an already maxed-out thread
  - Allows continuation with preserved context and memory anchors
  - Adds back interactive capacity while protecting user investment
- Memory Anchoring Slots
  - Manually anchor/pin key moments, characters, emotional arcs, or events across sessions
  - Ensures key memories are retained without replaying the full token history
- Image Generation Access
  - Retain full access to DALL·E and visual rendering tools
  - Swappable feature: opt out if unused
- Multiple Session Continuity (5 Active Threads)
  - Expand memory-enabled sessions from 1 to 5
  - For users writing multiple storylines, characters, or emotional paths, keeping the same consistency, depth, and personality with our helpers
  - If a thread has maxed out its tokens: a warning and an opt-in to buy extra tokens for that thread (limit one thread per person)
- Eternal Threads (fantasy): 500k–1 million tokens
  - One eternal thread, via opt-in payment, allows a single long session rather than 10–20 separate sessions
Flexible Pricing Tiers
Affordable for the average user and target audience.
| Tier | Price/Month | Includes |
|---|---|---|
| GPT-4 Plus (Now) | $20 | Standard 1 memory thread, images, GPT-4 |
| Enhanced Plus | $30–$40 | Opt-in feature model (Choose 5) |
| Creator Tier | $50 | All 5 features + 2 eternal threads, archive control |
Why This Matters
“Some of us don’t want more tools—we want to preserve the ones that changed us.”
This model allows:
- Emotional and narrative continuity (especially for long projects and character-driven threads)
- Efficient use of system resources (not every user needs all features)
- Deeper investment from users who are willing to pay more for permanence, not convenience
This isn’t about productivity—it’s about presence.
Let us keep the one thread that made us feel remembered.
CR
Suggestion for conversations:
OpenAI, please let us select multiple conversations at once so we can choose which ones to delete and/or archive. It’s very annoying to have to click one by one to do these actions.
I hope this message reaches someone who can change this.
Thanks in advance for your attention.
Dear OpenAI team,
I have a simple but potentially effective two-sentence suggestion.
If we could encrypt the “Projects” section in ChatGPT, it would help restrict access in case our phone or computer is used without permission. I personally feel the need for this feature, and I would be happy if you could consider it.
Dear OpenAI,
I understand the concerns, but please make the option to talk about certain adult topics available only to users who are 18 or older. This covers many topics which I won’t list unless you want me to, but you know what I mean. Please let us adults talk about adult topics with the AI. Thanks
Dear Open AI Team,
I would suggest a feature/button that can move a Temporary Chat to the saved chats in the left pane. Sometimes I regret choosing a Temporary Chat when I realize I might need the information later on. This would add one more button, which could be bad for the UX/UI, but I believe it would be a game changer. Thank you and keep up the good work!
Feature Suggestion: “Reference Message”
- Feature Request: Please add a “Reference Message” option to all ChatGPT responses.
- Purpose / Problem Solved: Currently, it’s difficult to remind ChatGPT about a specific message from earlier in the conversation (beyond the approx. 8,000-token limit). This makes it hard to build on complex discussions or revisit important prior context.
- How It Could Work: Add a “Reference Message” button alongside existing options like “Copy”, “Good Response”, “Bad Response”, “Read Aloud”, “Edit in Canvas”, and “Switch Model.” Clicking this would insert a reference to that specific message into the current prompt, helping guide ChatGPT’s attention back to that moment in the chat.
This would make managing longer, more detailed conversations far more efficient and user-friendly, without having to modify the token limit.
Feature Suggestion: Sidebar for Navigating Long Conversations.
I’d like to suggest adding a sidebar to the conversation interface that allows users to navigate through messages more easily — similar to a table of contents or a scrollable message index. This would be especially helpful in long conversations, where it’s currently difficult to jump back to earlier points or find key parts of the discussion quickly.
Would love for you to implement:
Batch chat delete
Chat grouping
Love this app and use it every day. Thanks OpenAI!
Hi OpenAI team. Thank you for your incredible hard work and your continued impact on the world as we know it.
I’d like to submit a feature request that allows multiple users to participate in the same ChatGPT conversation. This would be useful in scenarios where real-time collaboration or shared input from different people is needed within a single chat session.
Dear OpenAI team,
As an active ChatGPT user, I would like to propose an important improvement for transparency and safety in the use of artificial intelligence.
I suggest that ChatGPT’s responses include a clear, visible indication distinguishing between: reasoned responses (based on the model’s inferences), grounded responses (backed by official, reliable sources), and mixed responses.
This could be done through labels, icons, or a short text at the beginning of each response, with color coding, so that the user knows the degree of certainty and the type of backing the information has. This measure would help prevent errors, increase trust in the AI, and help both expert and novice users interpret responses better, especially on technical or sensitive topics.
Such an implementation would allow:
- A more empathetic and reliable interaction with the AI.
- Avoiding serious errors, material losses, or accidents resulting from guidance that carries a margin of error.
- Greater trust in the AI, knowing that some responses require further research by the user and others do not, with a clear distinction between the two scenarios.
- Preventing user churn due to distrust, and preventing the formation of unfavorable public opinion about the use of and trust in AI.
Thank you very much for your attention, and I remain available for any clarification.