I second this. It would be great if the chat could be visually presented as a tree of conversations. This would be extremely helpful for someone using ChatGPT to learn: a reply often contains key points that you may want to dive into in more detail. Hope this gets approved~
Internal users are running into the current 8,000-character limit for GPT instructions when creating complex or highly-scoped custom GPTs. Even when leveraging Knowledge documents for extended context, the instruction limit forces users to reduce detail in ways that materially impact the functionality and reliability of the GPT.
Current Behavior
When instruction text exceeds 8,000 characters, users receive this error:
"GPT instructions cannot be longer than 8000 characters. If you need to provide more context, you may want to attach a file or knowledge document. Something went wrong. If this issue persists please contact us through our help center at help.openai.com."
This occurs even if a Knowledge document is already attached.
Problem / Impact
- Complex workflows, multi-step reasoning, and domain-specific behavior often require more than 8,000 characters.
- Reducing instruction detail decreases accuracy and breaks intended behavior.
- Users coming from platforms like Gemini (where "Gem" templates support longer instructions) feel constrained.
- Attaching Knowledge docs doesn't replace the need for detailed system-level instructions that drive consistent behavior.
Requested Enhancement
- Increase the maximum character limit for GPT instructions, or
- Introduce an "Extended Instructions Mode" with a significantly higher cap (e.g., 20k–40k characters), or
- Enable references to sections of Knowledge documents within instructions to reduce duplication while preserving structure.
Why This Matters
Raising or extending the limit would unlock more powerful, reliable custom GPTs for internal teams and reduce friction for users migrating existing patterns from other platforms.
Hello - thanks to everyone who created ChatGPT. I use ChatGPT to draft long Word reports with lots of references. Right now ChatGPT can only edit the visible text of a .docx, not create or edit native Word endnotes/footnotes (the ones that live in endnotes.xml / footnotes.xml).
Please add full support for:
– creating and editing real Word endnote/footnote objects,
– converting a manual numbered "Notes" section into linked endnotes/footnotes in the body, and
– keeping note numbering consistent when text is added or removed.
This would make ChatGPT much more useful for serious research report work where proper citations are essential. thanks!
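For background on what "native" notes involve: in a .docx package the body lives in word/document.xml, while footnote text lives in word/footnotes.xml (and endnote text in word/endnotes.xml), linked to the body by numeric ids. A minimal sketch, using only Python's standard library and a trimmed-down example of that XML, of reading the id-to-text mapping that any tool would need in order to keep numbering consistent:

```python
import xml.etree.ElementTree as ET

# WordprocessingML namespace used throughout a .docx package
W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"

# A trimmed-down example of what word/footnotes.xml contains.
FOOTNOTES_XML = f"""
<w:footnotes xmlns:w="{W}">
  <w:footnote w:id="1">
    <w:p><w:r><w:t>See Smith (2020), p. 14.</w:t></w:r></w:p>
  </w:footnote>
  <w:footnote w:id="2">
    <w:p><w:r><w:t>Survey data collected in 2023.</w:t></w:r></w:p>
  </w:footnote>
</w:footnotes>
"""

def list_footnotes(xml_text: str) -> dict[int, str]:
    """Map footnote id -> concatenated text runs."""
    root = ET.fromstring(xml_text)
    notes = {}
    for fn in root.findall(f"{{{W}}}footnote"):
        fn_id = int(fn.attrib[f"{{{W}}}id"])
        # Footnote text is spread over one or more w:t runs.
        notes[fn_id] = "".join(t.text or "" for t in fn.iter(f"{{{W}}}t"))
    return notes

print(list_footnotes(FOOTNOTES_XML))
```

The matching references in document.xml carry the same ids, which is why renumbering after edits has to touch both parts at once.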
Make ChatGPT a true long-term co-author: project workspaces, continuity and versioning
I use ChatGPT as a long-term co-author on multi-month policy and technical writing projects (multiple reports, annexes, slide decks, letters, etc.). The core limitation is that every chat still behaves like an isolated conversation, rather than part of a single project with stable rules and drafts.
A few changes could make ChatGPT much more usable for serious, ongoing work.
1. Project workspaces with persistent rules
Problem: I have to keep re-stating the same constraints in every chat: house style, spelling, tone, section spine, paper size, fonts, plus custom domain rules.
Request: Create project workspaces with a structured "project profile" that can include:
- Style settings: spelling (for example, UK vs US), sentence length preferences, tone, "executive summary first", etc.
- Formatting settings: default paper size, font, heading hierarchy, reference style.
- Custom domain constraints.
With an explicit toggle like: "Apply these rules to all outputs in this workspace."
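As one way to picture such a "project profile": it could be a small structured object attached to the workspace. All field names below are invented for illustration, not an actual ChatGPT setting:

```python
from dataclasses import dataclass, field

@dataclass
class ProjectProfile:
    # Style settings
    spelling: str = "UK"                  # "UK" or "US"
    tone: str = "neutral-formal"
    executive_summary_first: bool = True
    # Formatting settings
    paper_size: str = "A4"
    font: str = "Calibri"
    reference_style: str = "footnotes"
    # Custom domain constraints (free-form house rules)
    domain_rules: list[str] = field(default_factory=list)
    # The explicit toggle: apply these rules to all outputs in this workspace
    apply_to_all_outputs: bool = True

profile = ProjectProfile(spelling="UK",
                         domain_rules=["never abbreviate Member State"])
```

The point is simply that the rules are stated once, per workspace, instead of being re-typed into every chat.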
2. Cross-chat project index and retrieval
Problem: Real projects branch into multiple chats: separate threads for annexes, alternative drafts, letters, slide versions, etc. It is hard to see at a glance which draft of a section is "current".
Request:
- Allow users to tag chats and files into a named project.
- Provide a project index showing key artefacts and last-modified times (for example, "Section 4 – updated 3 days ago").
- Let the model reference them explicitly via prompts such as: "Use the latest version of Section 4 from this project as the base and tighten paragraph 3."
This would avoid the repeated "please scan this chat for the latest version" requests.
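The requested index is conceptually simple. A sketch in Python (all names hypothetical) of rendering a "Section 4 – updated 3 days ago" style listing from tagged artefacts:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Artefact:
    name: str          # e.g. "Section 4"
    chat_id: str       # which chat in the project holds the current draft
    last_modified: datetime

def project_index(artefacts: list[Artefact], now: datetime) -> list[str]:
    """Render a simple last-modified index, newest first."""
    newest_first = sorted(artefacts, key=lambda a: a.last_modified, reverse=True)
    return [
        f"{a.name} – updated {(now - a.last_modified).days} days ago"
        for a in newest_first
    ]

now = datetime(2024, 6, 10)
index = project_index(
    [
        Artefact("Section 4", "chat-17", datetime(2024, 6, 7)),
        Artefact("Annex B", "chat-22", datetime(2024, 6, 9)),
    ],
    now,
)
print(index)
```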
3. Version-aware document round-tripping
Problem: ChatGPT generates a document, I download and edit it locally, then re-upload and ask for targeted changes. The system often treats this as a brand-new file, without any sense of the previous version, and sometimes rewrites more than requested.
Request:
- When a user uploads a document that likely originated from ChatGPT, offer to "link this to the previous version".
- Internally, track version lineage so the system can:
  - show a simple "changes since last ChatGPT version" diff, and
  - reason about what has changed since it last touched the file.
- Provide a "surgical edit" mode:
  - I specify the scope (for example, "only update the box in Section 3.2" or "only rewrite bullets under Heading 4.1"),
  - and the model guarantees all other sections remain untouched, including styles and references.
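The "changes since last ChatGPT version" part is essentially a line diff between the stored lineage version and the re-uploaded file. A minimal sketch using Python's standard difflib (the function name and file labels are illustrative):

```python
import difflib

def changes_since_last_version(old_text: str, new_text: str) -> list[str]:
    """Unified diff between the stored lineage version and the re-upload."""
    return list(difflib.unified_diff(
        old_text.splitlines(),
        new_text.splitlines(),
        fromfile="last ChatGPT version",
        tofile="re-uploaded file",
        lineterm="",
    ))

old = "Section 3.1 intro\nSection 3.2 box: old figures\nSection 3.3 outlook"
new = "Section 3.1 intro\nSection 3.2 box: revised 2024 figures\nSection 3.3 outlook"
for line in changes_since_last_version(old, new):
    print(line)
```

A "surgical edit" guarantee would amount to checking that this diff touches only lines inside the user-specified scope.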
Why this matters
These changes would move ChatGPT from "very smart single-session assistant" to a credible long-term co-author for users who are actually producing complex, versioned work over months: policy reports, legal documents, academic outputs, technical documentation, etc.
Right now a lot of the friction is not "intelligence" but lack of stable project context and version awareness. Fixing that would unlock a huge amount of value for heavy, professional users.
Hi
Sometimes when I'm chatting with ChatGPT, I ask follow-up questions that take the conversation in a different direction just to clarify a specific section. However, this can pull the discussion far away from my main topic or goal.
Right now, the only way to get back on track is to either manually steer the conversation back (which is often messy) or start a new chat (which creates many separate chats and loses the original context). The new "branch" feature helps a bit, but it can also create too many parallel branches and become hard to manage.
I would like a feature that allows me to "drop" or discard a conversation from a certain point onward. In other words: choose a previous message in the chat, discard everything that happened after it (both my messages and ChatGPT's), and continue the conversation from that point as the main thread, without creating a separate new chat.
This would make it much easier to experiment with side questions, then cleanly revert to the main concept once I am done exploring a tangent.
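Mechanically, the requested "drop from this point" operation is just truncating the message list of the same thread. A toy sketch (the message format is invented for illustration):

```python
def revert_to(messages: list[dict], index: int) -> list[dict]:
    """Discard everything after the chosen message and continue
    the same thread from there (no new chat is created)."""
    if not 0 <= index < len(messages):
        raise IndexError("no such message in this chat")
    return messages[: index + 1]

chat = [
    {"role": "user", "content": "Main topic question"},
    {"role": "assistant", "content": "Main topic answer"},
    {"role": "user", "content": "Tangent question"},
    {"role": "assistant", "content": "Tangent answer"},
]
trimmed = revert_to(chat, 1)   # keep everything up to the main-topic answer
```

The UI difference from branching is that the tangent is discarded rather than kept as a parallel thread.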
Feature request: Project-level external knowledge sources (Google Drive folders)
I'd like the ability to link a specific Google Drive folder to a ChatGPT Project (not just a single chat).
The folder would be explicitly user-approved via OAuth and scoped to that folder and its subfolders only. ChatGPT could index and auto-sync those files and use them as retrieval-based context across all chats within the Project, instead of requiring files to be uploaded manually (as it works today).
This would differ from current connectors/company knowledge, which are chat-invoked rather than persistent at the Project level.
Ideal controls:
- Folder-level scoping and preview before approval
- Explicit indexing toggle (search-only vs indexed)
- Clear re-sync status and deletion propagation
- Retrieval-only usage (not global memory or training)
This would greatly improve long-running projects like research, coursework, codebases, and documentation workflows.
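To make those controls concrete, here is what a per-Project connector configuration might look like. Every field name below is hypothetical; none of this is a real ChatGPT or Google Drive API:

```python
# Hypothetical per-Project connector configuration illustrating the
# controls listed above; no real API is being described here.
drive_source = {
    "provider": "google_drive",
    "folder_id": "PROJECT-FOLDER-ID",  # granted via OAuth, scoped to folder + subfolders
    "include_subfolders": True,
    "mode": "indexed",                 # "indexed" (auto-sync) or "search_only"
    "propagate_deletions": True,       # removing a file in Drive removes it from the index
    "usage": "retrieval_only",         # never global memory or training
}
print(drive_source["mode"], drive_source["usage"])
```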
I would really like to have a "stored prompt bar". That is not the same as the prompt history: some prompts are repetitive, and I would like to be able to organise them inside the ChatGPT app itself. Today I believe many users rely on third-party tools to store their favourite prompts, and I don't see why this could not be offered inside the ChatGPT app.
/Pablo Magana
[my email]
Feature request: Troubleshoot / Walkthrough Mode
Add an optional mode for technical help where ChatGPT provides one command or action at a time, pauses for explicit confirmation ("worked / didn't work") or pasted output, and only proceeds after results are known.
This would prevent invalid advice after early failures, reduce wasted responses, and better match real troubleshooting workflows (e.g., Plex, Linux, networking).
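The proposed mode is essentially a gated loop: issue one step, wait for the outcome, and stop at the first failure instead of guessing onward. A minimal sketch in Python, where the report callback stands in for the user's "worked / didn't work" reply (the commands shown are just example troubleshooting steps):

```python
def walkthrough(steps, report):
    """Issue one command at a time and only continue once the user
    confirms the previous one worked; stop at the first failure."""
    for i, step in enumerate(steps):
        outcome = report(step)    # "worked", "didn't work", or pasted output
        if outcome != "worked":
            return i              # index of the step that failed
    return None                   # every step worked

steps = [
    "systemctl status plexmediaserver",
    "journalctl -u plexmediaserver -n 50",
]
failed_at = walkthrough(steps, report=lambda s: "worked")
```

The key property is that step i+1 is never issued while the result of step i is unknown, which is exactly what prevents invalid advice after an early failure.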
Hello OpenAI Team,
I would like to propose a feature to add somewhere in the future: pinning a specific response from ChatGPT. For example, the AI has built a roadmap or guideline on a specific topic, and I, the user, want to pin the main roadmap and focus on specific phases later without being afraid of losing the main list. Because there is no such feature, users with that use case need to copy and paste the roadmap somewhere else and look at different places at the same time, which is not very convenient.
Thanks,
Shamil
Hello OpenAI team,
I'd like to suggest a workflow improvement for Custom GPTs that would significantly reduce friction when managing lots of chats.
Feature request:
- Add an optional capability that allows a Custom GPT to set (or at least reliably suggest, with a one-click "Apply") the chat title, driven by the GPT's Instructions.
- Example instruction concept: "Set chat title to: – ", where the GPT extracts those fields from the user's first message (or from provided text).
- If the GPT cannot confidently extract the fields, it should ask a single short question and then set the title.
- Provide a fallback: if is missing, title becomes just .
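To illustrate the extraction idea, here is a toy version of such a title rule. The pattern and field names are invented for this example (the original template's placeholders are not shown above):

```python
import re

# Hypothetical rule: extract a client name and topic from the first message,
# e.g. "Quote request from Acme Corp about solar panels".
TITLE_PATTERN = re.compile(r"from (?P<client>[\w ]+?) about (?P<topic>[\w ]+)")

def suggest_title(first_message: str, fallback: str = "New chat") -> str:
    m = TITLE_PATTERN.search(first_message)
    if m:
        return f"{m.group('client')} – {m.group('topic')}"
    return fallback   # could instead trigger one short clarifying question

print(suggest_title("Quote request from Acme Corp about solar panels"))
```

In the real feature, the pattern would come from the GPT's Instructions rather than a hard-coded regex.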
Why this matters:
- Renaming chats manually is repetitive and error-prone, especially when running many conversations that follow the same pattern.
- Title automation would make chats far easier to organise and retrieve, particularly inside Projects.
Thanks for considering this improvement.
Here is my idea for addressing this clunkiness, which I would like to request as a new feature on ChatGPT. As a writer, this is my suggestion for how to make the platform easier for writers to use; please consider adding it to the app.
Feature request: Linked Project Chats for Writers
I'm a long-form fiction writer and worldbuilder. A major limitation right now is having to choose between:
- one massive chat clogged with lore and brainstorming, or
- multiple chats where context, continuity, and character voice drift over time.
A huge improvement would be a linked project system where:
- one AI instance is shared across multiple chats (e.g. a "Lore / Canon" chat and a "Chapter Writing" chat)
- both chats reference the same locked canon, characters, and rules
- writers can create freely first, then later ask the AI to cleanly split an existing long chat into:
  - a lore bible
  - ordered chapters
This would dramatically reduce continuity errors, cognitive overload, and frustration for serious writers, while making long projects easier to read, revisit, and export.
I wouldn't want royalties – I just want better tools for writers.
I use ChatGPT heavily for long-running work, study, and creative projects and currently have a very large number of saved chats.
There is no way to bulk-select chats to archive, delete, or move them into Projects, which makes ongoing organisation time-consuming and discourages long-term reuse of prior work.
Suggested improvements:
⢠Multi-select chats in the sidebar
⢠Bulk chat selection to archive, delete, move to a Project
⢠Optional filters (date, project, activity)
I currently use Projects as folders, but I have a whole heap of chats that I need to file, archive or delete, and itâs super tedious doing it one by one.
It would be more convenient to have checkboxes in the "If you want, I can:" section to choose the topics I would like to continue with (or a similar, easier way to choose them), rather than having to type them into the prompt bar.
I use GPT mostly to dig into different themes; for example, I am trying to learn the SOLID patterns.
In this case it would be helpful to have threads inside a chat: a separate sub-conversation for asking clarifying questions, started by highlighting a specific passage (e.g. "5. Dependency …"), so the main chat does not get blown up.
For a better overview, those threads should be displayed under the specific chat.
This would allow me to use GPT as a learning tool with a look-up-later feature.
Idea: Chat as a Workspace
A possible direction is to allow the AI to create and manage lightweight UI elements directly inside a chat (panels, pinned fields, summaries), driven entirely by the conversation itself. The chat text remains the source of truth (effectively the "source code"), while the UI is a dynamic visual projection built and continuously updated by the model.
To achieve this, I suspect no fundamentally new data structure is required. Most of the needed state already exists in the message history, system instructions, and inferred conversational context. The feature mainly adds a mapping layer that makes this implicit structure explicit and user-visible/interactable.
This would let chats behave like app-like workspaces (progress tracking, learning, planning, etc.) without turning chat into a separate application model: the UI is just a lens over the conversation, not a parallel workflow.
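A toy sketch of that mapping layer (the inline directive syntax is entirely invented for illustration): the model emits lightweight directives in its messages, and the client rebuilds the panels from the full history, so the chat text stays the single source of truth:

```python
import re

# Hypothetical inline directive the model could emit, e.g.
#   [[pin: Progress = chapter 2 of 5]]
DIRECTIVE = re.compile(r"\[\[pin:\s*(?P<title>[^=]+?)\s*=\s*(?P<body>[^\]]+)\]\]")

def project_ui(messages: list[str]) -> dict[str, str]:
    """Rebuild the pinned panels from the full message history.
    Later directives overwrite earlier ones, so the UI is always
    just a projection of the conversation, never parallel state."""
    panels = {}
    for msg in messages:
        for m in DIRECTIVE.finditer(msg):
            panels[m.group("title")] = m.group("body")
    return panels

history = [
    "Let's track progress. [[pin: Progress = chapter 1 of 5]]",
    "Nice work! [[pin: Progress = chapter 2 of 5]]",
]
print(project_ui(history))
```

Because the panels are derived rather than stored, replaying the history always reproduces the workspace, which is exactly the "UI as a lens over the conversation" property described above.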