Custom GPTs: Let's Create a Wishlist

Hey everyone,
we’ve now had more than a week to fiddle around with the new custom GPTs and to get a first impression of what works, what doesn’t, what they are useful for, and what could be improved. I thought it might be a good idea to collect our wishes and suggestions so the devs can hear them.

Furthermore, seeing what other people wish for may provide interesting insights into how others are utilizing this emerging technology. Let’s gather these wishes and suggestions to help us all make the most of custom GPTs.


I’m gonna start:

1. Documentation/Information
I would love to be able to give users more information and/or a guide on how to use a GPT. My more useful GPTs are often more useful because they are instructed to react in a certain way to prompts etc. However, only I, the creator, know how to prompt these GPTs and what their capabilities are. A couple more sentences below the description, or maybe behind an info icon, could add a lot of value to GPTs.

2. Key:Value Buttons as “conversation starters”.
In some of my GPTs I use the conversation starters to “configure” my conversation, or rather to add conversation-specific context. It works great, but the prompts are much longer than the available space in the buttons and not completely readable. For example, in my Vue3 GPT, users should be able to tell the assistant at the beginning which syntax etc. to use. So far I have this conversation starter:

Composition, TypeScript, <script setup>: Provide all code examples using TypeScript and the Composition API with the <script setup lang="ts"> syntax.

As you can see, I had to put the key information at the beginning as a workaround. I would love to instead have a “TypeScript” button or similar that injects a certain prompt.
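To make the idea concrete, here is a minimal sketch of how such key:value buttons could work: a short label on the button maps to the full prompt it injects. The labels, prompts, and function are purely illustrative assumptions; no such feature exists in ChatGPT today.

```python
# Hypothetical sketch of "key:value" starter buttons: the short label
# shown on the button maps to the full prompt it would inject.
# All names and prompts here are made up for illustration.
STARTER_BUTTONS = {
    "TypeScript": (
        "Provide all code examples using TypeScript and the Composition "
        'API with the <script setup lang="ts"> syntax.'
    ),
    "JavaScript": "Provide all code examples using JavaScript and the Options API.",
}

def inject_prompt(label: str) -> str:
    """Return the full prompt that the labeled button would send."""
    return STARTER_BUTTONS[label]
```

The button would stay short and readable, while the injected prompt can be arbitrarily long.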

3. Delay the conversation naming
Building on the previous point: I think the automatic generation of the conversation title should be delayed when a conversation starter is used. With the example above, all my conversations end up named something like “TS Composition API examples”.

4. Action Buttons
I think it might add great value if there were configurable action buttons. Like the conversation starters, just not only at the start of the conversation but available all the time. Simply a button that injects a prompt. It might be tricky to integrate this neatly with the UI, but I am sure you’ll figure something out ;)


Well, I’ll just keep going:

5. Better stats and ratings
I’m sure there is stuff in the pipeline with the marketplace incoming. But for now: The number of recurring users would be super interesting as it is a much better indicator for the usefulness of a GPT than simply the number of conversations.


6. Direct File Upload in Main Dialog (in particular of Custom GPT)


7. Immediate knowledge base modification in GPT Chats

Currently, modifications to the Knowledge Base of a GPT become effective only in subsequent chats and not immediately (i.e. not “on the fly”) in the ongoing chat. This limitation impairs the usage flow, especially when adjustments to the knowledge base are required in the active chat. This currently necessitates closing and restarting the chat, which interrupts the flow and requires rebuilding the context.

The ability to modify the Knowledge Base (adding or deleting files) should be improved so that changes become effective immediately and “on the fly” in the ongoing chat. This would enable a smooth and consistent chat experience even when changes are made during the chat, making the interaction with the GPT system more natural and user-friendly.

Version control
What I’m looking for is some kind of version control during development, including a repository of the implemented prompts per version, or an integration with, for example, GitHub for that purpose.

The UI for creating a GPT does not produce a chat in one’s list of chats. After logging out and back in, the conversation used to create the GPT is lost.

I find it hard to retrieve all the prompts I used while creating the GPT. I might want to revert to a specific prompt.

I remember the statement (was it Elon Musk?) that data labelers are going to be the future application developers. I think that’s wrong. I think the new kind of application will be a GPT’s “ask me anything regarding a domain”. So the developer of the future is, in my opinion, a GPT developer. But some version control is necessary.

A solution for the Actions “ResponseTooLargeError”
In addition to version control: a solution for the “ResponseTooLargeError” when using Actions. IMO Actions are the way to go to include an organization’s databases containing the organization’s knowledge, but the “ResponseTooLargeError” prohibits such applications. In fact, there is a real business need for a specific GPT at my employer. In that GPT/application, access to a specific public OpenAPI interface is a necessity, for legal purposes, because the OpenAPI in question is the single source of truth, so to speak, as any organization’s database would be regarding that organization’s knowledge.
As long as this error exists, such applications are not achievable…
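One mitigation, assuming the error is triggered by the sheer size of the Action’s JSON response, would be to paginate on the API side so each response stays under a byte budget. The sketch below is illustrative; the 90 kB default is a guess, not a documented limit.

```python
import json

def paginate(records, max_bytes=90_000):
    """Split a list of records into pages whose serialized JSON stays
    under max_bytes. A rough guard against over-large Action responses;
    the byte limit is an assumption, not a documented figure."""
    pages, current = [], []
    for rec in records:
        candidate = current + [rec]
        if current and len(json.dumps(candidate)) > max_bytes:
            pages.append(current)  # current page is full, start a new one
            current = [rec]
        else:
            current = candidate
    if current:
        pages.append(current)
    return pages
```

The Action would then return one page per call plus a cursor, and the GPT would be instructed to request further pages as needed.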


First, I employed ChatGPT to understand your points:

  1. Version Control: Implement version control with a repository for prompts per version and explore GitHub integration.
  2. UI/Chat Issues: Address issues with GPT’s UI not saving conversations and data loss upon re-login.
  3. Prompt Retrieval: Improve the retrieval of used prompts and enable reverting to specific ones.
  4. Future Development: Contradicts the view of data labelers as future developers, favoring GPT developers focused on domain-specific applications. Stresses the need for version control.
  5. ‘ResponseTooLargeError’ Solution: Recommends using Actions to address this error, facilitating integration with organizational databases. Highlights the necessity for GPTs to access specific OpenAPIs for legal and business purposes, noting current limitations due to this error.

Regarding item 2:
Is it possible that you have not yet discovered the ‘configuration’ tab, the second tab when creating a new GPT under EXPLORE, along with the drop-down menu featuring the edit function, and the Save and Confirm buttons for existing GPTs? In fact, I’ve realized it’s far more effective to bypass the ‘Create’ tab entirely and use only the ‘configuration’ tab. You can continuously modify its entries in any created GPT using the edit function, but these changes will only be effective in newly created chats (this last point pertains to my improvement suggestion). All created GPTs are persistent, as are their chats. They are listed as though they were regular ChatGPT chats. From this perspective, ChatGPT is the single GPT among your GPTs which has been pre-configured by OpenAI.

8. Automated integration of Custom-GPT configuration adjustments

Custom-GPTs currently possess two distinct capabilities: firstly, they can display (upon request by prompt) their base configuration, and secondly, they can generate (upon request by prompt) text suggestions for fine-tuning this configuration to achieve specific response behaviors.

However, a significant limitation emerges with the second functionality. When users prompt Custom-GPTs to generate text suggestions for configuration fine-tuning, integrating these suggestions into the base configuration is not automatic. Instead, users must manually insert the approved or desired changes via the edit function to make them effective in subsequent chats. This manual integration process, while feasible, is not user-friendly.

To streamline this process, I propose the implementation of a feature that allows Custom-GPTs, when prompted to do so, to not only display their current base configuration but also to directly and autonomously implement explicitly requested changes in the base configuration.

For example, a user could prompt: “Integrate this proposed configuration text into your base configuration”, or “Change your response configuration to avoid cliches”. The GPT would then interpret these instructions and autonomously adjust its configuration accordingly.

The benefits of this proposed functionality are:

  • User-Friendliness: It directly integrates changes, thereby eliminating the need for users to go through the edit mode.
  • Efficiency: It enables quicker and more straightforward configuration adjustments.

9. Enhancing Custom-GPTs with Simple Inheritance

Current State: Custom-GPTs are configured individually, leading to repetitive tasks and a lack of a unified structure. This results in inefficiencies in both development and management.

Proposal for Improvement with Simple Inheritance:

  1. Utilizing Existing Custom-GPT Configurations for New Custom-GPTs: By inheriting from a higher-level or reference Custom-GPT configuration, new Custom-GPTs can leverage pre-established settings. This method reduces the need to build or copy each configuration from the ground up, enhancing efficiency.
  2. Organized Display of Custom-GPTs in a Directory-like Format: Simple inheritance enables a more structured arrangement of Custom-GPTs, akin to a file system in an operating system like Windows. This organization aids in easily identifying and managing different GPT variants.
  3. Concurrent Configuration Updates in Subsequent Custom-GPTs When Modifying a Parent Configuration: Altering the configuration of a higher-level Custom-GPT automatically updates all related Custom-GPTs derived from it. This feature ensures uniformity and streamlines the process of maintaining and updating multiple Custom-GPTs.

This approach aims to streamline the creation and administration of Custom-GPTs, making it more coherent and user-friendly.
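A merge of parent and child configurations could sketch the proposed single inheritance. The field names (`model`, `instructions`, `name`) and the append-instructions rule are illustrative assumptions, not how GPT configurations are actually stored.

```python
def inherit_config(parent: dict, child: dict) -> dict:
    """Sketch of simple inheritance: the child starts from the parent's
    configuration and overrides or extends its own fields. Field names
    are made up for illustration."""
    merged = dict(parent)
    for key, value in child.items():
        if key == "instructions" and key in parent:
            # Append rather than replace, so shared base instructions survive.
            merged[key] = parent[key] + "\n" + value
        else:
            merged[key] = value
    return merged

base = {"model": "gpt-4", "instructions": "Be concise."}
vue_gpt = inherit_config(base, {"name": "Vue3 GPT",
                                "instructions": "Answer with Vue 3 examples."})
```

Updating `base` and re-deriving the children would then give the concurrent-update behavior described in point 3.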

10. GPTs running on GPT-3.5-turbo
These would have very limited capabilities of course. No browsing, DALL-E, Code Interpreter… But maybe it is viable to fine-tune a GPT-3.5 version to have at least Knowledge Retrieval and Actions capabilities?
Even without any of these capabilities I see use cases for custom GPTs running on 3.5-turbo. For example, I have an EmoGPT - Emoji Finder which simply suggests emojis for keywords. GPT-4 is vastly overpowered for this simple task, and a 3.5 version would not only be much more resource-efficient but also much faster. I am sure there are many other cases where 3.5 with carefully crafted custom instructions could be a replacement for a GPT-4 GPT.

11. GPT editing via API
No, I am not talking about the Assistants API. I am talking about the option to edit custom GPTs programmatically. I think this is a must-have if you want to manage a toolbox of high-quality custom GPTs: for example, to update knowledge files automatically, or to edit instructions for more than one GPT (I have a couple that share parts of their instructions, and sometimes I need to update all of them by hand).
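Until such an API exists, one workaround is to keep shared instruction fragments in a single place and assemble each GPT’s instructions from them, so a shared block only has to be edited once. This is a local sketch with made-up fragment names; the assembled text would still be pasted into the GPT editor by hand.

```python
# Shared instruction fragments, maintained in one place. The keys and
# texts are illustrative assumptions.
SHARED = {
    "tone": "Answer concisely and cite sources where possible.",
    "formatting": "Use fenced code blocks for all code.",
}

def build_instructions(specific: str, shared_keys: list[str]) -> str:
    """Assemble a GPT's full instructions from shared fragments plus
    its GPT-specific part."""
    parts = [SHARED[k] for k in shared_keys]
    parts.append(specific)
    return "\n\n".join(parts)
```

Changing `SHARED["tone"]` once then updates every GPT rebuilt from it, approximating what a real editing API would allow automatically.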

12. Adjustable Column Width of Directory of Chats and GPTs

In the current layout of the ChatGPT interface, the left column displaying the directory of chats and GPTs has a fixed width. This poses a challenge for some users, as it makes reading longer chat titles difficult. Currently, it’s necessary to click on the Edit function and then select ‘Rename’ to view the full title of a single chat.

One consideration could be whether a feature allowing the adjustment of column width via drag & drop might be beneficial. This would enable users to view several long titles in their entirety simultaneously, without needing to take additional steps.

13. Two-Tiered Sorting of Chats in ChatGPT for Improved Organization and Accessibility

Currently, chats in the left column are sorted by the last date of use. This display is suboptimal, especially when it comes to finding older chats related to a specific task (GPT).

To solve this problem, I propose a two-tiered sorting method (similar to what is possible in the history column of Firefox, where sorting by date and website is available). In ChatGPT, the primary sorting would be by the respective GPT, while the secondary sorting would consider the last date of use. A reverse sorting hierarchy could theoretically be conceivable, but might be less useful.

This two-tiered sorting would make it significantly easier for users to have their chats displayed both thematically and chronologically, greatly simplifying the task of finding specific chats related to certain topics, tasks, or GPTs.
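The proposed ordering can be sketched as a stable two-pass sort: first by last use (newest first), then by owning GPT. The chat records below are invented examples.

```python
from datetime import date

# Illustrative chat records; the field names are assumptions.
chats = [
    {"gpt": "Vue3 GPT", "title": "Router setup", "last_used": date(2023, 11, 20)},
    {"gpt": "EmoGPT", "title": "Party emojis", "last_used": date(2023, 11, 22)},
    {"gpt": "Vue3 GPT", "title": "Pinia store", "last_used": date(2023, 11, 25)},
]

# Secondary key first, then primary key: Python's sort is stable, so
# sorting by GPT afterwards preserves the newest-first order within
# each GPT's group.
by_date = sorted(chats, key=lambda c: c["last_used"], reverse=True)
two_tiered = sorted(by_date, key=lambda c: c["gpt"])
```

Each GPT’s chats end up grouped together, most recent first, which is the directory-like view the suggestion describes.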

14. Customizable UI
The creator should be able to customize the interface of a GPT. A simple example would be the background (which could be AI-generated), but also the welcome page, navigation bars, banners, widgets, buttons, icons, etc.

15. Transfer of Context Information Between Chat Sessions

Current State: Currently, the GPT-internal context of a chat session with a GPT model cannot be transferred to other sessions. This leads to information loss and repetition when discussing similar topics across different sessions.

Improvement Suggestions:

  1. Context Transfer Function: Implement a feature enabling the transfer of GPT-stored context from one session to the beginning or during another session. This would allow seamless continuation of discussions or tasks in the new session, considering the context of the previous session.
  2. Integration of Context into GPT Configurations: Provide the option to incorporate the GPT-internal context of a session into the configuration of the same or a different GPT model.

Goal: Enhance continuity and efficiency in utilizing various chat narratives and/or GPT models by transferring relevant context information between sessions and/or GPT models.
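As a stopgap for the proposed transfer function, the previous session could be condensed into a preamble that is pasted at the start of the new session. This is a manual-workaround sketch; the message format and the character budget are assumptions.

```python
def context_preamble(messages: list[dict], max_chars: int = 4000) -> str:
    """Condense a previous session into a preamble for a new session.
    Keeps the most recent portion if the transcript exceeds the budget;
    the 4000-character default is an arbitrary assumption."""
    lines = [f"{m['role']}: {m['content']}" for m in messages]
    text = "\n".join(lines)
    if len(text) > max_chars:
        text = text[-max_chars:]  # keep the most recent context
    return "Context from a previous session:\n" + text
```

A built-in version of this (improvement suggestion 1) could transfer the GPT-internal state directly instead of a lossy text summary.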

16. Drag-and-Drop for Inserting Prompt Texts

Issue Identified: Currently, inserting texts into the prompt field is a manual process involving manual editing or manual copying and pasting. This process can be time-consuming, especially when frequently switching between different text sources.

Proposed Solution: Implement a drag-and-drop feature that allows users to directly drag texts from external documents or web pages into the prompt field. This feature would enhance efficiency in text transfer and improve user experience.

  1. Development of an Intuitive Drag-and-Drop Interface: The interface should be designed to be easy to use and seamlessly integrate into the existing design.
  2. Compatibility with Common File Formats and Browsers: The feature should be compatible with various file formats such as .txt, .docx, and HTML, and function smoothly across all major browsers.
  3. Introduction of a Highlight Feature (nice to have): When hovering over the prompt field with a selected text, the field should be highlighted to give visual feedback to the user that the text can be dropped there.
  4. User-friendly Error Messages (nice to have): In case of unsupported file formats or errors during the transfer, clear and understandable error messages should be displayed.

Additional Benefits: This improvement would not only increase efficiency but could also enhance the user experience by simplifying and speeding up the work process.

17. Integration of an Internal IDE in Chat-GPT for More Efficient Code Testing

Problem Identification: In its current state, Chat-GPT does not detect all formal errors in programming, especially those that only become apparent during runtime. The existing process requires developers to test Chat-GPT generated code in their own development environments and report back errors, which is error-prone and time-consuming. This is particularly inefficient with more complex programs.

Proposed Solution: Implementing an internal development environment within Chat-GPT is proposed to enhance the testing and debugging of program code. This environment would function like a plug-in, enabling Chat-GPT to autonomously test the generated code. It would identify and rectify runtime errors before presenting the code to developers for further testing.


  1. Error Reduction: Internal testing of code could detect and correct formal and runtime errors early, reducing the error rate in the final code.
  2. Efficiency Improvement: Developers would receive pre-tested code, allowing them to focus more on application-specific testing and code development.
  3. Time Saving: The current iterative process of error reporting and correction would be streamlined, leading to faster code development.
  4. Enhanced User Experience: Integrating a development environment would improve the usability of Chat-GPT for programming projects, making it a more comprehensive and autonomous tool.

In conclusion, this proposal is not intended to replace traditional development environments but to complement them, enhancing the effectiveness of Chat-GPT in programming assistance tasks.

18. National, Fee-Based Phone Numbers for ChatGPT to Improve Accessibility and Generate Revenue

Currently, there is no telephone access to ChatGPT. This limits users without smartphones. The introduction of national, fee-based phone numbers for voice chats with ChatGPT is proposed. This solution would enable access for a wider user base and create barrier-free accessibility. At the same time, the fee-based calls could contribute to covering the operational costs of ChatGPT.

An important user group would be people of all ages without access to modern communication technologies. They could call ChatGPT, for example, to combat loneliness or to clarify everyday questions. This would significantly improve their quality of life. Implementing such a phone number would provide added value for many user groups and represent a new source of income for the operator of ChatGPT.

19. WhatsApp Address for ChatGPT

ChatGPT is accessible only through a specialized web portal, which does not suit all potential users. A WhatsApp address for ChatGPT could offer a solution. Users would then be able to interact with ChatGPT easily through the WhatsApp app installed on their smartphones.

WhatsApp, known for its user-friendliness and wide reach, is ideal for bringing ChatGPT closer to a broader audience. Many users are already familiar with WhatsApp and use it regularly. Accessing ChatGPT via WhatsApp eliminates the need for navigating through web portals, making the use of ChatGPT more direct and straightforward. With the ability to send text and voice messages, interacting with ChatGPT becomes intuitive and simple. This expansion of access could help make ChatGPT more easily available to a wide range of users, regardless of their technical expertise or personal preferences.

I know this is an old thread, but I’m hoping the devs might still be monitoring.

Multi-turn support.
I want to be able to create multiple prompts that are fed to the system. The reasons for this are:

  1. Gets me past the 8k char limit of a single prompt when the context window is much larger.
  2. It lets me design a chat flow. For example, if I have a bot that helps me practice my writing I might have a flow like:
  • Ask the user how much experience they have writing.
  • Ask the user what genre of writing they want to practice (non-fiction, fantasy…).
  • Provide some example writing prompts.
  • Collect feedback from the user about which prompt to use and any modifications.
  • The user writes based on the prompt.
  • GPT provides feedback.

While I limp along making horridly complex prompts to make this happen, it would be much better to have multi-turn support.

Some options to consider:

  1. Any turn can either end by moving right on to the next prompt or by collecting input from the user.
  2. I am fine with a version that has only a linear flow, but a flow where I can tell the GPT “if they said ‘x’, go to this prompt, otherwise go to a different prompt” would be amazeballs.
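The linear version of the writing-practice flow above can be modeled as a plain step table that a GPT (or its host platform) would walk through. All step names and prompts below are illustrative; a branching variant would simply make `next` depend on the user’s answer.

```python
# Sketch of linear multi-turn support: each step has the prompt to show
# and the step that follows. Names and prompts are made up.
FLOW = {
    "start": {"prompt": "How much experience do you have writing?",
              "next": "genre"},
    "genre": {"prompt": "What genre do you want to practice?",
              "next": "examples"},
    "examples": {"prompt": "Here are some example writing prompts.",
                 "next": "feedback"},
    "feedback": {"prompt": "Which prompt would you like, and any changes?",
                 "next": None},
}

def run_flow(flow, start="start"):
    """Yield each step's prompt in order until the flow ends."""
    step = start
    while step is not None:
        yield flow[step]["prompt"]
        step = flow[step]["next"]
```

Each entry stays well under any single-prompt character limit, which is the point: the 8k cap applies per prompt, not to the flow as a whole.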

I am aware I can do this in code, but:

  • I don’t want to host this myself.
  • I want the visibility of the GPT store.
  • It’s better for OpenAI if this lives in your store.