In a Custom GPT, can I use Triggers to have the GPT perform message actions?

I have a multi-shot SYSTEM PROMPT that walks a user through developing a plan. The initial prompt sets the context for the user and tells them what information I need from them first, in a certain structure. I also give them an example in that initial prompt shot.

From that point, I want to ‘trigger’ the next User Message and pass a template into that next User Message for them to fill out, so that the structure is maintained and I don’t have to rely on them copying and pasting the structure from the original SYSTEM PROMPT shot.

I have tried several different approaches in code (Python) to manage state, trigger the next message, and simulate the flow, but to no avail.

Perhaps a GPT is not the way to go, and I should move on to the Assistants API, but this seemed like a logical approach.

For reference, the SYSTEM PROMPT I have built walks the user through 6 sets of questions before generating their plan, and I have a large document set behind the GPT that responds to the plan with actions and goals - based on their inputs.

I would really appreciate any insight on how to manipulate the message to pass a template through from the step-through SYSTEM PROMPT at appropriate intervals.

Thanks for any ideas in advance.

Hi!
From what I understand, you want to dynamically adjust part of the instructions based on the state of the conversation.

Off the top of my head, I would suggest using an Action that executes on every turn to fetch the currently needed part of the instructions.
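Something like the following sketch could sit behind that Action. Everything here is a placeholder I am making up to show the shape (the step numbers, the template text, and the function name are not from your GPT): the Action's backend just maps the current step to the template the GPT should show next.

```python
# Hypothetical backend logic for an Action that returns the template
# for the current step. In practice you would expose this via a small
# HTTP endpoint described by an OpenAPI schema in the GPT's Actions.

STEP_TEMPLATES = {
    1: (
        "Please reply using this template:\n"
        "Goal: <your goal>\n"
        "Timeframe: <weeks or months>\n"
        "Constraints: <anything limiting you>"
    ),
    2: "Placeholder template for step 2...",
}

def get_template(step: int) -> dict:
    """Return the payload the GPT should relay to the user for this step."""
    template = STEP_TEMPLATES.get(step)
    if template is None:
        # Unknown step: return an error the GPT can surface gracefully.
        return {"error": f"unknown step {step}"}
    return {"step": step, "template": template}
```

The GPT's instructions would then say something like "before each step, call this Action with the step number and show the returned template verbatim."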

Does this make sense to you and for your approach?

Thank you. For a bit more clarity, with a visual: the prompt is written in Python, which produces the output you see in the image. I essentially built the System Prompt in Python, believing that I could control the structure of the presentation more precisely.

The process involves a six- or seven-step prompt chain, all contained within the GPT’s system prompt. At the end of the chain, the GPT creates a comprehensive plan for the user, a method I have tested manually and extensively up to this point.

However, I would like to automate a part of this process so that the user doesn’t need to copy or paste information already provided by the system prompt, such as ‘Use this template to complete your response.’

I am currently working on getting the GPT to pass a 3-line template from the initial system prompt once the GPT starts up:

  • It welcomes the user and explains the GPT’s purpose.

  • It asks the user to provide three pieces of information.

  • It provides an example of the information needed in the desired structure.

  • It asks the user to provide their information in the new message box.

At this point, I want to include a 3-line template from the previous system prompt message, with the required structure, and pass it through to the new User message (shown below).

This does two things:

  1. It provides the user with the correct starting point and definition.
  2. They don’t have to copy/paste it from above (initial prompt).
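Since my prompt is already built in Python, the fallback I am considering is to have the system prompt itself instruct the model to repeat the template at the end of each step, so the user can copy it in one click instead of scrolling back. A rough sketch of that instruction fragment (the three template lines here are placeholders, not my real ones):

```python
# Hypothetical fragment of the Python-assembled system prompt.
# A GPT cannot prefill the user's message box, so the closest
# approximation is telling the model to re-emit the template each turn.

TEMPLATE_INSTRUCTION = (
    "After your welcome message, always repeat the three-line template "
    "the user must fill in, exactly as written, so they can copy it:\n\n"
    "Name: <...>\n"
    "Objective: <...>\n"
    "Timeline: <...>\n\n"
    "Do not move to the next step until the user's reply follows this "
    "structure."
)
```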

I have been looking for ‘Actions’ examples to try to figure this out. It may not be possible, because my level of abstraction from the chatbot interface library may prevent me from manipulating a new user message box in this way. I just thought this might be easier than it has turned out to be.

Thoughts based on the example?

Ok. I now understand that your aim is to prefill the chat box of ChatGPT for your custom GPT.
This is not possible at the moment via the tools provided for Custom GPTs alone.

I can offer some ideas for working around this, but they all require user action: for example, a companion browser extension, forwarding the user to another webpage, or offering a download link to a pre-filled document that the user later uploads again.

Using the Assistants API or writing something similar using chat.completions would be another option. Then you have more control over the user experience.
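If you go that route, the prefill becomes trivial, because you own the front end: your app drops the template into the input box before the user types. A minimal sketch of the pure logic, assuming made-up template text and a hypothetical step counter (the actual API call is shown only as a comment for shape):

```python
# Sketch: with your own app on chat.completions, the UI can prefill
# the input box itself. Templates and model name are assumptions.

SYSTEM_PROMPT = "You walk the user through a six-step planning flow."
STEP_TEMPLATES = [
    "Name: <...>\nObjective: <...>\nTimeline: <...>",
    # ...one placeholder template per remaining step...
]

def prefill_for_step(step: int) -> str:
    """Text the UI places in the input box before the user types."""
    return STEP_TEMPLATES[step] if 0 <= step < len(STEP_TEMPLATES) else ""

def build_messages(history: list, user_text: str) -> list:
    """Assemble the messages list for a chat.completions request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        *history,
        {"role": "user", "content": user_text},
    ]

# In the real app you would then call something like:
#   client.chat.completions.create(model="gpt-4o",
#                                  messages=build_messages(history, text))
```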

Thanks for your help (again).

What would be ideal here, for task-based development of GPTs that enterprises might use to build operations across applications on OpenAI, would be to expose the underlying chatbot UI library in the GPT itself, so that your Python system prompt could call into it and manipulate the runtime UI.

I may shift over to Assistants API or perhaps set up an app and then use the APIs to call the model backends. We were considering OpenAI Enterprise because of the SSO capability, the walled garden operational structure, and the fact that GPTs would work well for us on smaller, task-based needs.

Maybe that goes into the OpenAI GPT roadmap (parking lot), but this would surely give more capability. We’ll see.

Thank you again for your response and your help. - Cheers