How to make my GPT follow instructions?

I have created several custom GPTs but can't figure out how to make them follow instructions without deviating over the course of a conversation. I also want each of them to adhere to specific criteria and rules when responding to questions or offering analysis, like obtaining real-time information from online resources before responding.

Thanks for any suggestions.

trip


When you try to make a Custom GPT perform web searches, it often ends up summarizing the content and just saying “for more details.”

So, if you want it to use online resources reliably, you may need to employ your own API to fetch the data.
Also, there seem to be guidelines that prevent it from generating a large amount of text in a single conversation turn.

One approach might be to limit the length of each response while emphasizing the important criteria and rules with Markdown, as in the sketch below.
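For instance, here is a hypothetical instruction layout; the role, rules, and exact wording are purely illustrative, not an official template:

```markdown
# Role
You are a market-research assistant.

# Rules (apply to every response)
1. Check current online sources before answering.
2. Keep each response under 300 words.
3. If the user drifts off topic, briefly steer back to market research.

# Output format
- One-line summary first.
- Then bullet-point analysis.
```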
For complex tasks, it might be wiser to handle them server-side via API.
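As an illustration of the server-side approach, here is a minimal sketch of an endpoint that a GPT Action could call. FastAPI and httpx are assumptions on my part, and the route name and upstream data source are hypothetical placeholders:

```python
# Minimal sketch of a server-side endpoint for a GPT Action.
# FastAPI and httpx are assumptions; the upstream URL is a placeholder.
from fastapi import FastAPI
import httpx

app = FastAPI()

@app.get("/quote")
async def quote(symbol: str) -> dict:
    # Fetch real-time data server-side so the GPT receives a compact,
    # pre-processed payload instead of a large raw web page.
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            "https://example.com/api/quote",  # placeholder data source
            params={"symbol": symbol},
            timeout=10.0,
        )
        resp.raise_for_status()
        data = resp.json()
    # Return only the fields the GPT actually needs.
    return {"symbol": symbol, "price": data.get("price")}
```

Returning a small, pre-digested payload keeps the GPT's context short, which also helps it stay on its instructions.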


Thanks for the reply and suggestions.

I’m still under the impression that we can program our GPTs to adhere to specific rules with quoted phrases or something definitive. I’ve created a few that stick to the rules I provide … but that only seems to hold for questions initiated by the conversation starters.

There has to be a simple way to give a GPT a baseline operating logic. Like, always do these things before responding. Or, if the conversation or questions veer from the core functionality, start the conversation over with the starters?

Custom GPTs operate through natural-language instructions, ChatGPT’s browsing capability, DALL-E 3, and communication with external APIs via Actions.

Currently, it is not possible to establish operational logic the way a programming language does, where you can guarantee the rules are always followed.

In practice, when a user becomes frustrated with a Custom GPT, it is often because the GPT has forgotten its role and drifted away from the intended topic.

Instructions like “do not discuss anything else” or “only respond to this topic and steer back to it” may be effective, but they are not guaranteed, and there is always the possibility of deviating from the topic.

Up until May, when GPTs ran on ChatGPT’s version of gpt-4-turbo, they followed programmed instructions quite well. Then the underlying model was switched, and behavior has continued to be affected by changes since.

The current gpt-4o variant (which cannot be selected manually and is what GPTs now use) doesn’t follow instructions with the same quality. Just as when developing with gpt-4o on the API, attention to “you are a GPT, do this” instructions worsens as the conversation grows longer with chat turns and tool returns (like large web pages in raw HTML). Eventually it’s as if you’re basically talking to plain ChatGPT again: the “app” aspect of your GPT is diminished, effectively bypassed, even jailbroken.
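If you are building on the API rather than inside a GPT, one common mitigation is to re-send the system message on every request and keep the visible history short so the instructions stay salient. A minimal sketch, assuming the official openai Python SDK; the model name, system text, and window size are illustrative:

```python
# Sketch: counter instruction drift by re-asserting the system message
# and trimming old turns on every request. Values here are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = "You are a research assistant. Always cite a source. Stay on topic."
MAX_TURNS = 8  # arbitrary window; tune for your context budget

history: list[dict] = []

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    # Put the system prompt first and keep only the most recent turns,
    # since instruction-following tends to degrade as context grows.
    messages = [{"role": "system", "content": SYSTEM}] + history[-MAX_TURNS:]
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```

This doesn’t make the rules binding, but keeping them at the top of a short context is about as close to a “baseline operating logic” as the current models allow.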
