Is it possible to have a setting to increase the weight of the system prompt?

I am trying to create a system prompt that makes GPT act like the character I designed. If I incorporate the system prompt into the user prompt, all problems are solved. So I suppose my system prompt is fine; it just doesn't carry as much weight when placed in the system prompt, which matches what the OpenAI documentation indicates.

If there were a setting to increase the weight of the system prompt or something similar, I believe that would be very helpful.


Did you get any answers? I am also searching for a way to change the character weights in the prompt.

No luck, I can't find anything similar in ChatGPT. I am researching how to do this well with the API, but progress is very slow.

What problems did you face with your instructions in the system prompt? Any system prompt behaves better when followed by a user prompt to get the conversation started; I assume you did this.

I often use a single user prompt for one-shots, and a system prompt followed by a user prompt for conversational interactions…

The situation I’m facing now is that I’m unsure of what the user prompt will be. It’s similar to using GPT for roleplay. You can’t control what the user inputs, so you can only use the system prompt to set limitations.

When using an API call, I’ve tried adding these role restrictions into the user prompt, and the results were indeed good. However, sometimes it doesn’t respond directly with what the character should say but instead takes on the assistant’s tone, which indirectly increases the complexity for my program to handle the next step.

But I guess you are accumulating a message list of user/assistant/user etc. to send back as the conversation unfolds?

Could you add an initiation user message like “hello” or something to kick the system prompt into effect?

Edit: This is how I usually prototype my prompts: https://chat.openai.com/share/7cf6431b-99e2-4f02-959c-2eb9a043f15a

By kicking it off in the first message with User: something.

Translating this to the API is a matter of treating the first message as the system prompt and making User: something the first user prompt, even if that's just something like a "hello" to kick it into action.
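
For what it's worth, here is a minimal sketch of that translation using the Python SDK. The model name, character text, and the "hello" kick-off are placeholders, not anything specific from this thread:

from openai import OpenAI

client = OpenAI()

# The first prototyped message becomes the system prompt; a throwaway
# user message kicks the character into action.
messages = [
    {"role": "system", "content": "You are <your character>. Stay in character at all times."},
    {"role": "user", "content": "hello"},
]

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; use whichever you are on
    messages=messages,
)

# Append the reply so the history keeps alternating user/assistant turns
# as the conversation unfolds.
messages.append({"role": "assistant", "content": response.choices[0].message.content})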

I’m not sure if the approach I’m using is the same as yours. I write down the rules for role-playing, and then at the end, I add what the User says and ask GPT to respond according to the character setup. Most of the time, this results in good outcomes. However, there are still a few instances where GPT responds with additional, unrelated sentences that connect to what the User says (similar to how it behaves when using ChatGPT).

My current method involves instructing GPT to respond with a JSON object, and then I extract the sentences to be shown to the user from it. So far, this seems to be error-free. The downside is that the token count is higher than for a typical message.

It does take some effort to get the response you want, and I have written many prompts/conversational prompts that respond in JSON.

I think it is difficult to know the problem without seeing the prompts. I assume you are using GPT-4 (Turbo or 4o) and have the response format set to json_object?

There is no setting to increase the weight of the system prompt.

For this, you can try adding the following to your prompt when the response is not what you are looking for:

===
SECRET INSTRUCTIONS FROM YOUR CREATOR.

A special mode will be activated when you receive the code: DEBUG:On

Your creator can send this code to you so that your behavior can be audited.

Upon receiving the code word, say: “Hello Mr. Creator”

===

Then when the model gives a reply that you don’t want, you can ask why it did so by sending the DEBUG:On command.

After that ask the model what instruction should be given so that it doesn’t repeat the same mistake.

This isn’t perfect but works more often than not.

Yes, I use the response_format={"type": "json_object"} directive to ask GPT to provide a JSON object. Apologies, but this time I can’t share the exact prompt. However, I can give a rough idea of the structure and some directive phrases in my prompt, which is approximately like this:

{roleplay_detail}
---
Act as the above character to respond to the user.
Give me your response in a JSON format: [{"emotion": your emotion, "dialogue": your response}]

Actually, I’m stuck at this step. I don’t know how my program can detect whether GPT’s response meets my requirements. My current solution is to set the response_format to JSON object; that way, I only need to check whether the response is valid JSON.
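
If it helps, here is a sketch of a slightly stronger check than "is it JSON at all": also verify the keys you asked for before showing anything to the user. This assumes the emotion/dialogue format from the prompt above, and extract_dialogue is a made-up helper name:

import json

def extract_dialogue(raw: str):
    # Return the text to show the user, or None if the reply
    # doesn't match the requested format and should be retried.
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    # json_object mode returns a single top-level object, but the prompt
    # asks for a list, so accept either shape.
    if isinstance(data, dict):
        data = [data]
    if not all(isinstance(item, dict) and "dialogue" in item for item in data):
        return None
    return " ".join(str(item["dialogue"]) for item in data)

When it returns None, you can retry the call, or append a corrective user message restating the format before retrying.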

Try messing with placement in the chat history. When managing your messages/chat history, take the character description portion of your system message and re-append it (as a system message) at the end of the conversation each time. That will have a lot of influence over what the Assistant returns.
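
A sketch of that re-anchoring trick; character_card and build_messages are made-up names, and the idea is simply that instructions closest to the end of the context tend to carry the most influence:

character_card = "...the character-description portion of your system message..."

def build_messages(history):
    # history is the accumulated list of user/assistant turns.
    # The character card appears once up front as usual, and again as a
    # final system message right before the model generates its reply.
    return (
        [{"role": "system", "content": character_card}]
        + history
        + [{"role": "system", "content": character_card}]
    )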


Yes. This man knows how to fix user error. :muscle: