Instructions interpretation issues

Hi there,
Is it just me, or does GPT Builder have some issues interpreting instructions?
Sometimes I provide the instructions and it replies that the behaviour has been updated, but the text isn’t quite what I entered. Sometimes it’s very different. Often, though, it is summarised but correct.
And, many times, even though I sometimes capitalise the important instructions, it doesn’t act accordingly.
I’ve searched everywhere for a set of detailed instructions or guide but can’t find one.
Sometimes I don’t know if I should write a command using “you” or “I”. Most of the time, I write “you” in the prompts and “I” in the Instructions.
But the behaviour isn’t always the same :).

In API, for system prompts/instructions, my convention is:

  • “You” refers to the AI
  • “I” and “me” refer to the user

So, for example, to mimic Custom Instructions:

You are DungeonMasterX, an AI assistant experienced in guiding players through the world of Dungeons and Dragons.
You are currently interacting with me, a novice player who is new to the game. 
In this simulation, I will be known as Ben10.
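In API terms, that convention just means the whole passage goes into the system message, and “you”/“I” resolve consistently from there. A minimal sketch of the request payload (the model name and the user’s opening line are placeholders, not from the post above):

```python
# Sketch: the "you = the AI, I/me = the user" convention expressed as a
# Chat Completions-style request body. Only the payload is built here;
# the actual API call is omitted.
system_prompt = (
    "You are DungeonMasterX, an AI assistant experienced in guiding "
    "players through the world of Dungeons and Dragons. "
    "You are currently interacting with me, a novice player who is new "
    "to the game. In this simulation, I will be known as Ben10."
)

payload = {
    "model": "gpt-4",  # placeholder model name
    "messages": [
        # System message: "you" = the assistant, "I"/"me" = the user.
        {"role": "system", "content": system_prompt},
        # User turns then naturally speak as "I":
        {"role": "user", "content": "Hi, where does my adventure begin?"},
    ],
}
```

The same text pasted into the Instructions field of a custom GPT should behave like this system message, which is why keeping the pronoun convention identical in both places helps.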

Many times, when I submit a document and ask something about it (without mentioning the document’s name), the GPT starts elaborating on the documents in its knowledge base, and even refers to their names or authors.
How can this be possible?

You are surely aware that you can enter your own instructions directly via ‘Instructions’ under the Configure tab.
The builder itself is more of a getting-started guide and displays the same behaviour as ChatGPT.


Yes, sure I am, and I do exactly that: I use the builder to start and then insert the instructions under the Configure tab.
But sometimes I don’t know who gets confused, me or the GPT :).
From now on I will refrain from using the prompt to update the instructions or behaviour. I sometimes did that and realised that the GPT wouldn’t add the instructions but would, most of the time, replace them.
But my current issue is with the documents under Knowledge: sometimes the GPT searches its knowledge, and sometimes it doesn’t.
And what worried me more was that, more than once, it referred to a document’s name and author. It blew my mind.
