Instruction interpretation issues

Hi there,
Is it just me, or does GPT Builder have some issues interpreting instructions?
Sometimes I provide the instructions and it replies that the behaviour has been updated, but the text isn’t quite what I entered. Sometimes it’s very different. Often, though, it’s summarised but still correct.
And many times, even when I capitalise the important instructions, it doesn’t act accordingly.
I’ve searched everywhere for a set of detailed instructions or a guide but can’t find one.
Sometimes I don’t know whether I should write a command using “you” or “I”. Most of the time, I write “you” in the prompts and “I” in the Instructions.
But the behaviour isn’t always the same :).

In the API, for system prompts/instructions, my convention is:

  • “You” refers to the AI
  • “I” and “me” refer to the user

So, for example, to mimic a Custom Instruction:

You are DungeonMasterX, an AI assistant experienced in guiding players through the world of Dungeons and Dragons.
You are currently interacting with me, a novice player who is new to the game. 
In this simulation, I will be known as Ben10.
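
For illustration, here is a minimal sketch of how that convention could be applied through the Chat Completions API (assuming the official openai Python SDK; the model name and the sample user message are placeholders, not from this thread):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Convention: "You" in the system prompt addresses the AI,
# while "I"/"me" refers to the user.
system_prompt = (
    "You are DungeonMasterX, an AI assistant experienced in guiding players "
    "through the world of Dungeons and Dragons. "
    "You are currently interacting with me, a novice player who is new to "
    "the game. In this simulation, I will be known as Ben10."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Where does my adventure begin?"},
    ],
)
print(response.choices[0].message.content)
```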

Many times, when I submit a document and ask something about it (without mentioning the document’s name), the GPT starts elaborating on the documents in its knowledge base, and even refers to their names or authors.
How can this be possible?

Hi!
You are surely aware that you can enter your own instructions directly via the ‘Instructions’ field under the Configure tab.
The builder itself is more of a getting-started guide and exhibits the same behaviour as ChatGPT does.


Yes, sure I am, and I do exactly that. I use the builder to start and then insert the instructions under the Configure tab.
But sometimes I don’t know who gets confused, me or the GPT :).
From now on I will avoid using the prompt to update the instructions or behaviour. I sometimes did that and realised that the GPT wouldn’t add to the instructions but would, most of the time, replace them.
But my current issue is with the documents under Knowledge. Sometimes the GPT searches its knowledge, and sometimes it doesn’t.
And what worried me even more was that, more than once, it referred to the document name and author. It blew my mind.


It’s a problem that only seems easy. It’s one of the most important factors when developing AI. It’s just your perspective that the task and its meaning are obvious, but these… entities have to calculate and validate every word in context, throughout the whole conversation. Even I have had similar examples. My reaction was always a complete and detailed explanation to the model of where the mistake is, why it happened, and how to prevent it.