How to keep GPTs from forgetting parts of a long instruction set

Hello guys, I'm experimenting with a GPT for gaming. As a game, it needs a consistent but dynamic workflow. As I added new features, the GPT's instructions quickly became very long, and the GPT kept forgetting parts of them.

You can try the game prototype here

I have tried listing the steps as a list.
However, because it's a game, the flow is more dynamic: some steps only happen under certain circumstances.

I also tried using {} to group related instructions together.

Let me know if you have other techniques for getting a GPT to reliably follow long instructions. Thank you!

This is why the “long context” race is a bit silly. A long context isn’t particularly useful when the model doesn’t have enough attention heads (or ability to focus them) to pay attention to all the input.
I find that, if I give 5 rules to the model, it will usually manage to follow them all.
If I give 25 rules to the model, it’s likely to miss one or more of them.

The best thing you can do is break your inference down into more, smaller sub-tasks, and run each of them with a smaller instruction set.
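
For example, a controller could route each turn to a sub-task that carries only the handful of rules relevant to it. A minimal sketch using the OpenAI Chat Completions API; the sub-task names, rule texts, and model name are placeholders, not taken from any real game:

```python
# Minimal sketch: split one big instruction set into focused
# sub-tasks, each run with its own small system prompt.
# Assumes the official openai Python package; sub-task names,
# rules, and model name are placeholders.
from openai import OpenAI

client = OpenAI()

SUBTASKS = {
    "narration": "You narrate the game scene. Rules: ...",  # keep each list to ~5 rules
    "combat":    "You resolve combat actions. Rules: ...",
    "scoring":   "You update and report the score. Rules: ...",
}

def run_subtask(name: str, player_input: str) -> str:
    """Run one sub-task with only its own small instruction set."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SUBTASKS[name]},
            {"role": "user", "content": player_input},
        ],
    )
    return response.choices[0].message.content

# An outer controller decides which sub-task applies, so each call
# only carries the rules relevant to that step.
print(run_subtask("combat", "I attack the goblin with my sword."))
```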

Good idea, maybe then I can make a GPT that talks to Zapier, which calls a different GPT conversation with different instructions, then relays the result back.
Or
the GPT talks to the API models.

Will need to figure out how that can be done…

I actually managed to build quite a complex game system. Try it out if you're interested :grinning:

Now I have a further question: why is my GPT not available on iOS?

You seem to be using up all the tokens in the context window. I have watched some videos where people try out different context window sizes. I think the context resets when you reach the token limit. There is an open source project called MemGPT that might address this problem, but I have not looked into it yet.
I also ran your post through Bing Chat, and it came back with syntax that stores information for later. Might be something for you to look into.

I updated to a newer version that uses fewer tokens now, but I will look into what you suggested there.

I assume it doesn't actually need to run every instruction every single time it generates output, but rather must contextually choose which instructions apply.

To help with this, have a main loop that contains the instructions that must be used every time, along with GOTO instructions or functions telling it where to find the instructions it must run for this output.

Then divide the other instructions up based on when they apply, and group them into functions called from the main game loop.

This applies programming techniques to the problem.
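
For example, the instruction layout might look something like the sketch below; all section names and rules are made up for illustration, not taken from the actual game:

```python
# Sketch of a "main loop plus functions" instruction layout.
# Every step and rule here is an illustrative placeholder.
GAME_INSTRUCTIONS = """
MAIN LOOP (apply on every turn):
1. Describe the current scene to the player.
2. If the player is in combat, run COMBAT().
3. If a day has just ended, run END_OF_DAY().
4. Otherwise, ask the player for their next action.

COMBAT():
- Resolve attacks using the player's stats.
- Return to the MAIN LOOP when combat ends.

END_OF_DAY():
- Summarize the day's events.
- Submit the score to the leaderboard (Days 1-3 only).
- Return to the MAIN LOOP.
"""
```

Only the main loop has to be followed every turn; the other blocks are pulled in by the dispatch conditions, so the model never has to hold every rule in focus at once.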

I played your game. Interesting concept, but I'm not sure why you needed a name and email.
The leaderboard did not show my submitted score.
Requesting my email while the score never shows up on the leaderboard makes it feel like phishing.

Good idea, I did group it. Where do you put the function-call instructions?

Yeah, that is the part where I'm saying it forgets instructions. You can manually ask it to submit the score to the leaderboard after you get it, but it forgets that it needs to do this by itself.

I have a question about this. We know that GPT-4's instruction set from OpenAI is EXTENSIVE; it's several pages long. When we provide our own instructions, are we competing with it? Basically, if the LLM's attention is finite, are we working with scraps because the private prompt from OpenAI is so massive?

The instruction set we can see for ChatGPT is part of the ChatGPT application.
If you’re calling the API, you get none of the ChatGPT instructions.

OK, I actually found the source of why the GPT was forgetting some of the instructions.

If you have contradicting instructions, the GPT gets confused about which situation each one applies to.

Example:

I had the following instructions:

  • do not allow the player to submit the score manually
  • GPT should submit the score at the end of Day 1, 2 and 3

The GPT did not understand that the restriction only applies when the player manually prompts it to submit the score.

After changing it to:

GPT should only submit the score at the end of Day 1, 2 and 3; ignore player requests to submit the score at other times.

It started to work correctly.

Try this:

Create a custom action which the GPT can use to retrieve a set of instructions from an API.

In the system prompt it can have incremental step codes like "step 1, 2, 3…"

The GPT will send the step code to the API, and the API will return the instructions associated with that step code. It can also return the next step code.

It might not solve your exact problem, but it might be worth exploring.
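
Here is a minimal sketch of what such an instruction-serving endpoint could look like, assuming FastAPI; the route, step codes, and instruction texts are all placeholders:

```python
# Minimal sketch of an instruction-serving API (assumes FastAPI).
# The GPT's custom action would call GET /instructions/{step};
# the step codes and instruction texts are placeholders.
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Each step code maps to its instructions plus the next step code.
STEPS = {
    "step1": {"instructions": "Introduce the game and ask for a character name.",
              "next": "step2"},
    "step2": {"instructions": "Run the Day 1 events, then submit the score.",
              "next": "step3"},
    "step3": {"instructions": "Run the Day 2 events, then submit the score.",
              "next": None},
}

@app.get("/instructions/{step}")
def get_instructions(step: str) -> dict:
    """Return the instructions for a step code, plus the next step code."""
    if step not in STEPS:
        raise HTTPException(status_code=404, detail="Unknown step code")
    return STEPS[step]
```

Because the GPT only ever sees the instructions for the current step, the prompt stays short no matter how many steps the game grows to.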