How are custom instructions preserved in the context window?

ChatGPT’s custom instructions allocate 1500 characters to “what you would like ChatGPT to know about you” and 1500 characters to “how you would like ChatGPT to respond.”

How does the system preserve custom instructions as the conversation unfolds? Since the context window constantly sheds the oldest tokens as the conversation grows, I’m wondering whether the ChatGPT wrapper is regularly reinjecting these instructions into the context under the hood. It could happen with each interaction, or perhaps every x number of tokens. Does anyone have a clue how this might work?

It is an additional system message that, like the “You are ChatGPT” system message, is persistently placed at the start of the message list on every call, before the chat-history replay:

The user provided the additional info about how they would like you to respond:
(whatever you write)
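Nobody outside OpenAI can confirm the exact mechanics, but here is a minimal sketch of that reinjection pattern using the public Chat Completions API. The model name, the `chat_turn` helper, and the keep-the-last-20-messages truncation policy are all placeholders of mine, not anything OpenAI has documented:

```python
# Illustrative only: a hypothetical wrapper that rebuilds the prompt on
# every turn, so the two system messages never age out of the window.
from openai import OpenAI

client = OpenAI()

BASE_SYSTEM = "You are ChatGPT, a large language model trained by OpenAI."
CUSTOM_INSTRUCTIONS = (
    "The user provided the additional info about how they would like you "
    "to respond:\n(whatever you write)"
)

def chat_turn(history: list[dict], user_msg: str) -> str:
    history.append({"role": "user", "content": user_msg})
    messages = [
        {"role": "system", "content": BASE_SYSTEM},          # always first
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},  # always second
    ] + history[-20:]  # placeholder policy: only the history gets trimmed
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```

The point is that truncation only ever applies to the replayed history; the system messages are re-sent verbatim with every request, so nothing needs to be “reinjected every x tokens.”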

Pro tip: if the character limit is what’s constraining you, write in Chinese; each character carries more meaning, so you use up ChatGPT’s tokens instead of your characters.
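To see the tradeoff concretely, here’s a quick illustrative comparison using the open-source tiktoken library (that custom instructions are tokenized with cl100k_base exactly like chat text is my assumption):

```python
# Compare character count vs. token count for equivalent English/Chinese text.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4

english = "Always answer with a focus on epidemiology and public health."
chinese = "回答时始终侧重于流行病学和公共卫生。"  # rough translation of the above

for text in (english, chinese):
    print(f"{len(text):3d} chars, {len(enc.encode(text)):3d} tokens: {text}")
```

The Chinese version uses roughly a third of the characters, so more instructions fit under the 1500-character cap, even though the model may spend as many or more tokens reading them.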


That’s exactly what I was wondering as well. I thought perhaps the custom instructions were kept in external memory, outside the token window, and appended to it at the beginning of each session; then I wondered whether they’re referenced at each and every query. I love the feature, because I get far better replies that are applicable to my background (medicine/epidemiology/public health). It’s the reason I find 3.5 far more useful than 4.0, and it also makes for a far more personable chat agent.
