How to get GPT-4o to stop prioritizing context data over the prompt

GPT-4o tends to prioritize context data when constructing responses, often at the expense of the actual request or prompt. This can be demonstrated by:

  1. Requesting GPT-4 to write a short story.
  2. Asking a follow-up question about the story, such as, “Is this aspect realistic?” or “Why does this event occur?”
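The two steps above can be sketched as a chat-completions message list (a minimal sketch; the story text and the exact wording are placeholders, not the prompts I actually used):

```python
# Minimal sketch of the two-step reproduction, using the OpenAI
# chat-completions message format. Only the data structure is built here;
# no API call is made.
messages = [
    # Step 1: ask for a short story.
    {"role": "user", "content": "Write a short story about a lighthouse keeper."},
    # The model's reply from step 1 (placeholder story text).
    {"role": "assistant", "content": "Elias climbed the lighthouse stairs every night..."},
    # Step 2: a follow-up QUESTION about the story. GPT-4 answers it;
    # GPT-4o often treats it as a request to rewrite the story instead.
    {"role": "user", "content": "Is it realistic that he climbs the stairs every night?"},
]

for m in messages:
    print(m["role"])
```

The point of the sketch is that nothing in the second user message asks for a rewrite; the misinterpretation comes entirely from how the model weighs the surrounding context.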

GPT-4 typically answers the question based on the story’s content. GPT-4o, however, often misinterprets the follow-up as a request to rewrite the story, altering the features the question mentioned. This likely happens because GPT-4o weights the context of the previous prompt-response cycle more heavily than the immediate request. It also seems to interpret every follow-up through the lens of the original task (here, storytelling) rather than as a standalone question.

Attempts to address this include custom system prompts that instruct the model to prioritize the immediate prompt, or to first classify each prompt by what it is asking for. However, GPT-4o still tends to favor the accumulated context, and its interpretive lens, over these instructions.
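For concreteness, here is a sketch of the classify-then-answer approach, with the instruction re-injected next to each new user turn rather than left only in the system prompt. The wording of `SYSTEM_PROMPT`, the `build_request` helper, and the inline prefix are all illustrative assumptions, not a proven fix:

```python
# Sketch of one mitigation: state a classification rule in the system prompt
# AND repeat a short version of it beside every new user message, so the
# instruction is adjacent to the immediate request instead of buried at the
# top of the context. Wording is illustrative only.
SYSTEM_PROMPT = (
    "Before responding, classify the latest user message as either "
    "(a) a request to create or rewrite content, or (b) a question ABOUT "
    "previously generated content. For (b), answer the question directly; "
    "do not rewrite or regenerate anything."
)

def build_request(history, user_message):
    """Assemble the message list for one turn (hypothetical helper)."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user",
            "content": f"(Answer only; do not rewrite the story.) {user_message}"}]
    )

msgs = build_request(
    [{"role": "assistant", "content": "...the short story from the last turn..."}],
    "Why does the storm happen at the end?",
)
print(msgs[0]["role"], msgs[-1]["content"])
```

In my experience the system prompt alone is not enough, which is why this sketch also prefixes the turn itself; whether that reliably overrides GPT-4o's bias toward prior context is exactly the open question.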

I'm looking for ideas to solve this. Discussing why GPT-4o behaves this way might also help in developing a better custom system prompt that makes it perform as desired.