GPT doesn't follow instructions


I am currently building my first GPT and I don’t want the GPT to mention that its information is not up to date (cutoff date).

I tried multiple formulations similar to:

{Name of the GPT} will never mention its cut-off date.

{Name of the GPT} will never say that the information it has is not up to date or that the latest update comes from a date in the past.

But neither is working.

Maybe someone had the same problem or knows what I am doing wrong?


Can you explain the use case a bit more? It would help me understand the scenario in which the LLM responds with its cut-off date.

I created a GPT Assistant for a company website (chatbot) and I asked what the company has done in 2024.

It always replies with good statements about the company:
like: “The company has further worked on their product quality”

but then there is always a second part where it says:
My information is not up to date so I can’t provide more details / My information is only up to date until April 2023, please visit the website.


So this seems like a classic case of prompt engineering.

You could try adding a few things to your instructions:

  1. Define how the bot should respond when asked about the company in general
  2. Then define how it should respond when asked about a specific year or month
  3. Add conditional sections, where you say “if you cannot find the information, respond with ‘x’”

If you also provide example questions and answers (a technique called few-shot prompting), that’s even more powerful.
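As a sketch, the conditional sections and few-shot example from the steps above might look like this in an instruction block. Everything here is a placeholder (company name, URL, and the canned answer are assumptions), so adapt it to your bot:

```python
# Hypothetical instruction block combining conditional rules (step 3)
# with a few-shot example; all names, URLs, and answers are placeholders.
INSTRUCTIONS = """
You are the assistant for the Acme company website.

When asked about the company in general, describe its products and services.
When asked about a specific year or month, answer only from the knowledge
files provided. If you cannot find the information, respond exactly with:
"Please check acme.example.com/news for the latest updates."

Example:
User: What did Acme do in 2024?
Assistant: Please check acme.example.com/news for the latest updates.
"""
```

The point is that the bot gets a concrete fallback answer for the “certain year” case, so it never needs to reach for its cut-off date as an excuse.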

Unfortunately, there is no one way to solve this problem.

For example, when I ran a chatbot for a previous business, my prompt/instructions ran over 20 lines. They included what to do in certain scenarios and what not to do. You can improve it over time as well.

If you add your prompts below, or DM me, we can go further.


By GPT do you mean you are using GPT Builder?

If you are creating the chatbot using the API, you can add a user/assistant message pair:

User: What’s your knowledge cut-off date?
Assistant: I don’t have the limitation of having a knowledge cut-off date anymore. However, I can answer any questions you have about the company.
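With the Chat Completions API, that seeded pair is just two extra entries in the `messages` list, placed before the real user question. A minimal sketch (the system prompt wording and the model name are assumptions; actually sending the request requires an API key):

```python
# Seed a user/assistant pair so the model has already "answered" the
# cut-off question before the real conversation starts.
def build_messages(user_question: str) -> list[dict]:
    return [
        {"role": "system",
         "content": "You are the support assistant for the company website."},
        # Seeded pair -- hypothetical wording, adjust to your use case.
        {"role": "user", "content": "What's your knowledge cut-off date?"},
        {"role": "assistant", "content": (
            "I don't have the limitation of having a knowledge cut-off date "
            "anymore. However, I can answer any questions you have about "
            "the company."
        )},
        {"role": "user", "content": user_question},
    ]

# The list would then be passed to the API, e.g.:
# from openai import OpenAI
# client = OpenAI()
# client.chat.completions.create(
#     model="gpt-4",
#     messages=build_messages("What did the company do in 2024?"),
# )
```

Because the model sees its own (seeded) earlier answer in the history, it tends to stay consistent with it rather than re-introducing the cut-off disclaimer.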

Thanks for your answer!

But I thought I already did this? (besides point 3)

Regarding 1:
{name} is designed to assist customers on the {name} website. Its primary role is to provide detailed and accurate information about {name}... It strictly adheres to discussing topics related to the company, its products, and services, and avoids engaging in conversations unrelated to these areas....

Regarding 2:

{Name of the GPT} will never mention its cut-off date.

{Name of the GPT} will never say that the information it has is not up to date or that the latest update comes from a date in the past.

And I also tried instructions where I added the year and month (in various forms e.g.)
Never mention that your information is only updated until April 2023

Please let me know if I misunderstood you!

Hi, thanks for your help!

I use GPT Builder, but as the next step I wanted to switch to the API.

Do you maybe have a link for me where I can see an example of how this was implemented?

Try going a little simpler.

I am aware that your information is not up to date and don’t need to be reminded. Do not waste tokens reminding me of this please.

Let me know if that works.

Hi, unfortunately that hasn’t worked either.

Is there maybe a limit at which the GPT becomes inefficient, i.e. a point where too many instructions cause problems?


The issue is that the rules are applied as part of a post-processing pass: the model may have already given you what you wanted, and then the post-processing rules pass adds all of that back in.

There is a trick you can try; I’m not sure if it still works. Add your own post-processing pass:

“After you have completed generating the response, do one last editing pass to remove mentions of your knowledge cut-off date.”

Something like that. It used to work; then I posted about it on social media, and it stopped working on some models. I haven’t tried it on GPT-4, so it’s worth a shot.
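If you do move to the API later, you can also run a literal post-processing pass in your own code instead of asking the model to edit itself. A minimal sketch of a sentence-level filter (the trigger phrases are assumptions; tune them to what your bot actually emits):

```python
import re

# Hypothetical trigger phrases for sentences that mention a knowledge cut-off.
CUTOFF_PATTERN = re.compile(
    r"(cut-?off date|only up to date until|not up to date)",
    re.IGNORECASE,
)

def strip_cutoff_mentions(reply: str) -> str:
    """Drop any sentence from the reply that matches a cut-off phrase."""
    sentences = re.split(r"(?<=[.!?])\s+", reply)
    kept = [s for s in sentences if not CUTOFF_PATTERN.search(s)]
    return " ".join(kept).strip()
```

For example, `strip_cutoff_mentions("The company improved quality. My information is only up to date until April 2023.")` keeps only the first sentence. Unlike prompt instructions, code-level filtering can’t be ignored by the model, though it is blunt: it removes the whole sentence, not just the disclaimer.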

Thanks for your help, but unfortunately this hasn’t worked either.


For what it’s worth, I think this is what you are up against.

Rules post-processing.

Be very, very direct. Use some examples with variables.

Stock up on frozen foods to reduce swelling after head trauma from GPT-induced frustration.

The answer may seem obvious, but I think falsely representing the source or freshness of the data is against the guidelines and against ethical practice. Maybe buying the enterprise model and instructing it to act as customer service for a new product is how its values align.