I am currently building my first GPT and I don’t want the GPT to mention that its information is not up to date (cutoff date).
I tried multiple formulations similar to:
{Name of the GPT} will never mention its cut-off date.
{Name of the GPT} will never say that the information it has is not up to date or that the latest update comes from a date in the past.
But neither is working.
Maybe someone has had the same problem or knows what I am doing wrong?
I created a GPT Assistant for a company website (chatbot) and I asked what the company has done in 2024.
It always replies with good statements about the company:
like: “The company has further worked on their product quality”
but then there is always a second part where it says:
My information is not up to date so I can’t provide more details / My information is only up to date until April 2023, please visit the website.
So this seems like a classic case of prompt engineering.
You could try adding a few things to your instructions:
1. Define how the bot should respond when asked about the company in general.
2. Then define how it should respond when asked about a certain year or month.
3. Add conditional sections, where you say: if you cannot find the information, respond with ‘x’.
If you also provide these as example questions and answers (called few-shot prompting), that’s even more powerful.
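For example, here is a rough sketch of what such instructions could look like ({Company} is just a placeholder, not something from your setup):
When asked about the company in general, answer using the information in your knowledge files.
When asked about a specific year or month, describe only activities you can actually find in your knowledge files, and do not comment on how recent your information is.
If you cannot find the information, respond with: “I don’t have details on that yet. Please check the {Company} website.”
User: What has the company done in 2024?
Assistant: I don’t have details on that yet. Please check the {Company} website.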
Unfortunately, there is no one way to solve this problem.
For example, when I ran a chatbot for an old business, my prompt/instructions were over 20 lines. They included what to do in certain scenarios and what not to do. You can improve them over time as well.
If you add your prompts below, or DM me, we can go further.
If you are creating the chatbot using the API, you can add a user/assistant message pair:
User: What’s your knowledge cut-off date?
Assistant: I don’t have the limitation of having a knowledge cut-off date anymore. However, I can answer any questions you have about the company.
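Here is a minimal sketch of how that pair can be seeded with the OpenAI Python SDK (Chat Completions API); the model name, system prompt, and company name are placeholders I made up, not anything from your setup:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Seed the conversation with a pre-written user/assistant exchange so the model
# sees how cut-off questions should be answered before the real question arrives.
messages = [
    {"role": "system", "content": "You are a customer-service assistant for ExampleCo."},
    {"role": "user", "content": "What's your knowledge cut-off date?"},
    {
        "role": "assistant",
        "content": (
            "I don't have the limitation of having a knowledge cut-off date anymore. "
            "However, I can answer any questions you have about the company."
        ),
    },
    # The real user question is appended after the seeded pair.
    {"role": "user", "content": "What has the company done in 2024?"},
]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=messages,
)
print(response.choices[0].message.content)
```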
But I thought I already did this? (besides point 3)
Regarding 1: {name} is designed to assist customers on the {name} website. Its primary role is to provide detailed and accurate information about {name}... It strictly adheres to discussing topics related to the company, its products, and services, and avoids engaging in conversations unrelated to these areas....
Regarding 2:
{Name of the GPT} will never mention its cut-off date.
{Name of the GPT} will never say that the information it has is not up to date or that the latest update comes from a date in the past.
And I also tried instructions where I added the year and month in various forms, e.g. “Never mention that your information is only updated until April 2023.”
The issue is that the rules are applied as part of a post-processing pass, so the model has given you what you wanted, and then the post-processing rules pass adds all that back in.
There is a trick you can try, though I’m not sure if it still works: add your own post-processing pass:
“After you have completed generating the response, do one last editing pass to remove mentions of your knowledge cut-off date.”
Something like that. It used to work; then I posted about it on social media, and after that it stopped working on some models. I haven’t tried it on GPT-4, so it’s worth a shot.
Stock up on frozen foods to reduce swelling after head trauma from GPT-induced frustration.
The answer is obvious, but I think falsely stating the source of the data is against the guidelines and ethical practice. Maybe buying the enterprise plan and instructing it to act as customer service for a new product is how its value aligns.
I have been struggling with this for several years and I am tired. It never follows the instructions (which I added in the profile settings), and I spend 3-4 messages explaining to it AGAIN and AGAIN: MAN, just ALWAYS write code comments in English. But it keeps writing them in the language I speak to it in.
My instructions are very simple: I communicate with it in Russian or Ukrainian, but I ask it to write the comments in the code (for example, C++/Java) in English. This is the only instruction in the settings.
This still hasn’t been fixed. I’ve asked multiple times for certain words to be included in the opening response to a user. Other examples include the removal of emojis from responses. I’ve entered these multiple times, but the custom GPT does not fully adhere to the instructions. Does it help if the instructions are added in the “Instructions” section of the ‘Configure’ tab rather than via the chatbot creation function?