LLM differentiates between "is" and "was"

I am using gpt-3.5 with 16K context. I asked a question:

"What is the interest rate in May 2023?" - The answer was there in the PDF, but the model said it did not have the context.

I then asked "What was the interest rate in May 2023?", and it answered correctly. I wanted to see if anyone knows an explanation for this. Does semantic matching also take past/future tense into consideration?

You are basically interacting with the fine-tuning that has been applied to the model, not the underlying weights and operations.

You can enjoy two different kinds of fine-tuned denials from gpt-3.5-turbo:

What will be the hottest Christmas toy trend?

ChatGPT
I’m sorry, but I don’t have access to information about future events or trends beyond my knowledge cutoff date, which is current until September 2021. Predicting the hottest Christmas toy trend for a specific year would require up-to-date information

compare to another canned answer:

What was the hottest Christmas toy trend?

ChatGPT
I’m sorry, but I don’t have access to real-time information as my knowledge only goes up until September 2021. Christmas toy trends can vary from year to year, and the hottest toy of a particular Christmas season can change based on popular movies, TV shows, and other cultural influences. To find out the hottest Christmas toy trend for a specific year, I would recommend checking the latest news, toy industry reports, or popular retail websites closer to the current year’s holiday season.

If you want an AI that is not a stinking steamer of an adversary, you can go to the soon-to-be-killed text-davinci-003 for your AI research:

AI is a helpful assistant, answering the user’s questions.

User: What was the hottest Christmas toy trend?
AI:
The hottest Christmas toy trend in 2020 was interactive toys such as robots, drones, and voice-activated devices. Other popular toys included board games, LEGO sets, and video games.

Any input changes to the AI will alter the course of the output.

Very intriguing. If you were using RAG, you can check whether the retrieved context was actually loaded into the prompt in each of the two calls.

Only if it was loaded in both cases can this be chalked up to the model's inclination to answer past-tense requests over present-tense requests, since the past tense would imply the model was trained on that data.
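As a rough way to see whether tense alone can shift retrieval, you can embed both phrasings and compare their similarity to the stored chunk. This is a minimal sketch using a toy bag-of-words "embedding" as a stand-in; a real pipeline would call the same embedding model that built the index, but the effect is the same in kind: a past-tense chunk overlaps more with a past-tense query.

```python
import math
from collections import Counter

def toy_embed(text):
    # Toy bag-of-words vector; a stand-in for a real embedding model.
    return Counter(text.lower().replace("?", "").split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical indexed chunk, phrased in the past tense like the PDF.
chunk = toy_embed("The interest rate in May 2023 was 4.5 percent")
q_is = toy_embed("What is the interest rate in May 2023?")
q_was = toy_embed("What was the interest rate in May 2023?")

print(cosine(q_is, chunk))   # lower: no shared "was" token
print(cosine(q_was, chunk))  # higher: past tense matches the chunk
```

Here the "was" query scores higher against the past-tense chunk, so a retrieval similarity threshold sitting between the two scores would return the context for one phrasing and drop it for the other.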
