Subject: Feedback on Recurring Error in AI Response

Dear Support Team,

I would like to report a recurring issue I’ve encountered with the AI’s responses. Specifically, the AI has been incorrectly referring to Donald Trump as a “former U.S. President,” even though he is currently serving as President of the United States after his second inauguration in January 2025. This error has occurred multiple times in my interactions (at least four instances) and is creating confusion in the context of our conversation.

While I understand that AI is a complex system, I believe it is crucial for such factual inaccuracies to be addressed, especially when they relate to real-time political events. Given that the error has happened repeatedly, I suggest that it be looked into and corrected to ensure more accurate and up-to-date responses.

Thank you for considering this feedback. I hope it can help improve the AI’s performance.

Best regards,
Sylvestre Conceicao


What version are you using? Each model is trained on a specific data set. Earlier models like 3.5, and early versions of 4, would not just ‘know’ who the current president is; they would assume things based on the data sources they were trained on.


This stuff is not magic!

LLMs are trained up to a specific date and are not updated regularly, let alone in real time!

You cannot expect them to know the future (from their perspective).

This is a basic limitation.

One strategy is to enable search and craft a prompt that encourages the LLM to use it.
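For example, here is a minimal sketch of that pattern in Python using the OpenAI SDK. The `web_search()` helper is a hypothetical stand-in for whatever search API you have access to, and the model name is just a placeholder:

```python
from openai import OpenAI

client = OpenAI()

def web_search(query: str) -> str:
    # Hypothetical stand-in: in practice, call a real search API here
    # and return the top result snippets as plain text.
    return "Example snippet text (replace with real search results)."

def ask_with_search(question: str) -> str:
    # Fetch fresh, post-cutoff information and inject it into the prompt.
    snippets = web_search(question)
    prompt = (
        "Answer the question using ONLY the search results below. "
        "If they do not contain the answer, say so.\n\n"
        f"Search results:\n{snippets}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any current chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The instruction to rely only on the search results is what keeps the model from falling back on its stale training data.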

Thank you for the information! This gives me a better understanding of the AI’s knowledge base. Just to be clear, since your AI is trained up until June 2024, does that mean you’re up-to-date with all events and developments until this time?


Think of an LLM as a highly (!) compressed and slightly unreliable store of all (well, a lot of!) text up to the training date.

During the training process, things can become fuzzy. The compression is lossy.

It is fallible and prone to a little randomness. You cannot fully guarantee anything it says is correct. However, you can reduce the error rate significantly through prompting strategies like RAG (retrieval-augmented generation), including the use of tools like search.
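For illustration, here is a bare-bones RAG loop in Python: embed a handful of local documents, retrieve the one most similar to the question, and hand it to the model as context. The document contents and model names are placeholder assumptions, not anything specific to a real deployment:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

# Toy document store; in practice this would be your own knowledge base.
docs = [
    "Company handbook: support tickets are triaged within 24 hours.",
    "Release notes: version 2.1 adds offline mode and a dark theme.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)

def answer(question: str) -> str:
    q_vec = embed([question])[0]
    # Cosine similarity: pick the stored document closest to the question.
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    context = docs[int(np.argmax(sims))]
    prompt = f"Using this context:\n{context}\n\nAnswer the question: {question}"
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

Grounding the model in retrieved text does not make it infallible, but it gives you something checkable: you can compare the answer against the context the model was shown.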

Everything it writes is a context-appropriate, randomised mashup of things that came before, and it may not be regurgitated in quite the way you expect.

Do not take anything it says as gospel, and always check its answers against reliable sources.