Eradicating irritating GPT-3 errors

If you ask GPT-3 when GPT-4 is launching, it comes back with this:

GPT-4, the fourth version of OpenAI’s Generative Pre-trained Transformer model, is scheduled to be released to the public in the second half of 2021.

Can someone (or a friendly bot) fix this? But also, is it possible to eradicate these errors somehow, for example through training?

There is no way for a GPT model trained on data up to 2021 to generate accurate text in response to a question about events in 2023.

It’s simply impossible, and of course you will get hallucinations if you ask such questions. Every GPT model has the same limitation: it cannot know anything beyond its training cutoff, so it will confidently invent an answer instead.
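Rather than trying to train the errors away, one practical mitigation is an application-layer guard: detect questions that mention dates past the model's training cutoff and refuse them before they ever reach the model. The sketch below is hypothetical (the cutoff year and the `safe_ask` / `beyond_cutoff` helpers are illustrative assumptions, not part of any OpenAI API):

```python
import re

# Assumed training cutoff for this sketch; GPT-3-era models
# were trained on data up to roughly 2021.
TRAINING_CUTOFF_YEAR = 2021

def beyond_cutoff(question: str, cutoff: int = TRAINING_CUTOFF_YEAR) -> bool:
    """Return True if the question mentions a year after the cutoff."""
    years = [int(y) for y in re.findall(r"\b(19\d{2}|20\d{2})\b", question)]
    return any(y > cutoff for y in years)

def safe_ask(question: str, model_call=lambda q: "(model answer)") -> str:
    """Refuse post-cutoff questions; otherwise forward to the model.

    `model_call` stands in for the real GPT-3 request, which is
    out of scope for this sketch.
    """
    if beyond_cutoff(question):
        return ("I was trained on data up to 2021, so I can't reliably "
                "answer questions about later events.")
    return model_call(question)
```

This only catches explicit year mentions, so it is a cheap filter, not a fix: questions that imply the future without naming a date would still slip through to the model.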