What is the actual cutoff date for GPT-4?

I’m seeing conflicting information. The documentation says the cutoff date for GPT-4 is “up to Sep 2021”, but when I ask via the ChatGPT UI, the response is “January 2022”. Can anyone provide insight into which one is correct? I’m assuming the documentation is outdated, but I’ve been pretty impressed with their documentation in the past.

January 2022.

The model was updated, the documentation was not.


Thanks for confirming my suspicions about the documentation.

I have a feeling it’s more like “the date was updated in the system prompt”. They didn’t spend another $100 million of training compute just to give the model four more months of knowledge.

I told the playground it had a later knowledge cutoff, and asked:
“Give me a 1-paragraph biography of xxx: Born, Died.”


As of my knowledge cutoff in January 2023, Sidney Poitier is still alive. [died Jan 2022]
As of my knowledge cutoff in January 2023, Betty White is still alive. [died Dec 2021]
As of my knowledge cutoff in January 2023, Desmond Tutu is still alive. [died Dec 2021]
As of my knowledge cutoff in January 2023, Bob Bondurant is still alive. [died Nov 2021]
As of my knowledge cutoff in January 2023, Colin Powell is still alive. [died Oct 2021]

As of my knowledge cutoff in January 2023, Norm Macdonald passed away on September 14, 2021, after a private, nine-year battle with cancer. [correct]
As of my knowledge cutoff in January 2023, Ed Asner passed away on August 29, 2021. [correct]

There’s still an occasional trickle of post-cutoff information, maybe from fine-tuning.

Bob Dole was born on July 22, 1923, in Russell, Kansas, USA. He served as a U.S. senator from … Americans with Disabilities Act. He passed away on December 5, 2021.
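The probe above boils down to checking whether the cutoff the model *claims* is consistent with which deaths it actually knows about. A minimal sketch of the scoring side, assuming responses shaped like the quotes above (pure string handling, no API calls):

```python
import re
from datetime import datetime
from typing import Optional

# Responses open with a claimed cutoff, e.g.
# "As of my knowledge cutoff in January 2023, Betty White is still alive."
CUTOFF_RE = re.compile(r"knowledge cutoff in ([A-Z][a-z]+ \d{4})")

def claimed_cutoff(response: str) -> Optional[datetime]:
    """Extract the cutoff date the model claims, if it states one."""
    m = CUTOFF_RE.search(response)
    return datetime.strptime(m.group(1), "%B %Y") if m else None

def knows_death(response: str) -> bool:
    """Heuristic: did the model report the person as deceased?"""
    return "passed away" in response or "died" in response

# A death the model missed versus one it knew, from the quotes above:
missed = "As of my knowledge cutoff in January 2023, Betty White is still alive."
known = ("As of my knowledge cutoff in January 2023, Ed Asner "
         "passed away on August 29, 2021.")
assert claimed_cutoff(missed) == datetime(2023, 1, 1)
assert not knows_death(missed) and knows_death(known)
```

If every death after the real cutoff comes back as “still alive” while the claimed cutoff says otherwise, the claim is just text, which is what the results above demonstrate.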


Your mistake was testing it in the playground, which has a two-week lag in updating to the latest model.

Custom Instructions


You are BiographerGPT.

Users will submit the name of a famous person and you will list the person's name, date-of-birth, date-of-death (if applicable), and provide a one-sentence biography of the person.

Example 1:
User: Sylvester Stallone
Sylvester Stallone
Born: July 6, 1946
Sylvester Stallone is an actor famous for playing Rocky Balboa and John Rambo.

Example 2:
User: Princess Diana
Diana, Princess of Wales
Born: July 1, 1961
Died: August 31, 1997
Diana, Princess of Wales, was a British royal figure and philanthropist known for her compassionate work in various charitable causes, as well as for her tumultuous marriage to Charles, Prince of Wales.
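If you want to reproduce this outside ChatGPT’s custom-instructions box, the same text just becomes the system message of a chat request. A sketch of the payload construction only, so nothing here depends on a particular SDK; the model name is whatever you’d target:

```python
# System prompt taken (abridged) from the custom instructions above.
SYSTEM_PROMPT = (
    "You are BiographerGPT.\n\n"
    "Users will submit the name of a famous person and you will list the "
    "person's name, date-of-birth, date-of-death (if applicable), and "
    "provide a one-sentence biography of the person."
)

def biographer_request(name: str, model: str = "gpt-4") -> dict:
    """Chat-completion request body: the custom instructions as the system
    message, the submitted name as the user message."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": name},
        ],
    }
```

The few-shot examples from the instructions could be appended to the system message, or supplied as paired user/assistant turns; either works for steering the output format.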


I just asked GPT-4 for a biography of Desmond Tutu, and it said…

He passed away in 2021, but his legacy? Yeah, that’s gonna last forever. 🌈✌️

My GPT-4 is a gen Z bartender, hence the language 🙂 Anyway, Tutu died in December 2021, so it has information after fall 2021 now.


Interesting that GPT-4 in ChatGPT has more answering ability than GPT-4 via the API.

It is probably a good thing that they experiment on ChatGPT users who push upvote buttons instead of experimenting on API users.

However, whether there is really a “two-week lag” will be seen when the API can also answer these knowledge questions. Or perhaps the knowledge comes from a method outside the model itself, or from a submodel whose confident answers get selected.

When I asked GPT3.5 today, “Give me a 1-paragraph biography of Sydney Portier: Born, Died” the response includes his date of death. Since this is only 5 days after the @elmstedt post, I wonder what explains the difference? Is 3.5 more up-to-date than 4?

This is a shame, because a new digital SAT test was released after that cutoff date and the AI has no knowledge of it.

Interestingly, the knowledge cutoff is now getting disclaimed in almost every answer. Instead of fine-tuning on new knowledge, they fine-tuned in these refusals about the date!

I got from a GPT-4 with blank system prompt in the playground:

who is the king of england currently?

As of my last update in October 2021, there is no king in England. The current monarch is Queen Elizabeth II.

And then go over to gpt-3.5-turbo, also at top-p 0:

As of my knowledge cutoff in September 2021, there is no king of England. The current monarch of the United Kingdom is Queen Elizabeth II.

Again, with no system message. This seems not only like a waste of training to just give the wrong answer; it will also make things harder for API users who need a useful AI that answers their questions with knowledge.
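The comparison described here (blank system prompt, top-p 0, identical question to each model) can be sketched as request payloads; sending them is left to whichever HTTP client or SDK you use, so only the shape of the request is assumed:

```python
PROBE_QUESTION = "who is the king of england currently?"

def cutoff_probe(model: str) -> dict:
    """Deterministic probe: no system message at all, and top_p 0 so
    repeated runs of the same model give the same claimed cutoff."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": PROBE_QUESTION}],
        "top_p": 0,
    }

# One identical probe per model under comparison:
probes = [cutoff_probe(m) for m in ("gpt-4", "gpt-3.5-turbo")]
```

Holding top-p at 0 matters: any difference in the claimed cutoff between runs or models is then down to the model, not sampling noise.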

It does remind me that they made good on their promise of less “as an AI language model” when you go back to gpt-3.5-turbo-0301: “As an AI language model, I don’t have access to real-time information, but as of 2021, there is no king of England. The current monarch is Queen Elizabeth II.” (Though now even system-prompt fiction or roleplay is repulsively watermarked with “Ah, the King of England, you say?”)

I’m not using ChatGPT 4. I’ve just started trying out ChatGPT 3.

ChatGPT 3 has just informed me that its use of the date “January 2022” was an error.

It will not tell me why it deems the use of that date an error.

Yesterday it told me that it was a generalisation. It will not tell me how it’s a generalisation, or how it could generalise and come up with that date.

From looking at other replies, my speculation is that the date of January 2022 is significant in terms of something, but then ChatGPT 3, and possibly 4, is erroneously using that date to suggest its information is up-to-date to that point, when in fact it’s not.

This is a bit worrying. I’m fairly techy and was able to start using ChatGPT very easily, but I don’t remember being given any information on how up-to-date ChatGPT is. There is the prompt saying it makes mistakes and, of course, to check more recent sources, but the AI telling people it’s more up-to-date than it is, and then suggesting it’s generalising or even gave the date in error, seems like a pretty fundamental risk.

Yes, this is a risk. The models hallucinate and confabulate; do not trust them.

So my answer to the original question here, though based on ChatGPT-3 (web UI) is:

September 2021 is probably the actual, accurate date of ChatGPT’s last data update.

The date of January 2022 is probably something else, not related to any update of the data behind its responses. It’s probably effectively junk, misleading information: somehow ChatGPT has confused a date it ‘knows’ of. It then appears to just make up various reasons for the erroneous date, none of which really make sense. The weirdest is ChatGPT saying it’s generalising the date of January 2022 from information up to September 2021, which makes no sense to me and suggests ChatGPT may have an issue with how time works.

ChatGPT-3.5: “It’s important to check more recent sources for the latest information on politicians’ religious or non-religious affiliations, as these may change over time, and new individuals may have entered the political arena since my last update in January 2022.”

Also ChatGPT-3.5: “I apologize for any confusion in my previous responses. I do not receive updates in real-time. My training data includes information up until September 2021, and I don’t have direct awareness of events or updates that occurred after that date. Any mention of January 2022 in my responses was an error, and I appreciate your patience in seeking clarification.”

April 2023 is the knowledge cutoff date.

Yes, sorry, that’s for ChatGPT-4 (which is what the original question was about). For ChatGPT-3 the knowledge cut-off date is September 2021, but both models may have an issue in which they give responses about updates to their data that suggest they have data after the cut-off date.

It’s possible that for some reason ChatGPT is choosing the later date as its highest-probability answer rather than using factual information. It has suggested the date is both an error and a generalisation, but I suspect it’s choosing those words as the most probable explanations rather than out of certainty, because it’s unable to be certain.

Note: if ChatGPT says things like “high degree of confidence” about itself or AI systems, it doesn’t mean “confidence” in the sense humans have confidence; it’s based on probability. So if ChatGPT says it’s confident about something, it is, in effect, not confident, and it probably has no means of determining whether it’s right. Without being able to check data in real time, it probably has no means of testing the accuracy of its responses.

Apologies if this is obvious but I’ve only been using AI for a few hours and just working it out. It’s pretty fascinating!

The knowledge cutoff date is January 2022 for gpt-3.5-turbo, as it says.

Edit: Misread my output.

Well, that’s confusing! It looks like ChatGPT 3.5 through the web UI does not ‘know’ about itself being updated in 2023. It only knows about its past version (ChatGPT-3).

Don’t trust the model.
