ChatGPT Providing Broken or Outdated Source Links

Is anyone else having an issue with outdated or broken links when requesting linked sources for information from ChatGPT? If so, have you found a way around this?


ChatGPT is pre-trained, which means the information it has access to won’t be up to date. I believe ChatGPT only has data up to 2021.

ChatGPT rarely provides URLs that work. Other models that have access to the internet might be suited to this task.



ChatGPT provides links from its training data, not from live knowledge. It has no awareness of the current world; as far as I know, its data ends around 2021, so anything that changed after that is not something GPT knows about.


Well, that’s good to know. Thank you!

It’s more than just the 2021 cutoff: GPT makes up URLs that never actually existed. They didn’t even exist in 2021.

It’s one of the traits of a large language model. If you ask GPT about hallucination, it might explain it to you (or it might make something up); otherwise, Google “AI hallucination”.


Hi @mpmpfeffer

To add to the excellent reply by @raymonddavey, let me elaborate.

The underlying large language models do not store links as “references”, because they are not “reference models”; they are “language models”. The many billions of pieces of data used to pre-train them serve one purpose: predicting the next sequence of text based on a prior sequence of text.

You, @mpmpfeffer, are making the common mistake of assuming that a language model is a reference model or expert system.

This is also why @raymonddavey correctly points out that these models “just make things up”: a language model can create URLs and links (any text) out of thin air by predicting what a URL reference should look like, not by consulting an accurate reference.
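To make the point concrete, here is a toy sketch of the idea. This is illustrative only and is not how GPT works internally (real models use neural networks over subword tokens, not bigram counts), but the principle is the same: the model predicts the most likely next token from context, with no lookup step, so it happily stitches together a URL that never existed.

```python
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in some
# (made-up) training text containing real-looking URLs.
training = [
    "see https :// example .com / docs / intro for details",
    "see https :// example .org / blog / 2021 for details",
    "read https :// example .com / blog / intro today",
]

follows = defaultdict(Counter)
for line in training:
    tokens = line.split()
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1

def complete(token, steps=6):
    """Greedily extend from `token` by always taking the most common successor."""
    out = [token]
    for _ in range(steps):
        if token not in follows:
            break
        token = follows[token].most_common(1)[0][0]
        out.append(token)
    return " ".join(out)

# The model assembles "https :// example .com / intro ..." -- a URL
# that appears in none of the training lines. It was never "stored";
# it was predicted, fragment by fragment.
print(complete("https"))
```

Scaled up by many orders of magnitude, this is why ChatGPT’s citations look so plausible while pointing nowhere.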

This is not accurate, @krisu.virtanen. ChatGPT has no “knowledge”; it is a language model designed and trained to predict sequences of text, so it has no “knowledge” of anything beyond predicting some text based on some prior text.

This is not “knowledge”; it is only “data” used to predict natural-language sequences of text. There is no actual “knowledge”, and ChatGPT “knows nothing”: ChatGPT is a fancy text auto-completion engine, generating text based on user prompts.

There is a very large and significant technical difference between “data” and “knowledge”, and it is important not to use a technical term like “knowledge” in place of “data”. GPT models are pre-trained language-prediction models based on a body of data from the “April 2021” (need to confirm it was April) time frame. Fine-tuning these models can bring in new data, but that data will also be used for predicting natural-language text; it is not “knowledge” per se.

Hope this helps.


Hello @ruby_coder

I really appreciate your answer on this topic, and it raised some questions in my mind.

I’m a new user of this AI. I’ve been testing it every day with different topics, and it has helped me with a lot of them.

Is ChatGPT a good tool for me to study technical topics?
I have been using it as my 24/7 professor.

For example: I was trying to understand Serverless Architecture, and it gave me a nice explanation, with an analogy that made things very clear.

Should I trust ChatGPT’s answers?

The way you describe it, it sounds like the text prediction on my iPhone with more data hahahaha

Hi @usantos

ChatGPT does not generate “facts” so much as it generates “text”, by predicting the next sequence of text. So it is useful, of course, in the hands of someone who possesses a “speculative mind”, and dangerous to people who are “gullible” toward a confident-sounding chatbot making things up as it goes along.

You should never trust anything generated by ChatGPT that requires technical accuracy. You MUST verify everything. OpenAI has been clear about this as well.
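One practical mitigation is to treat every link in a reply as unverified until checked. Here is a minimal sketch using only the Python standard library (the function names and the example reply are my own, not part of any OpenAI API): extract each URL from the model’s text, sanity-check its structure offline, and optionally probe whether the server answers at all before citing it.

```python
import re
import urllib.request
from urllib.parse import urlparse

# Rough pattern for http(s) URLs embedded in prose.
URL_RE = re.compile(r"https?://[^\s)\"'<>]+")

def extract_urls(text):
    """Pull every http(s) URL out of a block of model-generated text."""
    return URL_RE.findall(text)

def looks_like_url(url):
    """Cheap offline sanity check: scheme and host are both present."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def is_reachable(url, timeout=5):
    """Network check: does the server answer this URL at all?
    (A 200 still doesn't prove the page says what the model claims.)"""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

# Hypothetical model reply containing a citation to verify.
reply = "The study is at https://example.com/report-2021 (see also notes)."
for url in extract_urls(reply):
    print(url, looks_like_url(url))  # prints: https://example.com/report-2021 True
```

Note that even a link that resolves still needs a human read: the page may exist yet say nothing like what the model attributed to it.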

That is exactly what ChatGPT is. ChatGPT does not “appear” to be a text prediction engine; it is a text prediction engine. It just happens to be based on a much larger language model, and to be more sophisticated from an engineering perspective.

The problem is that it appears most ChatGPT users want to believe and trust ChatGPT: they are using this untrustworthy text-prediction engine as an expert-system AI, and it is obviously not an expert system.



I mainly use ChatGPT for research purposes and regularly ask it for source links, and they NEVER work. I independently Google similar terms and can never find anything that matches, so it leaves me questioning everything, even though the information is more or less accurate.

I asked ChatGPT if it was making the links and information up and it said: “I apologize for the continued issues with the Nielsen article link. It appears that there may be a problem with the Nielsen website or server. I can assure you that I am not making up the link or the information, as the Nielsen article was a valid and reliable source for on-premise drinking trends when it was published in 2021.”

I am so tripped out by this! So it is making up links even though it says it’s not?


Tip: when you are communicating with an AI and want to know whether its model is up to date, ask it what day it is.

Yes, it invented the articles, including title, volume, issue, and page numbers in known journals.
It’s not only broken links.
