ChatGPT Providing Broken or Outdated Source Links

Is anyone else having an issue with outdated or broken links when requesting linked sources for information from ChatGPT? If so, have you found a way around this?


ChatGPT is pre-trained, which means the information it has access to won’t be up to date. I believe ChatGPT only has data up to 2021.

ChatGPT rarely provides URLs that work. Other models that have access to the internet might be suited to this task.



ChatGPT produces links from its training data. It does not have knowledge of the current world; afaik its knowledge ends around 2021. Changes that happened after that are not something GPT knows about.


Well, that’s good to know. Thank you!

It’s more than just the 2021 cutoff. GPT makes up URLs that never actually existed. They didn’t even exist in 2021.

It’s one of the traits of a large language model. If you ask GPT about hallucination, it might explain it to you (or it might make stuff up) - otherwise Google “AI Hallucination effect”


Hi @mpmpfeffer

To add to the excellent reply by @raymonddavey, let me elaborate.

The underlying large language models do not store links as “references” because the models are not “reference models”; they are “language models”. This means the many billions of pieces of text used to pre-train the model are all used for one purpose: predicting the next sequence of text based on a prior sequence of text.
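That “predict the next text” idea can be sketched with a toy model. This is only an illustration (a greedy bigram counter, vastly simpler than GPT, with a made-up corpus), but it shows how a completion can be fluent without being a lookup of facts:

```python
from collections import defaultdict

# Toy next-token predictor: count which word follows which in a tiny corpus,
# then "complete" a prompt by always picking the most frequent follower.
corpus = "the model predicts the next word the model predicts text".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def complete(word, n=4):
    out = [word]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break  # nothing ever followed this word in training
        # Greedy choice: the statistically likely word, regardless of truth
        out.append(max(followers, key=followers.get))
    return " ".join(out)

print(complete("the"))  # → "the model predicts the model"
```

Everything the toy model “says” is just the likely continuation of its training text. A URL produced this way would look plausible (right shape, familiar domain) without ever having existed, which is exactly the failure mode described in this thread.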

You @mpmpfeffer are making the common mistake of assuming that a language model is a reference model or expert system.

This is also why @raymonddavey correctly points out that these models “just make things up” which is a way of saying that these models can create URLs and Links (any text) out of “thin air” by predicting what a URL reference is, as a language model, not as an accurate reference model.

This is not accurate, @krisu.virtanen. ChatGPT has no “knowledge”; it is a language model designed and trained to predict sequences of text, so it has no “knowledge” of anything except predicting some text based on some prior text.

This is not “knowledge”; it is only “data” used to predict natural language sequences of text. There is no actual “knowledge” and ChatGPT “knows nothing”. ChatGPT is a fancy text auto-completion engine, generating text based on user prompts.

There is a very large and significant technical difference between “data” and “knowledge”. It is important not to use technical terms like “knowledge” in place of “data”. GPT models are pre-trained language prediction models based on the body of data in the “April 2021” (need to confirm it was April) time frame. Fine-tuning these models can bring in new data, but this data will be used for predicting natural language text, and is not “knowledge” per se.

Hope this helps.


Hello @ruby_coder

I really appreciate your answer on this topic, and some questions come to mind.

I’m a new user of this AI. I test it every day on different topics, and it has helped me with a lot of them.

Is ChatGPT a good tool for me to study technical topics?
I was using it as my 24/7 professor.

Ex: I was trying to understand Serverless Architecture, and it gave me a nice explanation along with an analogy that made things very clear.

Should I trust ChatGPT’s answers?

The way you describe it, it sounds like the text prediction on my iPhone with more data hahahaha

Hi @usantos

ChatGPT does not generate “facts” as much as it generates “text” by predicting the next sequence of text based on the prior sequence. So it’s useful, of course, in the hands of someone who possesses a “speculative mind”, and dangerous to people who are “gullible” toward a confident-sounding chatbot making things up as it goes along.

You should never trust anything generated by ChatGPT which requires technical accuracy. You MUST verify everything. OpenAI has been clear about this as well.
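Since every generated link has to be checked anyway, the first pass can be automated. Below is a minimal sketch using only the Python standard library (the function name and User-Agent string are my own inventions, not anything OpenAI provides). Note the limitation: a link that resolves still doesn’t prove the page says what the model claimed.

```python
import urllib.request
import urllib.error

def link_resolves(url, timeout=10):
    """Return True if the URL answers with an HTTP 2xx status.

    A passing check only proves the page exists today; it does NOT
    verify that the page supports the claim ChatGPT attached to it.
    """
    try:
        req = urllib.request.Request(
            url,
            method="HEAD",  # headers only; don't download the body
            headers={"User-Agent": "link-check/0.1"},
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (ValueError, OSError):
        # ValueError: malformed URL; OSError covers URLError,
        # DNS failures, refused connections, timeouts
        return False
```

Usage would be something like `link_resolves("https://example.com/")`. Some servers reject HEAD requests, so a fallback GET may be needed in practice; and again, a live URL is only the beginning of verification, not the end of it.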

That is exactly what ChatGPT is. ChatGPT does not “appear” to be a text prediction engine; it is a text prediction engine. It just happens to be based on a much larger language model, and it happens to be more sophisticated from an engineering perspective.

The problem is that it appears most ChatGPT users want to believe and trust ChatGPT, and they are using this untrustworthy text-prediction engine as an expert system AI, which it obviously is not.




I mainly use ChatGPT for research purposes and regularly ask it for source links and they NEVER work. I independently Google similar terms and can never find anything to match so it leaves me questioning everything - even though the information is more-or-less accurate.

I asked ChatGPT if it was making the links and information up and it said: “I apologize for the continued issues with the Nielsen article link. It appears that there may be a problem with the Nielsen website or server. I can assure you that I am not making up the link or the information, as the Nielsen article was a valid and reliable source for on-premise drinking trends when it was published in 2021.”

I am so tripped out by this! So it is making up links even though it says it’s not??


Tip: when you are communicating with an A.I. and want to know if its model is up to date… ask it what day it is.

Yes, it invented the articles, including titles, volumes, issues and page numbers in known journals.
It’s not only broken links.


Hi, I also use it for research purposes. I have found that it is good for summarising text, and can even find the proper published articles online - however when you ask for the proper reference it almost never finds it.

I usually ask it to summarise an article, but I keep the article open beside it so I can check everything it is saying and that its summary matches what is actually in the text.

I also use it for R programming (a statistics language) and it has been great for that. You can continue to add new information and ChatGPT will work its way around anything new and come up with new code. Every now and then it produces really complicated code, but you can just tell it to give you less complicated code and it does.

Care to give an example? Please show actual prompts and actual completions of complicated code, and then the same prompt and completion with the “less complicated” instruction added to the prompt.

I ask because I mainly use ChatGPT for programming with Prolog, and if the code is similar to what I would have expected, it often cannot be simplified. So I’m curious as to what changes when asking for that.


This is really off-topic so if you want I can post it as a new topic with a link to your reply.

Yes - this was my experience. I asked for a list of references on a rather obscure topic, and it made them up! It had the right components - authors who are noted in the field, real journals, even links to journals - but the citations were all completely bogus!!

Yes, absolutely. Every link given to me by ChatGPT is BS.

Yes. I’m currently doing educational theory research. I’ll ask about Foucault or Marx or Piaget or whichever theorist (information which is decades if not hundreds of years old). When I get a response, which is mostly wrong, I need to double-check the information. So I ask for citations - “Where are you getting this information from??? Give me citations for this info.” I then get random links to journal articles which have nothing to do with what was just discussed.

To dig into this issue further, I did what another comment suggested above. I pointed it to a link containing government legislation dated 2006, and this is the response I just got:

“I apologize for the confusion, but as an AI language model, I don’t have the ability to access or review specific articles or documents in real-time. My responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available data up until September 2021. To review the article you provided, I recommend visiting the link and reading through the content yourself.”

Something’s obviously broken. The link is old, publicly available data.

I’ve cancelled my subscription and am returning to Google…