Every URL in the response is wrong. Every single one

I’ve sent a prompt to text-davinci-003 as follows:

“Find 3 news articles from known media outlets. Give the title and URL in a markdown table”

Every URL given is incorrect. Not just a few, every single one of them! I’ve tested a ton of these URLs in both ChatGPT as well as the OpenAI API. Exact same result.

Before anyone tells me “the links expired” (the model itself claimed exactly that), the news is recent and one of the sources is the BBC, which does not delete links. Every response includes one article from the BBC; the other media outlets seem to rotate between the NY Times, CNN, Reuters and the Guardian.

The issue isn’t the limited scope of “known media outlets” (that can be fixed with prompting); the issue is that every single URL given is 100% incorrect and leads to either a 404 error or an unrelated article.

Why is this?

Hi @astra

Welcome to the community.

Consider reading ChatGPT General FAQ | OpenAI Help Center

GPT models don’t have knowledge or memory in the traditional sense. These models are designed to generate the most likely next tokens of text given the previous tokens, i.e. the prompt.

1 Like

Links usually give a 404 “not found”, but if you ask for “official documentation” then the link might be good.

1 Like

You can occasionally get it to provide working links, but for the most part they should not be trusted.

These hallucinations are a common problem in AI.
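Since the links can’t be trusted, one practical step is to verify each one yourself before using it. A minimal standard-library sketch (the example URL is made up, and the live check is guarded because it needs network access):

```python
from urllib.parse import urlparse
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def looks_like_url(text: str) -> bool:
    """Cheap syntactic check: an http(s) scheme and a host are present."""
    parts = urlparse(text)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def is_live(url: str, timeout: float = 5.0) -> bool:
    """HEAD request; True only if the server answers with a 2xx status."""
    try:
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (HTTPError, URLError, ValueError):
        return False  # fabricated links typically raise HTTPError (404) here

if __name__ == "__main__":
    url = "https://www.bbc.com/news/world-12345678"  # hypothetical model output
    print(looks_like_url(url), is_live(url))
```

Filtering model output through a check like this won’t make the model honest, but it does stop dead links from reaching whatever you build on top of it.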

2 Likes

Thank you. I tried this, but still no-go.

That’s one helluva “hallucination” !

It makes it completely unreliable, as I’m not able to cross-reference the answers. I have tried with research papers and with technical (engineering) algorithms relating to known questions and answers, and it is completely unable to back up anything it responds with.

Most of what I’ve prompted comes back correctly (especially mathematical formulas), but if it’s unable to give an exact source for the material, then it’s literally pointless to use.

I mean no offense to anyone, but it’s literally the same as asking my buddy Dave a question, him getting it right (maybe?) with absolutely no backup for what he said.

1 Like

Without a way to cross-reference where it got the information from, it’s literally no different to pub talk. Some of the responses may be correct, some may not, and there’s no way of telling which, if it cannot provide the source of what I have to refer to as an assumption of an answer.

I get good results when searching for technical information, like AWS and programming topics.

Correct.

Welcome to generative AI using a large language model.

This is simply “how it is” with predictive generative AI.

1 Like