I’ve sent a prompt to text-davinci-003 as follows:
“Find 3 news articles from known media outlets. Give the title and URL in a markdown table”
Every URL it returns is incorrect. Not just a few, every single one of them! I’ve tested a ton of these URLs in both ChatGPT and the OpenAI API, with exactly the same result.
Before anyone tells me “the links expired” (which is exactly what the model claimed), the news is recent and one of the sources is the BBC, which does not delete links. Every response includes one article from the BBC; the other outlets seem to rotate between the NY Times, CNN, Reuters and the Guardian.
The issue isn’t the limited scope of “known media outlets” (that can be fixed with prompting); the issue is that every single URL given is 100% incorrect and leads to either a 404 error or an unrelated article.
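For anyone who wants to reproduce the 404 claim at scale, here is a minimal sketch of how model-supplied URLs could be batch-checked with the standard library (the function names and verdict labels are my own, not anything from OpenAI):

```python
import urllib.request
import urllib.error

def classify_status(code):
    """Map an HTTP status code to a rough verdict."""
    if 200 <= code < 300:
        return "ok"
    if code == 404:
        return "broken"
    return "other"

def check_url(url, timeout=10):
    """HEAD-request a URL and classify the response."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as e:
        return classify_status(e.code)
    except urllib.error.URLError:
        return "unreachable"
```

Note that an “ok” verdict only proves the page exists; as described above, a live URL can still point at a completely unrelated article, so the titles need checking by hand too.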
GPT models don’t have knowledge or memory in the traditional sense. These models are designed to generate the most likely next tokens of text given the previous tokens, i.e. the prompt.
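To make that concrete, here is a toy sketch of next-token prediction. The real model uses a neural network over a huge vocabulary, but a tiny hand-made bigram table (entirely made up for illustration) shows the principle: it picks whatever continuation is statistically most likely, with no lookup of facts.

```python
# Made-up bigram "model": probability of the next token given the previous one.
BIGRAMS = {
    "the": {"bbc": 0.6, "guardian": 0.4},
    "bbc": {"reported": 0.7, "said": 0.3},
}

def next_token(prev, table=BIGRAMS):
    """Greedy decoding: return the highest-probability continuation, if any."""
    candidates = table.get(prev, {})
    return max(candidates, key=candidates.get) if candidates else None

print(next_token("the"))  # -> bbc
```

A URL generated this way is just a plausible-looking token sequence, which is why it can be well-formed and point at a real domain while the path itself is fabricated.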
This makes it completely unreliable, as I’m not able to cross-reference the answers. I have tried with research papers and technical (engineering) algorithms with known questions and answers, and it is completely unable to back up anything it responds with.
Most of what I’ve prompted comes back correct (especially mathematical formulas), but if it’s unable to give an exact source for the material, then it’s pointless to use.
I mean no offense to anyone, but it’s literally the same as asking my buddy Dave a question: he gets it right (maybe?), with absolutely no backup for what he said.
Without a way to cross-reference where the information came from, it’s no different from pub talk. Some of the responses may be correct, some may not, and there’s no way of telling which is which if it cannot provide a source for what I have to call an assumption of an answer.