I’ve tried a handful of different prompts in the Playground that are all some version of asking for a video from YouTube or TikTok that is best at explaining some concept.
Example:
“Find the best YouTube video that is 5 minutes or less that explains to 5th graders how to multiply multi-digit whole numbers using the standard algorithm. Share the hyperlink and reason why knowing how to multiply multi-digit whole numbers is important in real life.”
The output consistently gives me 404 pages and/or weird URL formats. 2 examples below:
This is for ChatGPT, but it applies to the other models too, I believe…
Can I trust that the AI is telling me the truth?
ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers. It has limited knowledge of the world and events after 2021, and it may also occasionally produce harmful instructions or biased content.
We’d recommend checking whether responses from the model are accurate or not. If you find an answer is incorrect, please provide that feedback by using the “Thumbs Down” button.
The models will happily generate fake URLs, information, etc.
Please read the documentation. Large Language Models don’t have an internet connection. They process text and return the “most likely” results. You MIGHT be able to get it to suggest YouTube channels that were popular before the training cutoff in Dec '21, but that’s the latest information it has. If you ask for something too specific, like the URL of a video that wasn’t extremely popular, it will happily hallucinate an answer for you. Like, if you ask it for the URL of Rick Astley’s “Never Gonna Give You Up”, it might make up something like “https://www.youtube.com/watch?v=dQw4w9WgXcQ”
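If you do get links out of the model, one cheap sanity check (my own sketch, not anything built into ChatGPT) is to at least verify the URL has the right *shape* before you click it. This only catches the “weird URL formats” problem, though: a hallucinated link can be perfectly well-formed and still point at a 404, so you’d still need to open it (or hit YouTube’s API) to confirm the video exists.

```python
import re
from typing import Optional

# Shape check only: YouTube video IDs are 11 characters from [A-Za-z0-9_-].
# Matching this pattern does NOT mean the video exists -- the model can
# hallucinate a perfectly well-formed ID.
YT_PATTERN = re.compile(
    r"^https?://(?:www\.)?"
    r"(?:youtube\.com/watch\?v=|youtu\.be/)"
    r"([A-Za-z0-9_-]{11})$"
)

def extract_video_id(url: str) -> Optional[str]:
    """Return the 11-character video ID if the URL is well-formed, else None."""
    m = YT_PATTERN.match(url.strip())
    return m.group(1) if m else None
```

For example, `extract_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ")` returns `"dQw4w9WgXcQ"`, while a mangled link like `"https://www.youtube.com/watch?v=abc"` returns `None`.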
The model is trained on natural-language text scraped from the internet and other forms of media. It’s not a web browser; it can’t see links; it only has a concept of what a link is and maybe some reference examples of how links are formatted. GPT has no way to know whether the link it “generates” (key word) is real.
Another analogy, if you are a visual person: when you create an image with DALL·E 2, Stable Diffusion, Midjourney, etc., any text present is usually all junk. It sorta looks like text, it has the shape of text, but it’s not text in any human-readable language. These AI models are basically ‘averages’ of what’s out there, and the average of a random chunk of text in a URL is just another average chunk of text that has no meaning. So the URLs are bogus.