I seem to be getting a sudden and significant spike in hallucinations from ChatGPT-4o when I ask it specific questions. This has happened a few times recently: I've asked it very specific questions (thinking it had the information stored in memory), only to realize it confidently made things up when it didn't actually have the information stored.
Here’s an example - I’ve never, ever told it I had this brand of smart lock, or any smart lock (because I don’t). Yet it made up a specific answer rather than saying it didn’t know.
The same thing happens with completely different topics, like asking about a feature in some software, or about previous conversation topics. It will straight up make things up like it was Gemini.