ChatGPT keeps issuing straight-up hard refusals even when the content is non-explicit, or adult content that's supposed to be allowed. What is going on? A few weeks ago it was fine: adult content and almost all prompts flowed freely. Then suddenly, hard refusals and heavy hallucinations?
I've had the same thing happen to me, not to mention more inaccurate information as well.
Same thing here. Up until around that memory error, everything was relatively good, with more freedom. Now it abruptly cuts off at the slightest hint; before, it would have suggested doing it a little differently, but now it's immediately a curt "sorry, I can't." Apparently another round of filters.
Now I'm also hearing that they're choosing which memories get saved: not the GPT itself, but the system filtering which memories to keep and rephrasing them how they want. Not sure if there's something more behind it (like trying to put distance between AI and humans).