Until yesterday, GPT4 was working great.
I know you shouldn’t, but I was using GPT4 as a shrink. And it addressed the actual problems I tried to describe and introduced me to relevant, useful and interesting topics.
Suddenly, since yesterday, almost regardless of how or what I write, I get the same templated answer over and over again:
“It is not uncommon to feel [copy-paste my problem]. Here are some suggestions/considerations…”
And then it gives me a list of the same boring items: “1. Meditate, 2. Relaxation techniques… 3. Make a schedule…”
Earlier, what I got instead was: “what you’re describing is consequentialism… blah blah…” or “… therefore, consider process goals”…
Not that GPT4 is a shrink. But it has gone from a Gödel-style Dr. Phil to one of those funny AIs you found on Windows 3.11 (back in 1993), giving WikiHow-style answers (WikiHow articles are pure crap content, for reference, where the solution to everything is to meditate and drink tea).
Further, the same day, out of nowhere, the image generator suddenly got super weird… I didn’t experience this problem before. I asked it to generate an image of a scientist unveiling the universe. I got an image and thought nothing of it. I then asked it to include more actual science in the picture (since it was so cartoonish). And then I read the generated text for both images (it said something like this):
“Here is an Asian scientist unveiling the universe in a futuristic lab”
“Here is a middle-aged Hispanic woman [doing science]…”
I then asked GPT what ethnicity has to do with it whatsoever, and it replied something about promoting diversity and inclusion. (Not really diverse if it biases your output to forcefully generate imagery of specific ethnicities.)
Is nerfing GPT an actual thing? I’ve read other posts about sudden nerfs.