ChatGPT getting a bit worse and a weird bug

Hi everyone,

I’m not a developer, just a user who’s pretty interested in AI (maybe I’ll even pick it as a course at university), but lately, in my light use of ChatGPT (non-Plus), I’ve run into one very big and weird bug, plus a few “typos” and grammar errors.

In short: silly typos; first-grade grammar errors; a very weird but funny bug; generally dumber answers.

The typos are weird sometimes.
They look like words close enough to a real one, but even corrected they would feel out of place in the sentence.

The grammar errors are often so minor that they feel even weirder.
I’m Italian, so I use ChatGPT in Italian. Our language has many articles, both definite and indefinite. The indefinite ones are “un”, “uno”, and “una”. There’s one exception with “una” (the feminine article, used with feminine nouns): if the next word starts with a vowel, you drop the “a” of the article, put an apostrophe, and join the two (e.g. it’s not “una anatra” but “un’anatra”). The rule doesn’t apply to the masculine “uno”, because “un” already exists for that.
This is a very small grammar rule, so small that it’s taught in first or second grade… but lately, ChatGPT has missed this little detail a few times.
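The rule is mechanical enough that a few lines of code can capture it. Just to illustrate how simple it is (a toy sketch of my own; the function name and the simplified masculine handling are made up, and it obviously has nothing to do with how ChatGPT actually works):

```python
def indefinite_article(noun: str, feminine: bool) -> str:
    """Toy sketch of the Italian elision rule described above:
    feminine "una" loses its final "a" (replaced by an apostrophe)
    before a noun starting with a vowel. Masculine handling is
    deliberately simplified to plain "un"."""
    vowels = "aeiou"
    if feminine:
        if noun[0].lower() in vowels:
            return "un'" + noun   # e.g. un'anatra
        return "una " + noun      # e.g. una casa
    # masculine: "un" already exists, so no elision is needed
    return "un " + noun

print(indefinite_article("anatra", feminine=True))  # un'anatra
print(indefinite_article("casa", feminine=True))    # una casa
```

A rule this easy to state is exactly why it feels so strange when the model gets it wrong.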

The bug, on the other hand, was not small.
I admit that the prompt was very dumb, just something to have fun with a friend and make a meme… I know, even I feel bad for wasting an AI on such a thing :laughing:, but still…
The first two or three attempts were fine (actually, the first answer it gave me was spot on, then the following ones were not, and it couldn’t replicate the first even when I explicitly told it to), but then on the last one it went rogue and this happened:

First it kept typing “èèèèèèèèèè…”, then some code appeared, flashing colours as it was being typed, then a few more random texts, then more code… I admit it was a bit off-putting, but fun to watch. Later I also noticed that the name given to the conversation was weird: it was in Spanish, and it means “The objectives of the therapy”, which has nothing to do with the request I made.

Besides all of these more or less fun things, I’ve noticed that the answers are generally worse: more repetitive, less interesting, with simpler and less clear phrasing, and with far fewer details on the topics. Now, I don’t know whether there’s been a change in the model or just a scale-down of the whole thing to cut costs for non-Plus users… but I’d be curious to hear others’ opinions, and maybe some insight into the bug. It would be very interesting to know what happened there!