Why does ChatGPT give exactly the same response to some prompts?

I have never encountered this before, but ChatGPT (with GPT-4) behaves as if I had set its temperature to zero, which I didn't. I entered the prompt ein hund kam in die küche (a dog came into the kitchen), and more than 30 times ChatGPT answered, word for word, token for token, letter for letter, exactly the same:

Okay, was möchten Sie wissen oder besprechen, das mit dem Hund in der Küche zu tun hat?

(Okay, what would you like to know or discuss that has to do with the dog in the kitchen?)

As I said, I can reproduce this reliably, either within one chat by clicking "Regenerate" or across different chats. (A single time ChatGPT responded with Okay, was möchtest Du wissen oder besprechen, das mit dem Hund in der Küche zu tun hat?, which uses the informal Du instead of the formal Sie but has the very same English translation.)

How can it be that there is (almost) no variation at all in this completion?

When I enter ein hund kam in die, the completions do vary (as one would expect):

Es scheint, dass Ihr Satz unvollständig ist. Könnten Sie bitte den Satz vervollständigen, damit ich Ihnen besser helfen kann? (It seems your sentence is incomplete. Could you please complete the sentence so that I can help you better?)

Es scheint, dass Ihre Nachricht abgeschnitten wurde. Könnten Sie bitte den vollständigen Satz oder die Frage formulieren? (It seems your message was cut off. Could you please write out the complete sentence or question?)

Es sieht so aus, als wäre Ihre Nachricht unvollständig. Könnten Sie bitte mehr Kontext oder Informationen geben? (It looks like your message is incomplete. Could you please provide more context or information?)

Adding " und" finally helps him out: ein hund kam in die küche und is answered by stahl ein Stück Fleisch vom Tisch (stole a piece of meat from the table)

I have to correct my final remark: it has nothing to do with capital letters. When prompted with Ein Hund kam in die Küche (the correct spelling), ChatGPT completes it the same way (with very rare variations).

It’s very likely that there is much less training data on German than on English, and even less on improperly capitalized German, so it may be that the model has no other reasonable predicted output for that situation.
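This can be illustrated with a minimal sampling sketch (the logits below are hypothetical, not actual model values): when the predicted next-token distribution is sharply peaked, temperature sampling returns the same token almost every time, even at temperature 1, so the output looks deterministic without the temperature being zero.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample an index from a temperature-scaled softmax over raw logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                               # uniform draw in [0, 1)
    acc = 0.0
    for i, p in enumerate(probs):                  # inverse-CDF sampling
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Hypothetical case: one continuation's logit dominates all alternatives.
peaked = [12.0, 4.0, 3.5, 3.0]
samples = [sample_with_temperature(peaked, temperature=1.0) for _ in range(1000)]
# Token 0 has softmax probability around 0.999, so it is chosen almost
# every time, even though nothing forces deterministic decoding.
print(samples.count(0) / len(samples))
```

With a flatter distribution (say logits `[2.0, 1.8, 1.7, 1.5]`) the same function produces visibly varied samples, matching the varied completions of the truncated prompt.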

Thanks, Jon, and sorry for having given a misleading final remark: it seems to have nothing to do with improperly capitalized German.

GPT, as an LLM, is trained on a foundation of text completion: predicting which words come next (or most frequently come next).

Transformers and attention changed this, but remnants of that evolution remain: models will respond the same way to prompts they don't "understand" (using "understand" loosely).

But when they do understand, they are able to rephrase and vary their answers.

That's the closest we get to AI understanding words, but it is still the most effective approach we have.