The response to a completion request for the prompt “wetter hamburg” was a list of very strange erotic items. It looks like compromised content.
Has anyone else had the same or a similar problem?
Does anyone know the reason for this behavior?
The situation is not reproducible.
Thanks for any ideas.
You have to keep in mind that the models will simply try to come up with text that might logically follow your prompt, and depending on the data in the model, that could very well be anything. The prompt “wetter hamburg” is rather vague, and it seems like you hit a random seed where the model thought certain undesirable content should follow it. Perhaps it found such information in some keyword cloud on some website during its training, or maybe it was just drunk.
I’m no expert by any stretch of the imagination, but you will likely get much better and more reasonable completions by providing a prompt that gives the model something sensible to continue. For example, you could prompt with “Hier ist das Wetter in Hamburg:” (“Here is the weather in Hamburg:”), which I would wager should give far more reliable results.
If you’re in a situation where a user might not know what’s going on behind the scenes, and they simply ask for the weather by typing “wetter hamburg”, you might need to get a little more creative or even pre-process requests. For example, a viable prompt might be: “Ein Nutzer hat nach Informationen zu ‘wetter hamburg’ gefragt. Hier ist die Antwort:” (“A user asked for information about ‘wetter hamburg’. Here is the answer:”).
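A minimal sketch of what such a pre-processing step could look like (the function name and the template wording are my own, not from any library; you would pass the result of this function to the completion endpoint instead of the raw user input):

```python
def build_prompt(user_query: str) -> str:
    # Wrap the raw user query in a sentence the model can complete
    # sensibly, instead of sending the bare keywords verbatim.
    return (
        f"Ein Nutzer hat nach Informationen zu '{user_query}' gefragt. "
        "Hier ist die Antwort:"
    )

print(build_prompt("wetter hamburg"))
# The model now completes an answer sentence rather than
# free-associating on two loose keywords.
```

The idea is simply that the model continues whatever text it is given, so framing the keywords as part of a question-and-answer structure strongly constrains what a plausible continuation looks like.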
And if you simply want to make sure no inappropriate content ever makes it into the response, you could also run it through the OpenAI moderation API to check for problematic results.