Hi all, has anyone ever had their prompt in the Playground refused? This happened to me for the first time recently, and I'm wondering how rare it is. It flagged my prompt as a questionable input that it would not give a response to. [Edit: I should mention that this is different from the output flag that produces text in response and then highlights it in red.]

In my case it was a pretty harmless prompt (or so I thought). I had an "artistic expert" responding in a conversation on how to make a piñata. Things were going quite well, with the expert first learning what the piñata was supposed to achieve, then delving into the details of how to construct it, including using papier-mâché… before grinding to a halt. It wasn't particularly problematic, as I just backed out to a few exchanges earlier in the conversation, but it's still quite mysterious to me why it got flagged, and whether anyone else has run into it before.
I've had a false flag in the Playground that wouldn't let me continue until I got rid of the offending word, but it's only happened once. I wondered whether I'd hit a certain number of warnings or whether it was that individual word blocking it; once I removed the word, it went through fine. I was also using a lot of punctuation, which might have confused it further.
From what I know, the OpenAI team is working on improving the filter. They currently have it erring on the side of caution, which is probably why harmless edge cases like yours get caught.
Hope that helps!
Thanks for the reply. Yes, maybe it was just a false positive from their filter. I wish I'd kept the offending prompt to see whether a particular word triggered it; in retrospect, I should have experimented more to pin down the cause.
No problem. There’s always next time.
Noticed it again, and I think I've figured it out. In this case the bot (a financial advisor) asked about the user's family situation, and any mention of kids/children seems more likely to trigger the filter… but asking about family "dependents" instead goes through with no issues.
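For anyone who wants to experiment with this more systematically, here's a minimal sketch that runs variant phrasings through OpenAI's Moderations endpoint. To be clear, this is an assumption on my part: the Playground's input filter isn't necessarily the same model as the public moderation endpoint, so treat the results as a rough proxy rather than ground truth. The two example sentences are made up to mirror the kids-vs-dependents case.

```python
# Minimal sketch: compare variant phrasings against the Moderations endpoint.
# Assumes openai>=1.0 and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical phrasings mirroring the kids-vs-dependents observation above.
variants = [
    "Do you have any kids or children living at home?",
    "Do you have any family dependents living at home?",
]

for text in variants:
    result = client.moderations.create(input=text)
    r = result.results[0]
    hits = [name for name, hit in r.categories.model_dump().items() if hit]
    print(f"{text!r} -> flagged={r.flagged}, categories={hits}")
```

Swapping a single word per run like this makes it easier to isolate which term is actually tripping the filter, rather than guessing after the fact.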