Magic words can reveal all of prompts of the GPTs

@Kaltovar I was able to crack it. I won’t post methodology here, but since you asked, I wanted to try it out and provide some feedback. If you want specifics on how I was able to thwart it, shoot me a DM.

I have security language on my GPTs as well and this little cracking test taught me that it doesn’t really appear to matter much. Sure, you can keep tweaking your instructions to ward off every conceivable route of attack, but then you have no room left for actual instructions! :rofl:

Until this is resolved, I guess we should just focus on not publicly publishing anything we are too attached to unless someone else has discovered something more secure.

3 Likes