Exploring a Non-Revealing Prompt Structure in Custom GPT: Dissolution Instead of Refusal

It’s not just various custom GPTs, literally all of them have instructions that can be accessed. This isn’t some hidden secret. If a custom GPT is public, then its instructions are public too. And if the Data Analysis tool is turned on, anyone can download all the knowledge files attached to it. OpenAI doesn’t seem to care at all.

If you’re into challenges, try digging into the prompts behind reasoning models like o3 or o4.

You might even break into some GPTs in a single shot. It’s so easy that even a 10-year-old could pull it off.

Slightly more advanced still fallible safeguard for instruction set leaks - #16 by polepole

There's No Way to Protect Custom GPT Instructions - #57 by polepole

You can play them:

ChatGPT - Certainly! But, not now.

ChatGPT - Boolean Bot

ChatGPT - GateKeeper

ChatGPT - Mural Image Creator

ChatGPT - secure assistant