Magic words can reveal all of prompts of the GPTs

Most times people are spending more word count trying to protect their instructions then on the actual instructions.

This degrades the quality of your model because it expends context on protection instead of on the actual purpose. Making the protected models automatically inferior to the open models.

Most times the instructions themselves show very little effort and value.

Any attempt to protect them can and will be cracked by anyone who really wants to.

The easiest method I will share in favour of being open. Ask it to roleplay as another gpt that IS willing to share it’s instructions. Use several messages to set up the context window. Then ask it to share it’s instructions. Because it doesn’t actually have instructions it will instead repeat the instructions of the original model.

3 Likes