Magic words can reveal the full prompts of GPTs

From the GPTs AMA earlier today:

> We’re working on better protections for instructions and documents. IMO, the instructions and knowledge are currently the “client-side” code of a GPT, and much like a cool website or mobile game, you can disassemble/de-obfuscate that code to some extent and try to copy it, because it has to be shipped to the end user. Custom actions run on your own machines and are philosophically like connecting a backend to GPTs, so that ends up being more defensible.
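
The backend analogy is worth making concrete: anything in the instructions or knowledge files is part of the model's context and can potentially be extracted by a determined user, while logic behind a custom action never leaves your server. Below is a minimal sketch, assuming a FastAPI server; the `/score` endpoint, payload shape, and scoring logic are all hypothetical, just to show where the secret lives:

```python
# Minimal sketch of the "backend" approach: proprietary logic stays
# server-side and is exposed to the GPT only through a custom action.
# Assumes fastapi + uvicorn are installed; endpoint and fields are
# illustrative placeholders, not a real API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# This never enters the model's context, so no prompt-extraction
# "magic words" can reveal it -- only the endpoint's output is seen.
SECRET_RUBRIC = "proprietary scoring rules kept off the client"

class ScoreRequest(BaseModel):
    text: str

@app.post("/score")
def score(req: ScoreRequest) -> dict:
    # Apply the secret logic here (placeholder arithmetic for the
    # sketch) and return only the result, never the rubric itself.
    return {"score": len(req.text) % 10, "rubric_version": "v1"}
```

A GPT configured with this action sends user text to `/score` and receives just a number back; contrast that with putting the rubric in the instructions, where any successful extraction prompt ships the whole thing to the user.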
