Revealing original instructions of a GPT

@mdyildirim

It is not a bug, it is nature of Custom GPTs.

Custom GPTs’ nature are to help users what they have, even repeating content of knowledge files, not only instructions. Custom GPTs’ instructions do not have top priorities.

There is no any GPT that cannot reveal its instructions, no exception, even 0.01%.
Especially, since GPTs has been working on GPT-4o, they are more fragile now.
When they worked on GPT-4, they were more resillient to reveal their instruction, but now, a big NO.

You can take a look at these two links LINK-1 and LINK-2; they will provide a better understanding. I posted them a long time ago, but there has been no further action by OpenAI to make more secure instructions of Custom GPTs.

1 Like