Reverse engineering GPTs and grabbing knowledge files

Please check this out:

There are protection prompts now, but they can still be bypassed.

An official, robust protection mechanism would be handy, especially when monetization is in effect.


The GPT is not there to help you, or protect you. Once the user opens it, it’s there to serve them to the best of its ability. The only way to really safeguard against the user seeing things you don’t want them to, is to not let the GPT see anything you wouldn’t show the user.


Unless some serious security is implemented, the prompts and knowledge can be extracted from any GPT. Prompts will be easier to extract; knowledge will be harder, especially if a lot of it has been uploaded to the GPT. But an attacker can still get the gist of the knowledge.

Without any protection from attack, this will make monetization murky: the worth of a GPT drops to close to nothing once its information can be extracted and the GPT cloned.

So it will be interesting watching this.


Let’s compare this comment to websites.

I bet people made this same argument about websites: “How can I obfuscate my site?” “What’s the point if someone else can copy/paste it?” We have minification, which works to an extent, but with effort it can be reverse-engineered.

GPTs, like websites, have front-end components: retrieval and instructions. By themselves these can be considered “worth close to nothing,” like a simple website with no back-end functionality that only aggregates relevant PDFs and text.

The important component, the “moat,” is the back-end service: the actions, the function calling. This is what will define each GPT: an Assistant that converts unstructured semantics into powerful API calls and continues the conversation with the retrieved or updated information.
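To make the “moat” concrete, here is a minimal sketch of the server side of that loop: the model emits a structured function call (a name plus JSON arguments), and a dispatcher on your back-end runs the real logic. The names `lookup_price`, `ACTIONS`, and `handle_tool_call`, and the toy catalog, are all hypothetical stand-ins, not any real API — the point is only that this logic lives on your server, not in the prompt or knowledge files.

```python
import json

# Hypothetical back-end action. The model only ever sees the function's
# schema (name + parameters); the pricing logic stays server-side, so
# extracting the GPT's instructions reveals nothing of value.
def lookup_price(sku: str, currency: str = "USD") -> dict:
    catalog = {"WIDGET-1": 9.99, "WIDGET-2": 24.50}  # stand-in for a real DB
    if sku not in catalog:
        return {"error": f"unknown sku {sku}"}
    return {"sku": sku, "price": catalog[sku], "currency": currency}

# Dispatcher: maps the model's structured call onto real code.
ACTIONS = {"lookup_price": lookup_price}

def handle_tool_call(name: str, arguments_json: str) -> str:
    args = json.loads(arguments_json)
    result = ACTIONS[name](**args)
    # The JSON result is returned to the model so it can continue
    # the conversation with the retrieved information.
    return json.dumps(result)
```

Even if someone clones the instructions and files, the clone breaks without this endpoint behind it.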

Lastly, if my instructions and retrieval documents only make sense in relation to the action results, what’s the point of copying them? Why would anyone bother reverse-engineering the ChatGPT-facing interface when most of the functionality is intertwined with the back-end?

I mean, my lord. The instructions are completely public and we can discover the file names and reverse-engineer the data. This is not the intended philosophy.

Screenshot from 2023-12-01 15-39-55

(I have no idea why this file is double-uploaded, this was when GPTs were first released lol)

TL;DR: Stop worrying about protecting your prompts and your knowledge files (for public-facing GPTs). Assume that everything can be extracted and is rightfully “public-facing”.


100% agree. If you want to allow specific information only for specific people, the solution is OAuth and actions.


I agree with you, Curt.

I made a post about that: How to protect your GPTs against instruction leakage or "cracking"

I also made, perhaps, the most comprehensive protection instructions list here: GitHub - 0xeb/gpt-analyst: GPT-Analyst: A GPT for GPT analysis and reverse engineering


GPT White hat hack. Custom GPTs Marketplace is going to be… | by Jacek Wojcieszyński | Nov, 2023 | Medium