I am building custom GPTs (specifically games, but this post applies to any GPT) that combine some fairly long text prompts, various files in the Knowledge, and some Python scripts (a few hundred lines of code). Of course, Code Interpreter is enabled.
I am not particularly concerned by people looking into my prompts and files (this might ruin the game experience, but it’s their problem). However, I am concerned by malicious users breaking into one of my GPTs, downloading everything and duplicating it, should the GPT achieve a little bit of recognition.
Having spent weeks designing and refining a game, I’d be annoyed to see many exact copies of it flooding the GPT Store under other builders’ names. I am aware this may be wishful thinking and I may never get to the stage in which one of my GPTs will be interesting enough to be duplicated; but I want to be ready for that unlikely occurrence.
I understand that a solution is to hide some of the GPT inner workings behind Actions. In my case, I could replace some of the Python scripts, currently called by the GPT and run locally, with Actions that effectively execute the same code remotely on some server (e.g., using this).
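For concreteness, here is a minimal sketch of what the server side of such an Action could look like, assuming the game logic can be wrapped in a single function. FastAPI is just one common choice, and the endpoint path, the `resolve_turn` function, and its fields are hypothetical placeholders for whatever my local scripts currently do:

```python
# Minimal sketch of exposing existing game logic behind a web endpoint that a
# GPT Action could call. FastAPI is one common choice; the endpoint path, the
# `resolve_turn` function, and its fields are hypothetical placeholders for
# whatever the local Python scripts currently do.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TurnRequest(BaseModel):
    player_action: str  # e.g. "attack the goblin"
    game_state: dict    # state kept on the GPT side, sent along with each call

class TurnResponse(BaseModel):
    narration: str
    game_state: dict    # updated state, returned so the GPT can carry it forward

def resolve_turn(player_action: str, game_state: dict) -> tuple[str, dict]:
    # ... the same logic the Code Interpreter scripts run today ...
    return f"You chose to {player_action}.", game_state

@app.post("/turn", response_model=TurnResponse)
def turn(req: TurnRequest) -> TurnResponse:
    narration, new_state = resolve_turn(req.player_action, req.game_state)
    return TurnResponse(narration=narration, game_state=new_state)
```

Assuming the file is saved as `main.py`, something like `uvicorn main:app` would run it locally for testing; the GPT's Action would then point at wherever this ends up hosted.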
Questions:
For the GPT builders and developers here:
Does the plan above make sense as an option to “protect” the GPT at least from easy duplication?
People could still imitate the GPT by figuring out what the remote Python scripts do (which, to be honest, is not too hard), but it would take considerably more effort.
How viable/feasible is it? (I'd need to set up a server to run this Python code, I guess; I'm not sure how quick or expensive that is.)
Given that I have no experience working with APIs, web servers, etc., any hints on where to start (good tutorials for beginners, etc.), should I decide to go this way? (Besides the Actions documentation and asking GPT-4 for help…)
Would Actions work as well as calling a Python function?
At the moment, my GPTs are working fairly well and relatively reliably, but it took a lot of tinkering to get the GPT to remember to call the functions at the right time, pass the right arguments, etc. I am concerned that switching to Actions would break everything (of course some editing will be needed, but my worry is that it will simply stop working, e.g. because calling Actions is perhaps intrinsically “more complex/cumbersome” than just calling a Python function).
Any better solutions?
Apologies for the many questions - and thanks in advance for any opinions or comments.
The only strategy recognized as viable to “protect” custom GPTs is to move functionality from userspace to serverspace by using Actions.
This is easy enough to do for a single instance; it gets much more complicated if you need to serve 1,000 or more concurrent Python sandboxes. But if it's “stateless” and you just need one global instance rather than a unique one per user, it's not quite trivial but easily doable.
The GPT doesn't care where it gets the result from. Your Python script takes some input and returns an output; an API takes some input and returns an output. They're basically black boxes: the model doesn't need to care what's going on inside, so they should be equivalent.
To simplify things, I can keep the state local (in the GPT) and keep the server side stateless, that should not be a major issue in my case. Of course this means that players can mess with the local state, but again, that’s more their problem than mine, and it would be no worse than now.
Abstractly, yes, I see how they are just black boxes with an interface and an output.
My question was more practical, in terms of the size of the added instructions for Actions and so on. Right now I only need a couple of instruction lines to tell the GPT how to call a Python function uploaded to the Knowledge. I just wonder whether the equivalent Action would take up much more space in terms of tokens to define; whether it will be equally easy/salient for the attention mechanism to remember (or whether it will work, but only by stealing attention from elsewhere), etc.
This is an empirical question that perhaps someone here has experienced.
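One rough way to check, rather than guess, would be to dump the Action's OpenAPI schema and count its tokens, to compare against the couple of instruction lines used today. The URL below is a placeholder; a FastAPI server, for instance, serves its schema at `/openapi.json` by default:

```python
# Rough sketch: fetch the Action's OpenAPI schema and count how many tokens it
# takes up. The URL is a placeholder; tiktoken is OpenAI's tokenizer library.
import json
import requests
import tiktoken

schema = requests.get("https://your-server.example.com/openapi.json").json()
enc = tiktoken.get_encoding("cl100k_base")
print(f"Action schema: ~{len(enc.encode(json.dumps(schema)))} tokens")
```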
Hi, jumping on this thread as I have a similar question; see my post below.
Just wondering how this went for you: were you able to set it up successfully, and did it work the way you wanted? Any recommended resources for novices that helped you?
GPTs are too unreliable and too exposed; there is no developer support, things keep changing behind the scenes, and they are painful to debug. They are just overall terrible to work with as a developer. After several months I came to the conclusion that they are not worth my time, and it was simply a mistake on my part to try to use them for something they are clearly not built for (of course, YMMV).
I effectively switched to (something equivalent to) LangChain and API calls, which also freed me from OpenAI: I can just plug in any model I want.
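As a rough sketch of what that approach looks like: the game function is declared as a “tool”, the model decides when to call it, and the actual code runs on my own machine or server, outside ChatGPT. The example below uses the OpenAI Python SDK directly, but LangChain (or another provider's SDK) follows the same pattern; `resolve_turn` and its parameters are hypothetical stand-ins for the game logic.

```python
# Rough sketch of the API-based approach: declare the game function as a tool,
# let the model decide when to call it, and run the real code outside ChatGPT.
# `resolve_turn` and its parameters are hypothetical stand-ins for the game logic.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "resolve_turn",
        "description": "Resolve one game turn and return the updated state.",
        "parameters": {
            "type": "object",
            "properties": {
                "player_action": {"type": "string"},
                "game_state": {"type": "object"},
            },
            "required": ["player_action", "game_state"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "I attack the goblin."}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model chose to call the game function
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(message.content)
```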
What would be nice in the future is if OpenAI either allowed developers to build external apps fueled by ChatGPT Pro subscriptions or provided a real “GPT developer kit”, but I guess neither is likely to happen. GPTs are laundry buddies.