There's No Way to Protect Custom GPT Instructions

I understand what you’re saying here. I’ve been toying with making games using AI for about a year now, and it’s really fun!

Here’s a screenshot from an old one I made for fun; it’s about inviting your friends over to make a game with AI:


My name isn’t Lars btw, I just got lazy and used my own profile picture as an avatar.

The best advice I can give you is to use API calls for anything in your GPT’s game that you want to be deterministic, stuff like this:

Is better solved by a few lines of code than massive amounts of instructions :heart:
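
To make that concrete, here’s a minimal sketch of the kind of thing I mean, assuming a hypothetical FastAPI backend hooked up as a GPT action (the endpoint name and the combat rule are just examples, not anything from an actual GPT):

```python
# Hypothetical action backend: the game rule is enforced in code instead of
# being described (and hopefully followed) in the GPT's instructions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AttackRequest(BaseModel):
    attack: int          # attacker's attack stat
    defense: int         # defender's defense stat
    critical: bool = False

@app.post("/resolve-attack")
def resolve_attack(req: AttackRequest) -> dict:
    # The GPT calls this action to resolve combat, so the damage formula
    # is applied exactly the same way every time.
    damage = max(req.attack - req.defense, 0)
    if req.critical:
        damage *= 2
    return {"damage": damage, "hit": damage > 0}
```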

3 Likes

I’m in the process of writing a guide on how to do just that. Do you want me to @ you when it’s up?

1 Like

Very cool screenshot haha. Love it.

The best advice I can give you is to use API calls for anything in your GPT’s game that you want to be deterministic

I see, yeah haven’t tried that, great idea.

1 Like

There are good reasons to agree with both sides, but it is definitely possible to achieve excellent results with proper prompting. I am sure many of us have seen this prominent example from NASA:

What I see being discussed in this thread is finding a way to enable developers of custom GPTs to protect their work, which is still IP, to a higher degree.

And the way I understand the comment from @N2U, the idea is to pretty much forward the user request to another server, process it there, and run some checks on the output.
It would probably be a good match to create a custom GPT that can walk a developer through the process of setting this up properly.
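
As a rough sketch of that pattern (all names here are hypothetical, and the processing and check functions are just placeholders), the relay server could look something like this:

```python
# Hypothetical relay server: the GPT action forwards the user request here,
# the real processing happens server-side, and the output is checked
# before anything is sent back to the GPT.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class UserRequest(BaseModel):
    message: str

def process(message: str) -> str:
    # Placeholder for whatever proprietary logic stays on your server.
    return message.strip().lower()

def passes_checks(output: str) -> bool:
    # Placeholder output check, e.g. length limits or content filtering.
    return 0 < len(output) < 1000

@app.post("/handle")
def handle(req: UserRequest) -> dict:
    result = process(req.message)
    if not passes_checks(result):
        raise HTTPException(status_code=422, detail="Output failed checks")
    return {"result": result}
```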

3 Likes

Thank you!

If you have any problems getting started, just make a topic about it here on the forum, there’s lots of helpful people here :laughing:

2 Likes

Totally agree that you can achieve amazing results with proper prompting. Cool link btw, I think I’ll play around with that!

It does require a SEMANTIC_SCHOLAR_API_KEY in the setup, so it’s not entirely “just prompting”.

What I’m saying is that some carefully constructed API calls can give a GPT the extra “secret sauce” needed to be better than the rest :laughing:

1 Like

Well, sometimes it appears the heavy lifting behind crafting excellent prompts is treated like something that is already a thing of the past.
This is apparently especially frustrating for developers who are only now learning that custom GPTs fail at this task, because LLMs themselves can’t achieve this goal without support functions.
But let’s not forget about the hours and money that have to be invested into great prompts and their development and evaluation before going into prod.

If somebody wants to protect their prompts or other IP, don’t provide it to an out-of-the-box custom GPT. And I guess that’s where we all agree.

1 Like

I agree for GPTs. For Assistants, no way. I also hope that GPTs will have the ability for dynamic instructions.

For myself, I have multiple prompts depending on the current situation (is it a function call? Is it a retrieval? What stage is the user at, and what are their settings?)

A quick example is a language instructor.
The prompt can/should change based on the current level of understanding and some background information about the user. For an exam I can switch the prompt to focus more on function calling, and then finally change the prompt to return the results.

All of this instead of stuffing it with everything and essentially having a noisy prompt full of information that doesn’t relate to the current message.

Not sure if GPTs can change their instructions throughout a conversation, but I’d love to see it, as it’s very helpful with assistants.
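
For anyone curious what that looks like with Assistants, here’s a minimal sketch of per-run instruction switching (the stages and prompt texts are made up, and you should double-check the parameter names against the current OpenAI Python SDK):

```python
# Hypothetical example: choose the instructions for this run based on the
# learner's current stage, instead of stuffing every case into one prompt.
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    "lesson": "You are a language instructor. Teach at the user's current level...",
    "exam": "You are an examiner. Prefer calling the grading functions...",
    "results": "Summarise the exam results for the student...",
}

def run_stage(thread_id: str, assistant_id: str, stage: str):
    # instructions= overrides the assistant's stored instructions for this run only.
    return client.beta.threads.runs.create(
        thread_id=thread_id,
        assistant_id=assistant_id,
        instructions=PROMPTS[stage],
    )
```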

3 Likes

Can you expand on this?

The topic is about custom GPTs. What exactly do you disagree with?

That’s why I started with “I agree for GPTs”.

I was just adding some forward-looking thoughts. Dynamic instructions so far have worked great for me. It may be (and I hope it will be) that the same will be applied to GPTs in the future.

I disagree with this (I know it was more of an observation than an opinion). I’m expecting instructions to evolve.

1 Like

Thanks @marek4, I will give those instructions a try.

I’m still building a mental model on what should go in my API and what’s best as instructions. This is a good rule of thumb.

1 Like

That’s exactly it. Most of my experience is developing against the API. With custom GPTs, things are less clear-cut to me. If some minor prompt engineering can make it more troublesome for people to access the instructions, then it seems worthwhile. Obviously it can never be infallible, mainly due to the attention limit @BPS_Software mentions above.

1 Like

I believe there is a way to protect knowledge files such that no prompt injection can circumvent it, but I’m concerned it might violate OpenAI’s T&C.

1 Like

Hmmm… I don’t agree with this. Even if it makes it “more troublesome”, why wouldn’t someone still just “steal” it and then re-create it with purer instructions?

I’m sure the thought of “anti-theft as a deterrent” can be applied here, but I’d argue that if it lowers the overall quality of the responses (even slightly) then it’s not worth it. This is DRM - a flawed practice.

I still don’t understand why you want to provide knowledge files to a chatbot but ALSO want to hide some of the information.

This reminds me of websites that provide information and then make it impossible to right-click or highlight text. It’s just ridiculous and lowers the quality of the page. Someone who wants to copy it all will accomplish it. People who want to just simply take a passage out for a quote or for their paper are given an obstacle and may go elsewhere.

1 Like

If it lowers quality then yes, I agree. It depends on that “if”.

Would I use these techniques? No, I don’t for any of the GPTs I’ve created with one exception, a GPT I created specifically as a PoC to test these techniques.

My hope when I started this thread was more of an academic discussion on what is and isn’t possible. I’m viewing this more from that perspective.

Clearly there is some misunderstanding here…

2 Likes

It’s worth investigating.

Fair. I was under the impression that this started as a binary “Am I correct in thinking that there’s no way to protect…”, which was answered, so maybe moving on to “why” would be a decent discussion.

2 Likes

I’m happy to edit the thread title.

My understanding and perspective have changed by reading what everyone has said, which has been valuable to me, so I’m thankful for that.

1 Like

We can go back to this if you’d like :laughing:

I think we can already conclude that you can’t provide a GPT with information that you don’t expect it to repeat back to you at some level.

1 Like

No, it’s honestly okay. I can completely understand that conversations can evolve/change, they would be boring if they didn’t.

There must be a misunderstanding, and if so I’m sorry for that.

Completely agree. I’d like to think this is also part of the philosophy of creating GPTs: everything given to a GPT should be considered public-facing. There can be some management done using actions, though.
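
As a rough example of what that management could look like (purely hypothetical endpoint and data shape): keep the sensitive file on your own server and expose only narrow lookups through an action, so the GPT can answer questions without ever holding the raw file as a knowledge upload.

```python
# Hypothetical lookup action: the GPT can request a single answer,
# but the underlying knowledge file never leaves your server.
import json

from fastapi import FastAPI, HTTPException

app = FastAPI()

# Loaded server-side only; nothing here is uploaded to the GPT.
with open("knowledge.json") as f:
    KNOWLEDGE = json.load(f)

@app.get("/lookup")
def lookup(key: str) -> dict:
    if key not in KNOWLEDGE:
        raise HTTPException(status_code=404, detail="No entry for that key")
    return {"key": key, "answer": KNOWLEDGE[key]}
```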

3 Likes