There are good reasons to agree with both sides but it is definitely possible to achieve excellent results with proper prompting. I am sure many of us have seen this promiment example from NASA:
What I see being discussed in this thread is to find a way to enable developers of custom GPTs to protect their work, which is still IP, to a higher degree.
And the way how I understand the comment from @N2U is to pretty much forward the user request to another server, process the request and run some checks on the output.
It would probably be a good match to create a custom GPT that can walk a developer through the process of setting this up properly.
Well, sometimes it appears the heavy lifting behind crafting excellent prompts is treated like something that is already a thing of the past.
This is apparently especially frustrating for developers who are not just learning that custom GPTs fail at this task because LLMs themselves can’t achieve this goal without support functions.
But let’s not forget about the hours and money that have to be invested into great prompts, their development and evaluation before going into prod.
If somebody wants to protect their prompts or other IP, do not provide it to a out-of-the-box custom GPT. And I guess that’s where we all agree.
I agree for GPTs. For Assistants, no way. I also hope that GPTs will have the ability for dynamic instructions.
For myself I have multiple prompts depending on the current situation (is it a function call? Is it a retrieval? What stage is the user on and what’s their settings?)
A quick example is a Language instructor.
The prompt can/should change based on the current level of understanding and some background information about the user. For an exam I can switch the prompt to focus more on function calling, and the finally change the prompt to return the results.
All of this instead of stuffing with everything and essentially having a noisy prompt with information that doesn’t relate to the current message
Not sure if GPTs can change their instructions throughout a conversation, but I’d love to see it, as it’s very helpful with assistants.
I was just adding some forward-looking thoughts. Dynamic instructions so far have worked great for me. It can (& I hope it is) be that in the future the same will be applied for GPTs.
I disagree with this (I know it was more of an observation than opinion). I’m expecting instructions to evolve
That’s exactly it. Most of my experience is developing against the API. With custom GPTs things are less clear-cut to me. If some minor prompt-engineering can make it more troublesome for people to access the instructions then it seems worthwhile. Obviously it can never be infallible mainly due to the attention limit @BPS_Software mentions above.
Hmmm… Don’t agree with this. Even if it makes it “more troublesome”, then why wouldn’t someone just still “steal” it and then re-create it with more pure instructions?
I’m sure the thought of “anti-theft as a deterrent” can be applied here, but I’d argue that if it lowers the overall quality of the responses (even slightly) then it’s not worth it. This is DRM - a flawed practice.
I still don’t understand why you want to provide knowledge files to a chatbot but ALSO want to hide some of the information.
This reminds me of websites that provide information and then make it impossible to right-click or highlight text. It’s just ridiculous and lowers the quality of the page. Someone who wants to copy it all will accomplish it. People who want to just simply take a passage out for a quote or for their paper are given an obstacle and may go elsewhere.
If it lowers quality then yes, I agree. It depends on that “if”.
Would I use these techniques? No, I don’t for any of the GPTs I’ve created with one exception, a GPT I created specifically as a PoC to test these techniques.
My hope when I started this thread was more of an academic discussion on what is and isn’t possible. I’m viewing this more from that perspective.
Fair. I was under the thought that this started as a binary “Am I correct in thinking that there’s no way to protect…”, which was answered, so maybe moving onto “why” would be a decent discussion.
No, it’s honestly okay. I can completely understand that conversations can evolve/change, they would be boring if they didn’t.
There must be a misunderstanding, and if so I’m sorry for that.
Completely agree. I’d like to think this is also part of the philosophy of creating GPTs. Everything given to it should be considered public-facing. There can be some management done using actions though.