If you can set different roles, typically there are 3: user, assistant/model and system, you could set them as system, and say that user does not have access to …, in ChatGPT context you cannot do it (yet, at least)
In reality this works better if you can enforce it. For example, in Azure cloud you can create permission groups, and roles for that. You would need to manage RAG(document storage, chunking, indexing, vector db) yourself. Gunvor-copilot does that very well, and it uses developed version of DocAssistant. Basically, you can create domain area groups and set what they have access to, it is multi-stage like langchain in that context, it is doable. However this is not a public repository, only basic variant is in GitHub, nor are there public docs how to set up azure configuration.
Alternatively, you could use defensive approach with (langchain, LangSmith) where after user input you do system sanitation step, something like a question: does the user want to (enter undesired behavior) and act accordingly. The issue with this approach is that it is vulnerable to prompt injection attacks and advanced jailbreaks.
Even if there are ways to make money from publishing them an there is financial insentive, you cannot assume that this approach is 100% secure, you simply make it very hard.
If it is just me, I usually define functions:
“Z2 or ZIP Form 2: Condenses detailed content into concise summaries, retaining all key elements and specific details.If I ask for it, apply z2.”
Similarly you can create idlfr (I don’t like file references) and append idlfr to end of your prompt. I suspect that OpenAI doesn’t keep lossless chat history, so it’s easier to remind it how you like the response. This should work for most amateur to senior prompt engineers.