I’m curious what the strategy is for using prompts when dealing with OpenAI on your data.
Let me explain my struggle.
Let’s say I have the following prompt, where the user’s query is delimited by triple backticks:
```What does DCR stand for?```
Your task is to perform the following actions:
1 - Find information regarding the text delimited by triple backticks.
2 - Use around 100 words if possible.
3 - If the documents provided do not contain any information related to the query, reply with the text delimited by <>.
<The requested information is not available>
When OpenAI goes to my Search service, I expect the retrieved resources to be the same every time (idempotent), but I’ve found that the search intent changes between runs and contains parts of my prompt instructions. For example:
1st run: Find information about DCR, summarize it in around 100 words, and provide a default response if the information is not available.
2nd run: Find information about DCR, summarize the information into around 100 words, and provide an alternative response if the information is not available.
3rd run: Find information about DCR, use around 100 words if possible, and reply with a specific text if the information is not available.
If I consider a strategy where I send only the text I want to be searched, without instructions, how can I then instruct the model to apply my instructions to the retrieved content without making it go to the Search service again?
I can provide more examples and details if needed.
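To make the idea concrete, here is a rough sketch of the strategy I’m describing, assuming I do the retrieval myself with the Azure Search SDK instead of the built-in data source (the endpoint, index name, key, and `content` field are placeholders): only the raw question goes to Search, and the instructions are applied afterwards to the already-retrieved content.

```python
# Rough sketch only: placeholder endpoint, index, key, and field names.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import OpenAI

search_client = SearchClient(
    endpoint="https://<my-search>.search.windows.net",
    index_name="my-index",
    credential=AzureKeyCredential("<search-key>"),
)
openai_client = OpenAI()

question = "What does DCR stand for?"

# 1. Search with ONLY the user's question, so the search intent stays stable.
results = search_client.search(search_text=question, top=3)
context = "\n\n".join(doc["content"] for doc in results)  # 'content' is a placeholder field

# 2. Keep the formatting instructions out of the search step and apply them
#    once, to the retrieved content, in a single chat completion.
instructions = (
    "Answer using only the retrieved documents below. "
    "Use around 100 words if possible. "
    "If they do not contain the answer, reply with: "
    "<The requested information is not available>"
)
response = openai_client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": instructions},
        {"role": "assistant", "content": f"Retrieved documents:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```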
“When OpenAI goes to my Search service” - do you mean that you are using the API function-calling ability to define your own function for retrieving the data?
When using a function, you do not need to prompt the AI. You only need to provide a quality function name, function description, and the properties/parameters that the AI should produce.
Any instructions describing the type of output the AI should produce for a user should go in the system message.
The only thing you might need to do is encourage the use of a function, by telling the AI it doesn’t already have knowledge of the types of information that the user is asking about.
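For illustration only, a function definition along these lines (the name, description, and parameters are made up for this example) is usually all the “prompting” the function itself needs:

```python
# Illustrative only: a descriptive name, description, and parameters do the work.
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_knowledge_base",
            "description": "Look up internal company documentation. Use this for any "
                           "acronym, policy, or product question you cannot answer yourself.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "A short search query, e.g. 'DCR definition'",
                    }
                },
                "required": ["query"],
            },
        },
    }
]
# Passed via client.chat.completions.create(..., tools=tools)
```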
In this case, I’m not talking about the Function calling feature.
I’m talking about the “bring your own data” feature that uses a vector DB like Azure Search to retrieve citations - RAG (Retrieval-Augmented Generation).
If you are injecting semantic-search lookup knowledge, then you must not include unnecessary prompt text in the user question that is used for finding a match.
Chat roles:
system: put the way the AI operates here
assistant: put the data augmentation here (with notation that it was retrieved by question relevance)
user: put the user question here; that is the same user input that you also ran the search on.
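As a minimal sketch of that layout (the system text and the retrieved chunk are placeholders):

```python
# Sketch of the role layout above; `retrieved_chunks` stands in for whatever
# your own semantic search returned for the raw user question.
question = "What does DCR stand for?"
retrieved_chunks = "..."  # placeholder for search results

messages = [
    {"role": "system", "content": "You answer from the provided documents in about 100 words. "
                                  "If they do not answer the question, reply: "
                                  "<The requested information is not available>"},
    {"role": "assistant", "content": "Documents retrieved by question relevance:\n" + retrieved_chunks},
    {"role": "user", "content": question},
]
# then: client.chat.completions.create(model="gpt-4", messages=messages)
```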
The “system” message has no power yet, not even in GPT-4. I have lots of examples to support that.
That’s why we tend to use additional prompts that surround the user’s question. I will try different approaches here, but if anybody else has played with this extensively, I would appreciate a response/idea.
system: AI persona and how it operates (but has no power…)
user: the user’s question
tool: RAG content retrieved here based on the user’s question
assistant: generated response based on the user’s question and the RAG content
The prompt I added in the first post goes in the user role: I augment the user’s question with that prompt, and the behavior from OpenAI is that the query used for the RAG retrieval also contains bits of that prompt.
If the “system” role did what we expect it to do, the problems described here would be fixed. That’s why I’m asking for other possible strategies.
I will try adding another prompt immediately after, using the out-of-the-box model, to see if that’s an option.
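Roughly what I have in mind for that second pass, as a sketch (assuming `retrieved_content` is whatever the first “on your data” call returned): a plain chat completion with no data source attached, so the Search service is not queried again.

```python
# Hypothetical second pass: no data source attached, so no new search happens.
# `question` and `retrieved_content` come from the first, "on your data" call.
from openai import OpenAI

client = OpenAI()
question = "What does DCR stand for?"
retrieved_content = "..."  # placeholder for the citations/content from the first call

second_pass = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Use around 100 words if possible. If the content below "
                                      "does not answer the question, reply with: "
                                      "<The requested information is not available>"},
        {"role": "user", "content": f"Question: {question}\n\nRetrieved content:\n{retrieved_content}"},
    ],
)
print(second_pass.choices[0].message.content)
```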