I'm using the GPT-3.5-turbo API to answer questions based on given contextual data.
The contextual data I use is derived from multiple sources (let’s say I have 10 relevant docs).
When I asked GPT-4 how to construct a query like this, I was advised to write multiple system messages, one for each document, followed by a message to the user.
Are there any advantages to sending multiple system messages for the contextual data rather than concatenating them all into one?
Hi and welcome to the Developer Forum!
I'm not sure why you would need more than one system message. Do you need to treat the contexts as distinct? If the context is only context, then you should be able to handle that with a simple "Given the above context" message…
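Roughly something like this, as a minimal sketch (assuming the `openai` Python client; the model name and prompt wording are just placeholders to adapt to your setup):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Concatenate all of your source documents into one block of context.
docs = ["...text of doc 1...", "...text of doc 2...", "...text of doc 10..."]
context = "\n\n".join(docs)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": f"Answer using this context:\n\n{context}"},
        {"role": "user", "content": "Given the above context, who paid for the shipment?"},
    ],
)
print(response.choices[0].message.content)
```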
Thank you, that is what I have done so far. Given GPT-4's advice, I was wondering if I had missed anything…
You could ask ChatGPT-4 a few times; it will usually give you different advice each time. I'm not sure how valid that particular bit was.
Data augmentation works in any of the roles. It just depends on whether you want the inserted context to appear as system programming, prior AI responses, or information offered by the user.
A typical layout, which doesn't need 10 messages for 10 sources:
system:
I answer questions about your documents.

user:
Here's relevant documentation I have:

{title_1}
{document_1}

{title_2}
{document_2}

user:
Who paid for the shipment of 1000 yo-yos?
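As a rough Python sketch of that same layout (the variable names are just placeholders standing in for your titles and document text, not anything special in the API):

```python
# Placeholder values standing in for your real titles and document contents.
title_1, document_1 = "Invoice records", "...text of document 1..."
title_2, document_2 = "Shipping manifests", "...text of document 2..."

messages = [
    {"role": "system", "content": "I answer questions about your documents."},
    {
        "role": "user",
        "content": (
            "Here's relevant documentation I have:\n\n"
            f"{title_1}\n{document_1}\n\n"
            f"{title_2}\n{document_2}"
        ),
    },
    {"role": "user", "content": "Who paid for the shipment of 1000 yo-yos?"},
]
```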
The function role also offers an interesting injection point: you can make it look like the AI already asked for, and got back, that information from a function (though you then have to keep the AI from calling the non-existent function again).
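For example, something along these lines (the function name and content are made up for illustration; how strictly the API treats an un-declared function may depend on the API version you're on):

```python
# Sketch: a fabricated "function" result the AI appears to have already received.
messages = [
    {"role": "system", "content": "I answer questions about your documents."},
    {"role": "user", "content": "Who paid for the shipment of 1000 yo-yos?"},
    {
        "role": "function",
        "name": "search_documents",  # hypothetical; no such function is actually defined
        "content": "...retrieved passage about the yo-yo shipment...",
    },
]
```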