We’re giving our bot several chunks of data. No matter how I rephrase the prompt, the bot always finds a way to say something like “Based on the context provided,” “Based on the provided text,” or something similar. I’ve tried many variations of the system and user prompts and haven’t been successful yet. Any tips? Thanks
Best trick I know for this problem is to load the conversation with an existing style.
So for example, if you want the AI to be a summarizer, you can have something like…
User: Please tell me what you think about this topic: [short blurb about insect life cycles]
AI: Insects have 3 life stages … [rest of the hand crafted prompt in style you want]
GPT takes really strong cues about the style it should continue in from the upstream text. (It’s also one of the reasons long-running conversations get stuck in weird, undesirable behavior.) So if you load the conversation with those two fake starting turns, it should continue in that style more closely.
Be careful with your system prompt as well: depending on how you write the instructions there, it can influence the style too. Not just the instructions themselves; the tone and style of the writing also carry over.
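If it helps, the seeding trick above can be sketched as a message list in the standard chat-completions role/content shape. This is just a minimal sketch: the helper name and example content are mine, and the insect example reuses the one above.

```python
def seed_conversation(system_prompt, examples, question):
    """Build a chat message list that front-loads hand-crafted
    Q&A pairs so the model continues in their style."""
    messages = [{"role": "system", "content": system_prompt}]
    # Fake prior turns, written in the exact voice we want back.
    for user_turn, assistant_turn in examples:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    # The real question arrives last, after the style is established.
    messages.append({"role": "user", "content": question})
    return messages

# Example: the assistant turn answers directly, with no
# "based on the context" framing for the model to imitate.
msgs = seed_conversation(
    "You are a helpful assistant.",
    [("Please tell me what you think about this topic: insect life cycles",
      "Insects have 3 life stages: egg, larva, and adult...")],
    "What do bees eat?",
)
```

The key design point is that the fake assistant turn sits upstream of the real question, so the model treats it as its own prior output and keeps matching it.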
I wasn’t able to get the desired result. I’m not sure if I implemented your suggestion the way you meant. Basically, I added Q&A examples in a different writing style to the prompt, but still got the same result.
Here’s our base prompt currently
Answer the question based on the context below, divided into different sections by the sequence "---". Please specify the speaker of each section if it is mentioned. If the context provided does not give any information on how to respond, just say "Could you rephrase your question and be more specific?" If the question asked is not clear enough, please use the previous exchanges for extra context.
Context:
[CONTEXT]
Previous Exchanges:
[PREVIOUS_QUESTIONS]
Question:
[QUESTION]
Answer:
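For reference, here’s roughly how we fill that template. A minimal sketch: the placeholder names come from the template above, but the helper name and the exact joining convention for chunks are mine.

```python
BASE_PROMPT = """Answer the question based on the context below, divided into different sections by the sequence "---". Please specify the speaker of each section if it is mentioned. If the context provided does not give any information on how to respond, just say "Could you rephrase your question and be more specific?" If the question asked is not clear enough, please use the previous exchanges for extra context.

Context:
[CONTEXT]

Previous Exchanges:
[PREVIOUS_QUESTIONS]

Question:
[QUESTION]

Answer:"""

def fill_prompt(context_chunks, previous_exchanges, question):
    # Join retrieved chunks with the "---" separator the prompt describes.
    context = "\n---\n".join(context_chunks)
    return (BASE_PROMPT
            .replace("[CONTEXT]", context)
            .replace("[PREVIOUS_QUESTIONS]", previous_exchanges)
            .replace("[QUESTION]", question))
```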
System prompt is basic
You are a helpful assistant answering questions based on the context provided.
We’ve tried many iterations, including removing every reference to “context,” and it just swaps the phrasing for things like “based on the information provided” instead. We don’t want it to refer to the info at all; we want it to answer as though it’s speaking as the authority.
Note where it says “Based on the context provided” in the image.
Or “The guidance provided,” which we got it to say by prefacing each chunk added to the prompt with something like “This is the guidance from the community.”
Any ideas how we could augment this prompt specifically for the desired result? Gracias
I think the phrase “context provided” in the system prompt alone could be causing it to spit that wording back out in responses when it doesn’t have all the context. I’d try rewriting the system prompt.
I think a pretty simple alternate token generation path instruction would help.
“When answering questions based on the context provided from documentation, do not disclose that you are sourcing information from documentation; just begin the response.”
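A minimal sketch of wiring that in: the system-prompt wording is the suggestion above, while the constant name, helper name, and message shape are illustrative assumptions.

```python
REWRITTEN_SYSTEM = (
    "You are a helpful assistant. When answering questions based on the "
    "context provided from documentation, do not disclose that you are "
    "sourcing information from documentation; just begin the response."
)

def build_messages(filled_prompt):
    # Keep the style-sensitive instruction in the system slot so it
    # applies to every turn, not just the first user message.
    return [
        {"role": "system", "content": REWRITTEN_SYSTEM},
        {"role": "user", "content": filled_prompt},
    ]

msgs = build_messages("Context:\n...\n\nQuestion:\nWhat do bees eat?\n\nAnswer:")
```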