I am seeking advice on the following feature I am trying to implement.
I want to get my Azure OpenAI bot (using Azure AI Studio, Chat Playground) to seek feedback from the user throughout the conversation. I’ve been trying to put something like this into the system message, but it’s sadly not doing the trick.
“You actively seek feedback from the user and let the user know that their input is important for improving the bot’s performance.
Bot: Please let me know if I’m being helpful or if there’s anything I can improve. Your feedback is valuable.”
Has anyone got a better system message that would do the trick and have the bot send this message sporadically (not on every user turn), or perhaps an alternative workaround to achieve it?
The model is gpt-35-turbo-16k and I am located in Australia, so regional availability constraints also apply.
You ask the question, but don’t state the purpose.
If you simply want to gather user feedback, the user interface seems a more appropriate place to do so.
If you want such feedback to change the behavior of the AI in that session, and thereby somehow change the AI’s performance, that seems a haphazard approach that is unlikely to achieve the objective.
ChatGPT’s “custom instructions” feature gives the user a way to provide that kind of “feedback” by actively instructing the AI, through prompting, what the modified behavior or goal should be.
Simply getting an AI to do something at regular conversation-turn intervals is a challenge. I tried desperately to have it pace mentions of a sponsor for another forum user’s case. You either get all or nothing.
Yes. At this stage the purpose would solely be that I want the bot to ask the question, so the user feels “validated”.
I don’t intend to do anything with the responses at this stage. I just want the bot to ask frequently, “How have my responses been? Are they helpful?”
Or should I instead be telling the bot that its purpose is to learn from the feedback provided and advise the administrator?
Are you able to give me an example of what you mean, perhaps? Sorry, this is all very new to me and I want to get it right.
Remember that the conversation is always made up of user-input/AI-output pairs.
Do you want unexpected output instead of an answer to a user?
Even if you do get someone to answer a request for feedback, are you going to personally probe their conversations? What is the AI going to say back to them when they type out how they are feeling?
You can consider the technique of having your software provide specific prompting to the AI at intervals, by injecting text at the end of a user’s question (a sketch follows below):
[attention AI: after answering the user’s input in the normal way, you will also generate a question asking how satisfied the user has been with their conversation.]
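For example, here is a minimal Python sketch of that idea, assuming the openai v1.x SDK’s AzureOpenAI client; the endpoint, key, API version, interval, and deployment name are placeholders you would swap for your own:

```python
# Minimal sketch: append a feedback-request instruction to every Nth user turn
# before calling Azure OpenAI. Endpoint, key, API version, and interval are
# placeholder assumptions - substitute your own values.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-KEY",                                       # placeholder
    api_version="2024-02-01",                                 # use the version your resource supports
)

FEEDBACK_NOTE = (
    "[attention AI: after answering the user's input in the normal way, you will also "
    "generate a question asking how satisfied the user has been with their conversation.]"
)
FEEDBACK_EVERY_N_TURNS = 4  # ask sporadically, not on every turn

messages = [{"role": "system", "content": "You are a helpful assistant."}]

def send(user_text: str) -> str:
    """Add the user's turn (injecting the note every Nth turn) and return the bot's reply."""
    turn_number = sum(1 for m in messages if m["role"] == "user") + 1
    if turn_number % FEEDBACK_EVERY_N_TURNS == 0:
        user_text = f"{user_text}\n\n{FEEDBACK_NOTE}"
    messages.append({"role": "user", "content": user_text})

    response = client.chat.completions.create(
        model="gpt-35-turbo-16k",  # your Azure *deployment* name may differ
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply
```

Because the injection happens in your own code rather than in the system message, you control exactly which turns carry the instruction, which is more reliable than hoping the model paces itself.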
I’d tell the AI to stop acting like that. And then it can’t.