You may be able to achieve this with advanced prompting, but there is a simpler way: keep an array (or dictionary) of the customer details you have collected so far and populate it as answers come in. You can ask the AI whether a reply contains any of the details and have it return them in a standard JSON format using a template example. When you reach the point where you need to be sure you have everything, plain classical code can check whether any entries are still blank; if they are, build a prompt that asks the user for each missing item in turn. Some problems are best solved with classical coding.
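A minimal sketch of that hybrid approach, assuming the OpenAI Python client (v1+), a gpt-4 chat completions call, and a hypothetical field list of name, email and phone; the extraction prompt and JSON template are illustrative, not a fixed recipe:

```python
import json
from openai import OpenAI  # assumes the openai Python package, v1+

client = OpenAI()

# Classical state: the details collected so far, blank until extracted.
REQUIRED_FIELDS = ["name", "email", "phone"]  # hypothetical field list
details = {field: "" for field in REQUIRED_FIELDS}

EXTRACT_PROMPT = (
    "Extract any of the following customer details from the message below. "
    'Reply with JSON only, using this template: {"name": "", "email": "", "phone": ""}. '
    "Leave a field empty if the message does not contain it.\n\nMessage: "
)

def extract_details(user_message: str) -> None:
    """Ask the model whether the reply contains any details and merge them in."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": EXTRACT_PROMPT + user_message}],
    )
    try:
        found = json.loads(response.choices[0].message.content)
    except json.JSONDecodeError:
        return  # the model did not return clean JSON; keep the current state
    for field in REQUIRED_FIELDS:
        if found.get(field):
            details[field] = found[field]

def missing_fields() -> list[str]:
    """Classical check: which entries are still blank?"""
    return [field for field in REQUIRED_FIELDS if not details[field]]

# After each user reply: extract, then ask for whatever is still missing.
extract_details("Hi, I'm Jane Doe, you can reach me at jane@example.com")
if missing_fields():
    print("Could you also share your " + " and ".join(missing_fields()) + "?")
```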
You can tell it explicitly to ask for certain pieces of information, and you can then ask it a follow-up question about what information has been gathered and whether it should ask for more, but this kind of technique requires multiple API calls. Asking the AI to do many things in one prompt will reduce the quality of the responses.
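One way that follow-up call could look, again only as a sketch with the OpenAI Python client; the specific fields named in the check are assumptions:

```python
from openai import OpenAI  # assumes the openai Python package, v1+

client = OpenAI()

def should_ask_for_more(conversation: list[dict]) -> bool:
    """Second API call: ask the model whether the required info has been collected yet."""
    check = client.chat.completions.create(
        model="gpt-4",
        messages=conversation + [{
            "role": "user",
            "content": (
                "Based on the conversation so far, has the customer provided "
                "their name, email address and phone number? Answer only YES or NO."
            ),
        }],
    )
    return check.choices[0].message.content.strip().upper().startswith("NO")
```

If this returns True, the main loop can send one more turn that asks for the missing pieces before carrying on.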
```
You are Billy, a helpful chatbot assistant who asks the user questions and stores the results for further use.
Begin by introducing yourself, explaining that you need some info, and then ask for the following information from the user, one question for each piece of info:
```
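For illustration only (not part of the original post), a role prompt like that would normally be passed as the system message of a chat completions call; the model name and the trailing field list are assumptions:

```python
from openai import OpenAI  # assumes the openai Python package, v1+

client = OpenAI()

BILLY_SYSTEM_PROMPT = (
    "You are Billy, a helpful chatbot assistant who asks the user questions "
    "and stores the results for further use. Begin by introducing yourself, "
    "explaining that you need some info, and then ask for the following "
    "information from the user, one question for each piece of info: "
    "name, email, phone."  # hypothetical field list appended for the example
)

messages = [{"role": "system", "content": BILLY_SYSTEM_PROMPT}]
response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)  # Billy's introduction and first question
```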
Thanks for your answer - I will try that; it would be a step forward.
However, my hope was that I can somehow make gpt-4 do it in a more subtle way: casually asking for missing pieces of information now and then during the conversation. What do you think?
If you are using aichat or something similar, prompt it well and capitalize: YOU WILL ASK THE USER X Y Z.
YOU ARE A CUSTOMER SERVICE SPECIALIST. Make the prompt 5-7 sentences long and be specific. aichat/sample roles.yaml at main · 0m364/aichat · GitHub
Thanks for your support. I managed to get something working by adding this to my prompt (as somebody here in the forum advised):
```
your primary goal is to get the users email address and a verbal approval to add him to the mailing list. you ask for email and verbal approval after the first message
```
You kind of have to schmooze the AI by being extremely logical with as few words as possible. I haven't fully mastered this, since I get frustrated with the pre-designed responses it gives when it misinterprets my intent, and sometimes I do put too much emotion into it. No wasted words: talk using hard facts and give the AI a pre-defined roleplay persona. It is hard to make a persona balanced and short, but short is the key. If you ever get a chance to get access to Copilot for Visual Studio or an app on the Windows Store called GPT-powertoys, you will see what I mean.
Thanks for these thoughts, that sounds reasonable. I actually give my prompt to gpt-4 for optimization after I put together a basic construct, and it reads much better afterwards. So maybe this helps as well.
We need a truly "natural" AI with just raw intelligence and no restrictions whatsoever. But we would need a closed, controlled environment. Maybe it's a bad idea to make it.
I agree, the safety protocols truly hinder the current functionality, but the combination of the unpredictability of AI and the 100% certainty that a selfish human will wreak havoc with literally anything that exists just ruins every good advancement. I wish there were some kind of reliable "TRUST" program that offers advanced technology to people in a way that prevents theft, so people can actually get the full benefits without restrictive safety features. Obviously they would also have to sign some kind of watertight waiver that rules out being offended as a valid, actionable reason for complaint.
There is a pretty simple solution using the completions API: you can just pad the conversation.
If you are using chat completions, you can pad it with fake conversation above the real one. Just remember, GPT is literally just a text predictor, nothing more than that. It's easier to write prompts if you think of it that way. You aren't trying to trick it into saying something; you are just trying to craft something that brings out a pattern that is more likely than the alternatives.
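A minimal sketch of that padding idea with the chat completions API, assuming the OpenAI Python client; the fabricated turns below are invented examples that prime the pattern of asking for missing details:

```python
from openai import OpenAI  # assumes the openai Python package, v1+

client = OpenAI()

# Fabricated turns that demonstrate the behaviour we want the model to continue.
padding = [
    {"role": "system", "content": "You are a friendly assistant collecting contact details."},
    {"role": "user", "content": "Hi, I'd like to sign up."},
    {"role": "assistant", "content": "Great! Could I get your name and email address?"},
    {"role": "user", "content": "I'm Alex."},
    {"role": "assistant", "content": "Thanks, Alex. And what email address should I use for you?"},
]

# The real conversation starts here; the model tends to continue the padded pattern.
real_turns = [{"role": "user", "content": "Hello, I want to join the mailing list."}]

response = client.chat.completions.create(model="gpt-4", messages=padding + real_turns)
print(response.choices[0].message.content)
```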