I need GPT-4 to ask the user some questions during the conversation.
I tried something like this:

```
For your records, you need the following pieces of information about the user.
```

but with no luck. Any advice?
You may be able to achieve this with advanced prompting, but there is a simpler way. Keep an array of the customer details you have collected so far and populate it as they come in (you can ask the AI whether a reply contains any details and have it return them in a standard JSON format using a template example). When you reach the point where you need to be sure you have all the details, use standard classical code to check whether any of the entries are blank; if they are, build a prompt asking the user for each missing item in turn. Some problems are best solved with classical coding.
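A minimal sketch of that idea in Python. The field names are placeholders, and the JSON string passed to `merge_extracted` stands in for the output of a separate extraction call to the model:

```python
import json

# Fields we need from the user; None means "not collected yet".
REQUIRED_FIELDS = ["name", "email", "phone"]  # hypothetical field names
collected = {field: None for field in REQUIRED_FIELDS}

def merge_extracted(reply_json: str) -> None:
    """Merge details the model extracted from the user's latest reply.

    `reply_json` is assumed to be the model's answer to a prompt like:
    "Return any of name/email/phone found in this message as JSON."
    """
    try:
        extracted = json.loads(reply_json)
    except json.JSONDecodeError:
        return  # model returned no usable JSON; keep what we have
    for field in REQUIRED_FIELDS:
        value = extracted.get(field)
        if value:
            collected[field] = value

def missing_fields() -> list:
    """Classical check: which entries are still blank?"""
    return [f for f in REQUIRED_FIELDS if not collected[f]]

def followup_prompt() -> str:
    """Build a prompt asking the user for each missing field in turn."""
    missing = missing_fields()
    if not missing:
        return ""
    return "Before we continue, could you give me your " + ", ".join(missing) + "?"

# Example: the extraction call found an email but nothing else.
merge_extracted('{"email": "user@example.com"}')
print(missing_fields())   # ['name', 'phone']
print(followup_prompt())
```

The model only has to do one small, reliable job (extraction); deciding when to ask again stays in ordinary code.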
Point is: how do I get GPT to ask for the info?
You can tell it explicitly to ask for certain pieces of information, and you can then ask it follow-up questions about what kinds of information were gained and whether it should ask for more, but this technique requires multiple API calls. Asking the AI to do many things in one prompt will reduce the quality of the responses.
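A rough sketch of what one round of that multi-call loop could look like. Here `call_model` is a stub standing in for a real chat-completions request, and its canned replies are invented for illustration:

```python
def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would call the API here.
    # This stub pretends the classification call found an email address.
    if prompt.startswith("CLASSIFY:"):
        return "email"
    return "Sure! My email is user@example.com"

def gather(known: set, needed: set) -> set:
    """One round: chat with the user, then make a second cheap call that
    classifies which of the needed details the reply contained."""
    reply = call_model("Please share your contact details.")
    found = call_model("CLASSIFY: which of " + ", ".join(sorted(needed)) +
                       " does this reply contain? " + reply)
    return known | ({found} & needed)

known = gather(set(), {"email", "phone"})
print(known)  # {'email'}
```

Keeping the classification as its own small call is the point: each request does one thing, at the cost of extra round trips.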
Any advice on how I can get the API to ask questions? As you can see in my example above, my approach didn't work. Any tricks and tips?
I think you will find this series of free short courses by OpenAI staff and DeepLearning.AI staff very useful for building prompts for your use case:
This prompt will work:

```
You are Billy, a helpful chatbot assistant who asks the user questions and stores the results for further use.
Begin by introducing yourself, explain that you need some info, and then ask the user for the following information, one question for each piece of info:
Begin the conversation:
```
Thanks for your answer, I will try that; it would be a step forward.
However, my hope was that I could somehow make GPT-4 do it in a more subtle way: casually asking for missing pieces of information now and then during the conversation. What do you think?
If you are using aichat or similar, prompt it well and capitalize the key instructions: YOU WILL ASK THE USER X Y Z.
YOU ARE A CUSTOMER SERVICE SPECIALIST. Make the prompt 5-7 sentences long and be specific … aichat/sample roles.yaml at main · 0m364/aichat · GitHub
This post of mine covers both an "interview" technique for a chatbot and calling a function with the gathered info only when it is complete: Regarding function calls and arguments - #4 by _j
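As I understand that approach, the trick is to describe a function whose required parameters are exactly the fields you want gathered; the model then tends to keep asking questions until it can fill every required argument. A hedged sketch of the schema and the completeness check (the function and field names here are made up, and the dict follows the chat completions `tools` format):

```python
# Hypothetical tool definition: the model sees these required parameters
# and should interview the user until it can supply all of them.
signup_tool = {
    "type": "function",
    "function": {
        "name": "register_user",  # hypothetical function name
        "description": "Register the user once all details are gathered.",
        "parameters": {
            "type": "object",
            "properties": {
                "name":  {"type": "string"},
                "email": {"type": "string"},
            },
            "required": ["name", "email"],
        },
    },
}

def ready_to_call(arguments: dict) -> bool:
    """Only invoke the real function when every required field is present."""
    required = signup_tool["function"]["parameters"]["required"]
    return all(arguments.get(field) for field in required)

print(ready_to_call({"name": "Ada"}))                              # False
print(ready_to_call({"name": "Ada", "email": "ada@example.com"}))  # True
```

The guard in `ready_to_call` is classical code again: even if the model emits a premature function call, you only act on it once the arguments are complete.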
Yes, this is doable.
Can you share the full details of exactly what you are trying to achieve?
Plus an example interaction/conversation?
I can then outline a working solution.
Thanks for your support. I managed to get something working by adding this to my prompt (as somebody here in the forum advised):

```
Your primary goal is to get the user's email address and verbal approval to add him to the mailing list. You ask for the email and verbal approval after the first message.
```
You can also add a couple of example conversations to show GPT-4 how you want it to structure the conversation flow.
Example convo 1
Example convo 2
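For instance, the example conversations can be appended to the system prompt as plain text. A minimal sketch, with dialogue invented purely for illustration:

```python
# Invented example dialogues showing the desired question-asking flow.
EXAMPLES = """\
Example conversation 1:
Assistant: Hi! Before we start, could I get your email address?
User: Sure, it's user@example.com
Assistant: Thanks! And do I have your approval to add you to the mailing list?

Example conversation 2:
Assistant: Welcome back! I still need your email address to continue.
User: It's someone@example.org
"""

system_prompt = (
    "Your primary goal is to get the user's email address and verbal "
    "approval to add them to the mailing list.\n\n"
    "Structure the conversation like these examples:\n" + EXAMPLES
)
print(system_prompt)
```

Two or three short examples are usually enough to establish the pattern without burning many tokens.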
You kinda have to schmooze the AI by being extremely logical with as few words as possible. I haven't fully mastered this, since I get frustrated with the pre-designed responses it gives when it misinterprets my intent, and sometimes I put too much emotion into it. Use no wasted words, stick to hard facts, and give the AI a pre-written roleplay persona. It is hard to make a persona balanced and short, but short is the key. If you ever get access to Copilot for Visual Studio or an app on the Windows Store called GPT-powertoys, you will see what I mean.
Great solution. I can’t see a reason why someone wouldn’t just use simple programming logic for this sort of task.
Fewer tokens, less time, and less room for error.
Thanks for these thoughts; sounds reasonable. I actually give my prompt to GPT-4 for optimization after I draft a basic version, and it sounds much better afterwards, so maybe this helps as well.
We need a truly 'natural' AI with just raw intelligence and no restrictions whatsoever, but we would need a closed, controlled environment. Maybe it's a bad idea to make it.
I agree, the safety protocols truly hinder the current functionality, but the combination of the unpredictability of AI and the 100% chance that a selfish human will wreak havoc with literally anything that exists just ruins every good advancement. I wish there were some kind of reliable "TRUST" program that offers advanced technology to people in a way that prevents theft, so people could actually get the full benefits without restrictive safety features. Obviously they would also have to sign some kind of watertight waiver ruling out being offended as a valid, actionable complaint.
There is a pretty simple solution using the completions API: you can just pad the conversation.
If you are using chat completions, you can pad it with a fake conversation above the real one. Just remember, GPT is literally just a text predictor, nothing more. It's easier to write prompts if you think of it that way: you aren't trying to trick it into saying something, you are crafting context that makes the pattern you want more likely than other things.
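A sketch of that padding idea for chat completions. The fabricated turns (names and values are invented) make it look as if the assistant has already been interviewing the user, so it tends to continue that pattern:

```python
# Fabricated prior turns establish the question-asking pattern;
# only the last user message is real.
messages = [
    {"role": "system", "content": "You are Billy, a helpful assistant."},
    {"role": "assistant", "content": "Hi! First, what's your name?"},
    {"role": "user", "content": "I'm Ada."},
    {"role": "assistant", "content": "Thanks, Ada! What's your email address?"},
    # The real user's latest message goes last:
    {"role": "user", "content": "ada@example.com"},
]
# This `messages` list would then be passed to the chat completions
# endpoint; the model continues the pattern it sees above.
print(len(messages))  # 5
```

Because the model is just continuing the text, seeded assistant turns that ask questions are often more effective than an instruction telling it to ask questions.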