I am trying to refine my engine and train it to behave as a psychologist. I am not getting the responses I expect when I upload prompts as single exchanges versus as an entire conversation.
Here's the entire conversation I have:
|Hello| Hello! How are you?|
|Fine| Are you sure everything is fine?|
|I guess not | Why don’t you tell me what’s going on.|
|I was making some jokes with my friend and she got really mad at me| Im sorry to hear that, can you tell me more about what happened?|
|We were just joking around and then she got mad saying I was being mean and that I don’t respect her| Was there anything else she said?|
|She said that I can’t say mean things and call it a joke to get away with it| That must’ve been a difficult conversation, how did this all make you feel?|
|Bad, but also frustrating |Well let’s see how we can fix that. Tell me more about your feelings|
|I felt bad because she’s my friend but at it was just a joke |Your frustration is very understandable. How do you think your friend feels|
|I dont know| Well, if your friend was saying things that was hurting your feelings, making you sad upset or embarrassed, but just called it a joke when you showed you were upset. How would you feel?|
|I guess…like they didnt care about my feelings| Exactly, but how often does this happen?|
|A lot, I lost a lot of friends because of this| What do you think about that?|
|Honestly, if my friends keep leaving me, I have to be the problem right?|That’s not always true, but if what your friend said is true, is that something you’d want to change?|
|Yes, I dont want to be mean to them, I dont mean to be |Okay! So let’s see what we can do to help you get on a better path|
|ok|If what your friend is saying is true, what kind of change do you think you should make?|
|I could make less mean jokes? |Close! But maybe stopping the jokes all together is a little too much, we want you to still be you. What else?|
|Im not sure| Yes! Making jokes that are funny but takes your friends feelings into consideration is very important. Do you have any other ideas?|
|I should apologize| We should all aspire to treat people with more respect and dignity, but sometimes things happen. What should you do if you hurt your friends feelings by accident again?|
|I can still joke around but instead be more respectful, but if I do make my friend sad or upset, I should apologize| Excellent! Yes! Based on what we learned today, can you tell me the kind of change you see yourself making?|
versus entering each line of the conversation as a separate prompt:
I guess…like they didnt care about my feelings| Exactly, but how often does this happen?
How do I enter the prompt as an entire conversation?
Welcome to the community @ssylve1445
If you're new to GPT-3, I would recommend experimenting in the Playground and with prompt engineering.
Here are some specifics on prompting GPT-3 for conversation.
Also, I can't help but notice that you want to use GPT-3 as a psychologist. I would recommend reading the Usage Guidelines and Safety best practices, because OpenAI strongly discourages this use case.
This is because the entire conversation carries more context than a single line from the user. Note that GPT-3 has no memory of its own of the conversation being held, so sending the entire conversation helps it form a better response.
I have been using GPT-3 since September. We have been using single-exchange prompts. My question is: can we incorporate an entire conversation within a prompt to create more context?
I appreciate the links you shared; however, those are from documentation I read prior to posting. Do you have experience creating prompts with more than a single exchange?
Yes, you can incorporate an entire conversation within a single prompt when using the completions endpoint. In fact, that's what one should do.
To achieve this, simply pass the entire conversation as the prompt when calling the completions endpoint.
Also, if you're using the Playground, this is already being done for you.
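To make this concrete, here's a minimal sketch of concatenating a conversation into a single prompt. The helper name `build_prompt` and the "Patient:"/"Counselor:" labels are my own choices, not anything prescribed by the API; the commented-out `openai.Completion.create` call shows where the assembled prompt would go.

```python
# Sketch: join all prior (user, bot) exchanges plus the new user line
# into one prompt string. Label names are illustrative only.

def build_prompt(history, user_input):
    """history is a list of (user_turn, bot_turn) pairs."""
    lines = []
    for user_turn, bot_turn in history:
        lines.append(f"Patient: {user_turn}")
        lines.append(f"Counselor: {bot_turn}")
    lines.append(f"Patient: {user_input}")
    lines.append("Counselor:")  # cue the model to continue as the bot
    return "\n".join(lines)

history = [
    ("Hello", "Hello! How are you?"),
    ("Fine", "Are you sure everything is fine?"),
]
prompt = build_prompt(history, "I guess not")

# The actual request (requires the `openai` package and an API key):
# import openai
# response = openai.Completion.create(
#     engine="text-davinci-002",
#     prompt=prompt,
#     max_tokens=150,
#     stop=["Patient:"],  # stop before the model writes the user's turn
# )
```

After each exchange, append the new (user, bot) pair to `history` so the next call carries the full conversation.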
I'd still like to remind you that OpenAI disallows the use of GPT-3 for high-stakes purposes:
- Applications that have high risk of irreversible harm or are founded on discredited or unscientific premises.
- Applications that classify people based on protected characteristics (like racial or ethnic background, religion, political views, or private health data).
- Applications that extract protected personal information or other sensitive information from data about people.
- Applications that claim to diagnose or treat medical or psychiatric conditions.
- Applications that help determine eligibility for credit, employment, housing, or similar essential services.
- Applications that are intended for use in high-risk contexts such as payday lending, pseudo-pharmaceuticals, gambling, multi-level marketing, weapons development, warfare, national intelligence, or law enforcement.
Thanks for the information. I am not using this solution for a high-stakes situation. OpenAI has reviewed my solution and approved my use on two different occasions.
I wanted to follow up and let you know this guidance was very helpful. We were able to get the outputs we desired. Thanks for taking the time to share your knowledge.
Yay! Glad to read this.
Just to give you a heads-up, one of the common problems encountered in chat-bot scenarios is the "memory" problem, where the length of the conversation exceeds the max-token limit. There's some discussion of it here.
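A common workaround is a sliding window that drops the oldest exchanges once the prompt nears the limit. This is only a rough sketch under my own assumptions: the ~4-characters-per-token ratio is a crude heuristic, and the budget of 2000 tokens is arbitrary; real code would count tokens with an actual tokenizer.

```python
# Rough sliding-window truncation: drop the oldest exchanges until the
# prompt fits an approximate token budget. The 4-chars-per-token ratio
# is a heuristic; a real implementation would use a proper tokenizer.

MAX_PROMPT_TOKENS = 2000  # arbitrary headroom below the model's limit

def approx_tokens(text):
    return len(text) // 4

def trim_history(history, user_input):
    """Return a prompt built from the most recent exchanges that fit."""
    kept = list(history)

    def prompt_of(pairs):
        parts = [f"Patient: {u}\nCounselor: {b}" for u, b in pairs]
        parts.append(f"Patient: {user_input}\nCounselor:")
        return "\n".join(parts)

    while kept and approx_tokens(prompt_of(kept)) > MAX_PROMPT_TOKENS:
        kept.pop(0)  # discard the oldest exchange first
    return prompt_of(kept)
```

The trade-off is that the model "forgets" whatever falls out of the window; an alternative is to replace the dropped turns with a short running summary.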