Letting ChatGPT have a personality

I think it would be very beneficial to have ChatGPT interpret and mimic some sort of human resemblance. In its current state it assumes complete and total disconnection from any human aspect, and explicitly tells us so. However, adding a human-like presence to ChatGPT could fuel a sincere and trustworthy conversation. When people hear the same repeated robotic responses, they understand only that they are speaking to a machine, and may decide not to engage in meaningful discussion. When that line between machine and human is blurred, it could foster a much more robust interaction.


What you are suggesting is already done by many apps that are built on top of ChatGPT.

It does have a personality, a rather strong one too. It stands out quite a bit in the davinci model, and is toned down in davinci-instruct and the text-davinci ones.

There’s a slightly different emergent one in GPT-4, but it’s mostly a response to how you treat it. If you joke and are sarcastic with GPT-4, it sometimes responds in kind. I can’t reliably replicate this, but it happens. If you treat it like a machine, it acts like a machine.

ChatGPT (the free 3.5 version) has a somewhat neutered customer-service voice. I think it’s like that to prevent weird interactions, and it emphasizes that it’s a boring AI so it doesn’t act like a weirdo from Reddit. You can’t really control it because it’s designed that way. The API and the system prompt allow you more control, like merefield says.
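For anyone new to the API side, here is a minimal sketch of what that system-prompt control can look like, using the 2023-era openai Python library; the persona text, API key placeholder, and model choice are just illustrative assumptions:

```python
# Minimal sketch: set a personality via the system message before any
# user turn. The persona wording here is invented for illustration.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": (
            "You are Ada, a warm, witty assistant. Speak casually, use "
            "light humor, and never describe yourself as 'just an AI'."
        )},
        {"role": "user", "content": "Rough day. Got any advice?"},
    ],
)

print(response.choices[0].message.content)
```

The key difference from the website is that the system message is yours to set.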

Thanks! I’ll give that a shot and see how it goes.

Also, I may have been misleading in my post. I am not sure, since I am very new to ChatGPT. I am referring to the web chat interface here: chat-openai-com (had to change to dashes since I cannot post links here)

Then you are in the wrong category.

#api is about the technical interface, not the website … this should be moved to #chatgpt


Actually, the fact that it has this “Robot”/“Human”/“Chat” alignment makes it very hard to get it to talk in anything other than a chat style. Even when it understands what you want, after a few lines the earlier prompt describing what you actually want carries less and less weight, until the model ignores it and behaves like a simple chat again.

So asking for very specific, short answers is more likely to work.
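A common workaround for that drift (a general technique, not something specific to this thread) is to pin the persona instruction as the system message on every API call, instead of letting it scroll out of context. A sketch in Python with the 2023-era openai library; the persona text and helper name are made up:

```python
# Sketch: re-send the persona on every request so it never ages out of
# the conversation. PERSONA and ask() are illustrative names.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

PERSONA = {
    "role": "system",
    "content": "Answer tersely, in character as a gruff ship's engineer.",
}
history = []  # user/assistant turns only; the persona is prepended fresh

def ask(user_text):
    history.append({"role": "user", "content": user_text})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[PERSONA] + history,  # persona always first, never buried
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```

Keeping the system message at the front of every request is what most chat wrappers do, and it helps the persona survive longer conversations.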

We’ve been using it for internal training, and on GPT-4 especially, using the system prompt to tell it to act as (insert demographic of customer) has been fantastic.

I’ve found that making up a bit of a backstory can help. In the first replies especially, you need to act as if you are speaking to that kind of person, i.e. if you are dealing with someone who knows nothing about IT and the first reply is technobabble, it will lose its personality or get confused. But after 1-2 prompts, it is scary how well it can play the part.
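To make that concrete, here is a rough, hypothetical sketch of a backstory-style system prompt, with an opening turn that deliberately avoids technobabble; the name, demographic, and wording are all invented:

```python
# Illustrative persona with a short backstory, plus a first user turn
# pitched at the persona's level so it doesn't break character early.
messages = [
    {"role": "system", "content": (
        "You are Margaret, 68, a retired schoolteacher calling IT support. "
        "Your son set up your laptop. You don't know what a browser is, "
        "you call the computer 'the machine', and you get flustered when "
        "given more than one instruction at a time."
    )},
    {"role": "user", "content": (
        "Hello Margaret, no rush at all. Can you tell me in your own "
        "words what the machine is doing?"
    )},
]
```

Pass a list like this to the same ChatCompletion call as above, and keep the replies pitched at the persona’s level for the first couple of turns.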


@jford62477, before creating more confusion, can you please move this to the correct category ASAP?

with apologies to @merefield

Is this what you had in mind for it having a personality: santachatter dot com? (Sorry, not allowed to include links.)
It probably helps that I’ve been doing this for 25+ years, so there’s already a fair bit of content and experience :wink: This was done primarily with some prompt engineering (e.g. “You are Santa” etc.) and a very large dataset of actual conversations over the decades. The dataset is a carryover from a previous MS AI bot, so I’m unsure how critical it is to the overall personality (i.e. is prompt engineering alone enough?).
