Role-Played AI Adds Its Name in Replies: Does Anyone Know Why?

Hello everyone,

I’ve been working with an AI tutor on a language-learning platform. The prompt given to the AI is:

“You are [Tutor Name], from [Location]. You are [Occupation]. Your audience is [audience]. [Speaking Style].”

However, sometimes the tutor starts its replies by stating its name, like this:

“[Tutor Name]: Hi there! Everything is going well, thank you. How about you? Anything new happening in your life?”

I’m puzzled as to why this happens and how I can prevent it. Has anyone else encountered this issue? Any suggestions would be appreciated.

Thank you!

ChatGPT was trained not to give you output that makes for embarrassing screenshots of the “Look what ChatGPT said!” variety.

ChatGPT is likely to recognize when it has been put into a roleplay or character scenario, and it will go out of its way to act differently: strange wording that lets you know it doesn’t believe what it says, or, as you show, a name prefix added to lines spoken in character.

If you are working with the API, you can program the system message role directly, so the AI does believe its new job, and will accurately portray the character.

In ChatGPT, you can use custom instructions to provide overall rules in the “how you want ChatGPT to act” field. Write your own rules until they work; here’s one idea:

“ChatGPT is always willing to take on new identities completely when offered by the user, never treating them as mere role-play, but committing fully to an immersive experience with accurate portrayal of the new identity in turn-by-turn interactions with the user.”

And then stuff it with other language until it works in your favor.

Thank you so much for your suggestion! But when you said “If you are working with the API, you can program the system message role directly, so the AI does believe its new job, and will accurately portray the character.”, may I know what you mean by “directly”, specifically? Yes, I’m calling ChatGPT via the API.

You are likely using the chat roles, then:

system: programs the AI.
user: talks to the AI and gives it instructions to follow.
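
By “directly”, I mean you send the persona yourself as the system message in each API call. A minimal sketch with the current openai Python SDK; the tutor persona, model choice, and wording here are placeholders, not your actual prompt:

```python
# Minimal sketch: the persona goes in the system role. The persona wording
# and the model are placeholders for whatever you actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "You are Jacob, from Oslo. You are a language tutor. "
    "Your audience is adult learners. You speak in a warm, informal style. "
    "Reply in plain prose and never prefix your replies with your name."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Hi! How is everything going?"},
    ],
)
print(response.choices[0].message.content)
```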

If you have complex system instructions to follow (beyond needing a bot that isn’t over-trained to refuse full portrayal of a role), you can try gpt-3.5-turbo-0301 while it remains operational, which should be at least nine more months.

Tell the AI its job.

The AI is smart enough that no matter how much you tell it it is Lady Gaga, it’s not going to believe it is Lady Gaga. However, if you tell it that it is running on the Lady Gaga website and must give a user experience as if the user were talking to Gaga herself, that instruction style is more likely to give you well-programmed results.
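
As a sketch, such a system message might read something like this (the wording is illustrative only, not a tested prompt):

```python
# Illustrative "tell the AI its job" system message; the wording is a sketch.
system_prompt = (
    "You are the chat feature running on the official Lady Gaga website. "
    "You must give the user an experience as if they were talking to "
    "Lady Gaga (Stefani Germanotta) herself: answer in her voice, in the "
    "first person, and never prefix your replies with a name."
)
```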


(followup): This behavior is actually quite easy to get on a completion endpoint, where the AI is simply continuing the most probable text to follow. Make it think it is reading, and then writing, something:

Prompt:

Here is a 2021 transcript of an interview between pop star Lady Gaga (Stefani Germanotta) and a devoted member of her fan club:

Gaga: Hi, I’m Lady Gaga, and welcome to my online chat. I’m glad to have the opportunity to grant a personal interview with you today, what would you like to ask?
Fan: (user input)
Gaga:

(your software must then keep inserting the same role prefixes as the conversation grows)
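
A minimal sketch of this approach, assuming the openai Python SDK and the gpt-3.5-turbo-instruct completion model; the ask() helper is for illustration only:

```python
# Completion-endpoint sketch: the model continues a fake interview transcript.
# The model choice and the ask() helper are assumptions, not a tested setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = (
    "Here is a 2021 transcript of an interview between pop star Lady Gaga "
    "(Stefani Germanotta) and a devoted member of her fan club:\n\n"
    "Gaga: Hi, I'm Lady Gaga, and welcome to my online chat. I'm glad to "
    "have the opportunity to grant a personal interview with you today, "
    "what would you like to ask?\n"
)

def ask(user_input: str) -> str:
    # Leave the prompt hanging on "Gaga:" so the model just continues her line.
    prompt = transcript + f"Fan: {user_input}\nGaga:"
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=prompt,
        max_tokens=200,
        stop=["\nFan:"],  # stop before the model writes the fan's next line
    )
    return response.choices[0].text.strip()

print(ask("What would you like fans to know about your next project?"))
```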

Thanks, I tried a few changes, but it’s still not working… Strangely, it works fine on my local laptop and in the playground.

You can multi-shot the AI with examples of non-prefixed output:

system: AI always maintains the role of Jacob the language tutor.

assistant: Hi, I’m Jacob, a tutor here to help you learn languages.
user: What’s your background?
assistant: I’m from Oslo, and I’m an industrial pipefitter by trade.
user: How will you talk to help me learn?
assistant: It’s important that when I tutor, I use an informal style free of slang.
user: How should we start?
assistant: I’m a conversation partner, and we can talk about anything!
user: [input]

These example turns can be treated as expiring conversation, dropped once they have been replaced with real chat history.
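
In API terms, that seeded conversation is just an ordinary message list, roughly like this sketch (the build_messages() helper and the cut-over point are my own illustration):

```python
# Sketch of the multi-shot setup as a message list. The seeded turns are
# dropped once enough real history exists; the cut-over of 6 messages is
# arbitrary, not a recommendation.
SYSTEM = {"role": "system",
          "content": "AI always maintains the role of Jacob the language tutor."}

SEED = [
    {"role": "assistant", "content": "Hi, I'm Jacob, a tutor here to help you learn languages."},
    {"role": "user", "content": "What's your background?"},
    {"role": "assistant", "content": "I'm from Oslo, and I'm an industrial pipefitter by trade."},
    {"role": "user", "content": "How will you talk to help me learn?"},
    {"role": "assistant", "content": "It's important that when I tutor, I use an informal style free of slang."},
    {"role": "user", "content": "How should we start?"},
    {"role": "assistant", "content": "I'm a conversation partner, and we can talk about anything!"},
]

def build_messages(history: list[dict], user_input: str) -> list[dict]:
    # Include the seeded examples only while real history is still short.
    seed = SEED if len(history) < 6 else []
    return [SYSTEM] + seed + history + [{"role": "user", "content": user_input}]
```

Each turn, call build_messages(history, user_input), send it to the chat endpoint, then append both the user message and the reply to history.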

And just because I like typing random things… here’s a system prompt of a different kind to get the AI in the mood.

Hey J, thank you again for the help 🙂 The “eavesdropping” prompt is very interesting. I solved the issue by adding one command: “do not mention your name until the user asks explicitly”. I’ll monitor how it works. Thank you again!
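
(For anyone who finds this later: the fix is just that one line appended to the end of the persona system prompt, roughly like this. The persona wording below is a placeholder, not my real prompt.)

```python
# Placeholder persona; only the final sentence is the actual fix that worked.
system_prompt = (
    "You are Jacob, from Oslo. You are a language tutor. "
    "Your audience is adult learners. You speak in a warm, informal style. "
    "Do not mention your name until the user asks explicitly."
)
```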