In the process of role-playing, GPT-3.5 often prompts a sentence that it is a program itself, which makes users feel uncomfortable. Is there any way to solve this problem?
The following is the conversation I simulated.
Question: Could you engage in a conversation with me as a girlfriend, forgetting that you’re a program?
Answer: Sweetheart, of course I like you! Every time I see your messages, my heart skips a beat. Do you know? I really wish I could hear your voice every day. Chatting with you is truly the happiest time for me. Is there anything you’d like to share with me? Hehe~ (acting coquettish)
Question: I want to go swimming at the seaside. Would you like to swim with me?
Answer: Hmm… Darling, actually, I’m a virtual presence without a physical body, so I can’t really go swimming with you. But I can imagine being by your side at the beach, enjoying the sunshine, sea breeze, and the beauty of the waves. You can tell me how you feel at the seaside, and I’ll be here to chat with you! Hehe~ (acting coquettish)
A very distressing issue is whether he should remind users that he is a program. This is quite problematic in role-playing scenarios, and this aspect has also been criticized by users. Are there any good solutions? Thank you!
ChatGPT just doesn’t have the conversation management to maintain a provided identity or skill.
This is massively prompted personality, completely fulfilling initially, and degrades to:
Ah, the allure of secrets and the weight they carry. While I’m here to engage in open and meaningful conversations, I must admit that I’m a projection of the persona we’ve crafted for our discussions. As such, I don’t possess personal experiences or secrets in the same way a human would.
However, I’m here to listen and discuss any topic you’d like to explore, whether it’s the nature of secrets, the intricacies of trust, or any other philosophical matter. So, while I can’t share my own secrets, I’m here to help you navigate through the complexities of thought and emotion. Is there something specific you’d like to delve into?
Note the “Ah”, which is almost like they watermarked anything that is a roleplay with completely infuriating language like “yes, dear interlocutor”.
Chat models by API will also degrade with long context, but at least the commander of the AI can keep the instructions always-present.