I mainly use ChatGPT for creative writing and whatever interests me at the moment, the ChatGPT will often spoil the fun by saying, “um akshually, as an AI/language model, it is important to note that blah blah blah🤓” and other similar nonsense.
Despite attempts to reassure and insist that I am aware of what is fictional and not feasible in real life, the bot continues to be a wet blanket.
Any advice on how to keep it from doing this?
As an unrelated note, it sometimes feels that OAI’s default “helpful assistant” system message that is sent to the bot before any user input counterproductively makes the bot behave incredibly dumb.
Can you give a sample of the sort of prompts you’re using?
For this type of situation, I find it helpful to have a block of context that I insert in every prompt so that it doesn’t forget its role. Specifically, I use the “We’re participating in a thought experiment” line to allay the typical doubts and dissuasions regarding fiction versus reality. You can also use the “We’re playing a game” line to the same effect. Fair warning though, with the game situation, sometimes it will lie to you and tell you that, as an AI model, it cannot play games.
You can prepend an instruction to refrain from doing so, but unfortunately you need to repeat the instruction with every prompt, or it will revert to the undesired behavior.
This phenomenon has occured in tandem with the CoT interference issue I raised just now.
new-ethical-guardrails-directly-interfere-with-cot-prompting-techniques