I would not consider that inappropriate content either
But I am no expert on that
Some related discussion: The “system” role - How it influences the chat behavior - #44 by krisbian
I have some posts near the bottom of that thread, but I have also changed some things since then.
I basically have a bot in a game; it is fed server events and other people's messages. Just short messages, max 90 tokens per message.
Now I am using something along these lines:
System: Very direct instructions; the shorter and clearer they are, the better it seems to work.
User: Any kind of info about some topics, context, or anything else the bot should use. The current date and time for example, just any info.
Assistant: A pre-crafted initial bot response. I find this helps give GPT an example of how it should behave. For example, if you are making it play a specific character, GPT might sometimes lose that and respond as someone else. In my case it would sometimes start behaving like plain text completion and talk as multiple people in one response.
If I give it a first example and also start all messages with "name: ", that basically eliminates it.
You can also just go to the Playground, give it the same system and user messages, switch to the better GPT-4, and keep generating your first message until you find one that looks like how you want the bot to behave. Then you can use that with your GPT-3.5 API calls, and it seems to improve the responses for me.
Otherwise the first assistant message might put the bot in the wrong direction and it will continue that way. With this you "show it the way".
Seems to me that there is quite a lot of “attention” on the assistant message.
I had quite different system instructions in the other discussion thread
I switched to much more human-sounding text instead of something like one numbered instruction per line.
Basically I put my old instructions into GPT-4 and prompted it until they read like normal human text that seemed ok to me.
e.g.
system: “Please observe and comply with the following directives: You’re now stepping into the shoes of name. As name you’re never straying into bland AI monotony or servile assistant behavior. …” (there is a lot more text, I have around 256 tokens here, but the shorter the better; keep it to really just direct behavior instructions)
user: “About the game: The game is about …” (just any kind of information you want the bot to use)
Assistant: “name: Hey, I am name …” (just the first response to steer GPT; if you want short messages, you want to keep this one short as well)
User: “ServerEvent: Player player1 joined”
User: “player1: Hello name!”
Assistant: “name: Welcome player1! How are …”
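Here is a minimal sketch of how that can be assembled into an API call, assuming the pre-1.0 openai Python package. The message texts are just the placeholders from the example above, and ask_bot / history are illustrative names, not my exact code:

```python
import openai  # openai.api_key is assumed to be set elsewhere

# The first three messages are always sent: system instructions,
# game info, and the pre-crafted first assistant reply.
FIXED_PREFIX = [
    {"role": "system", "content": "Please observe and comply with the following directives: ..."},
    {"role": "user", "content": "About the game: The game is about ..."},
    {"role": "assistant", "content": "name: Hey, I am name ..."},
]

def ask_bot(history):
    # history is the rolling list of recent messages, each already
    # prefixed with "speaker: " (e.g. "player1: Hello name!").
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-16k-0613",
        messages=FIXED_PREFIX + history,
        max_tokens=90,  # keep the bot's replies short
    )
    return response["choices"][0]["message"]["content"]
```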
The first three messages are always sent.
After that I keep 70 such messages in context and remove the oldest ones as new ones come in (maybe at some point I might implement some summarization, but I don't really need it).
Users can only send a max of 128 characters per message.
The assistant is capped at 90 tokens, since I need short messages only (it's not easy to make it produce only short messages).
You can strip the name from the front of the message when it's displayed to end users, but you want to keep it in the context (you can also send names via the message's name property, but it allows only limited special characters, and I'm not sure if that would work better).
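A rough sketch of what I mean by the rolling window and the name stripping; append_message and display_text are just illustrative helpers, not my actual code:

```python
MAX_HISTORY = 70  # rolling messages kept in addition to the three fixed ones

history = []

def append_message(role, speaker, text):
    # Keep the "speaker: " prefix in the context sent to the model...
    history.append({"role": role, "content": f"{speaker}: {text}"})
    # ...and drop the oldest messages once the window is full.
    while len(history) > MAX_HISTORY:
        history.pop(0)

def display_text(bot_reply):
    # Strip the leading "name: " before showing the reply to players.
    return bot_reply.split(": ", 1)[-1]
```

So a server event goes in as append_message("user", "ServerEvent", "Player player1 joined"), and the bot's own replies go back in with its name prefix so the context stays consistent.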
But if I increased the number of messages I keep to around 100, at some point it would start losing the instructions and default back to "I am an AI language model, I can't do anything" (I'm not sure at what token size it starts happening).
I have to say I haven't yet tested increasing the message count since some of the changes I made to the instructions.
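If you want to pin down the token size where it starts drifting, you can roughly measure the context with tiktoken. This only counts the message contents; the chat format adds a small per-message overhead that the sketch ignores:

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def context_tokens(messages):
    # Rough total: content tokens only, ignoring the few extra tokens
    # the chat format adds per message for roles and separators.
    return sum(len(enc.encode(m["content"])) for m in messages)
```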
But so far this is working the best for me for a bot that doesn't act like an AI, in a room where server events are fed in and multiple people can interact with it.
Using gpt-3.5-turbo-16k-0613
0613 has greatly improved how it follows the instructions for me
Exactly what I needed, although it still loses it at larger context lengths / many messages.