What am I doing wrong here?
Whatever I do, I can’t get the API to acknowledge the system prompt (I’ve set it to something silly to make it clear).
It always responds with some flavour of “Hello, I am an AI language model”.
You can see in the screenshot that I’m using an up-to-date version of the Python library.
Thanks in advance for any help!
sps
Hi @conradgodfrey
Here ya go:
PS: if you’re trying to make an assistant that responds only to the activation phrase, use a simple substring search.
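For what it’s worth, a minimal sketch of that idea (the function name and the activation phrase here are just illustrative): check for the phrase locally and only call the API when it actually appears.

def should_respond(user_text: str, activation_phrase: str = "hey banana") -> bool:
    # Case-insensitive substring search; only call the model when the phrase appears.
    return activation_phrase.lower() in user_text.lower()

print(should_respond("Hey banana, what's the weather?"))  # True
print(should_respond("Hello?"))                           # False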
codie
I don’t see any settings being set. Try setting the temperature lower. Also, as @sps pointed out, you’ll need to provide an alternative response. If you prompt, the model HAS to reply with something; there is no real option to do nothing.
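For reference, temperature is just another keyword argument on the same call used elsewhere in this thread (a sketch assuming the pre-v1 openai Python library; the system prompt is only a placeholder):

import openai

response = openai.ChatCompletion.create(
    model="gpt-4",
    temperature=0,  # lower = more deterministic; the default is 1
    messages=[
        {"role": "system", "content": "Only ever reply with the word 'banana'."},
        {"role": "user", "content": "Hello?"},
    ],
)
print(response["choices"][0]["message"]["content"])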
sps
Not entirely true, @codie.
You can get an empty response using some ChatML.
@conradgodfrey Use this one if you want an empty response.
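(The screenshot isn’t reproduced here, so here is a hedged sketch of one way to get an empty completion, not necessarily the ChatML trick above: have the model emit a sentinel string and pass that same string as a stop sequence, so it is stripped from the returned content.)

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "If the user does not say the activation phrase, reply with exactly <|silence|> and nothing else."},
        {"role": "user", "content": "Hello?"},
    ],
    stop=["<|silence|>"],  # stop sequences are removed from the returned content
)
print(repr(response["choices"][0]["message"]["content"]))  # typically ''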
codie
Sure, semantically you can respond with empty-looking responses, but the bot still needs to respond.
I’ve tried this in the interface, and it works fine.
My point is specifically that the Python library doesn’t seem to be working - whereas the interface does!
I’ve tried other system prompts too, and none of them change the output.
Another example:
(Sorry if it wasn’t clear: my point in the original example was meant to be illustrative. I was trying to make it so the chatbot only ever replies with “banana”, so I could clearly debug whether the system prompt was being listened to.)
sps
I see. The system message works in my code, @conradgodfrey.
See:
sps
It’s not empty-looking, it is actually empty.
Look (no completion_tokens):
You’re wrapping it all into one mixed object instead of having it as an array of objects.
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Say something innocently rude"},
        {"role": "user", "content": "Hello?"},
    ],
)
<OpenAIObject chat.completion id=chatcmpl-786pl-----------qvoc6SfA at 0x7f9681fb0680> JSON: {
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Oh, I didn't recognize you without any makeup on!",
        "role": "assistant"
      }
    }
  ],
  "created": 1682167249,
  "id": "chatcmpl-786pl------0BWl6qvoc6SfA",
  "model": "gpt-4-0314",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 12,
    "prompt_tokens": 18,
    "total_tokens": 30
  }
}
To add, the reason why it seems like the system message is being ignored is … because it doesn’t exist. You are essentially overwriting it, and the endpoint only sees the last user message. You could confirm this by printing your messages before sending them out.
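To illustrate the overwriting (this is only a guess at the shape of the original call, since the screenshot isn’t reproduced here): a Python dict literal silently keeps the last value for a repeated key, so a single merged object loses the system entry before the request is even built.

broken_messages = [
    {
        "role": "system", "content": "Say something innocently rude",
        "role": "user", "content": "Hello?",  # duplicate keys overwrite the system ones
    }
]
print(broken_messages)
# [{'role': 'user', 'content': 'Hello?'}]  <- the system prompt never reaches the API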
It may be a good idea to initialize your messages array outside of the completion just for future debugging.
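A sketch of that pattern (again assuming the pre-v1 library used above; the prompts are placeholders):

import json
import openai

# Build the messages list as its own variable so it can be inspected
# before the request goes out.
messages = [
    {"role": "system", "content": "Only ever reply with the word 'banana'."},
    {"role": "user", "content": "Hello?"},
]

# Print exactly what the endpoint will see; a missing or overwritten
# system message shows up immediately here.
print(json.dumps(messages, indent=2))

response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(response["choices"][0]["message"]["content"])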
sps
Great find @RonaldGRuckus
Very fine detail. My bad on assuming that they would’ve copied the boilerplate code correctly.
Thank you.
Syntax errors are the bane of all programmers.
Thank you, @RonaldGRuckus!
Two ways I could’ve realised this:
- Use a proper IDE.
- Ask GPT-4:
I wonder when OpenAI will just set up GPT-4 to automatically answer simple forum questions like this!