Function calls getting no responses

I am testing out the functions functionality. The function:

[{'name': 'set_keywords',
  'description': 'Save the openai keywords of the chat to the db',
  'parameters': {'type': 'object',
   'properties': {'keywords': {'type': 'array',
     'items': {'type': 'string'},
     'description': 'list of keywords generated by conversation in this chat.'}}}}]
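One thing worth adding to the schema: as written, `keywords` is optional, so even when the model does call the function it may send empty arguments. JSON Schema's `required` list (which these function schemas honor) tightens that. A sketch of the same schema with that single addition:

```python
# Same schema as above, plus a "required" list so the model must
# supply the keywords argument when it calls the function.
functions = [{
    'name': 'set_keywords',
    'description': 'Save the openai keywords of the chat to the db',
    'parameters': {
        'type': 'object',
        'properties': {
            'keywords': {
                'type': 'array',
                'items': {'type': 'string'},
                'description': 'list of keywords generated by conversation in this chat.',
            },
        },
        'required': ['keywords'],  # without this, keywords may be omitted
    },
}]
```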

The messages:

[{'role': 'system', 'content': 'You are a helpful but funny assistant'},
 {'role': 'user', 'content': 'tell me a joke'}]

The call:

resp = openai.ChatCompletion.create(
    model='gpt-3.5-turbo-0613',
    messages=messages,
    functions=functions,
)

And the response:

<OpenAIObject chat.completion id=chatcmpl-7VRip4BiLmYJJUJLl44aBNWa3Ul1D at 0x1236e4040> JSON: {
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Sure, here's a classic one for you:\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything!",
        "role": "assistant"
      }
    }
  ],
  "created": 1687729087,
  "id": "chatcmpl-7VRip4BiLmYJJUJLl44aBNWa3Ul1D",
  "model": "gpt-3.5-turbo-0613",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 24,
    "prompt_tokens": 74,
    "total_tokens": 98
  }
}
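For contrast: when the model does decide to call the function, `finish_reason` is `"function_call"` and the arguments arrive as a JSON-encoded string in `message["function_call"]["arguments"]`, which you parse yourself. A sketch of that handling (the `sample` response dict below is made up for illustration):

```python
import json

def extract_keywords(response):
    """Return the keywords list if the model called set_keywords, else None."""
    choice = response["choices"][0]
    if choice["finish_reason"] != "function_call":
        return None  # model answered in plain text, as in the response above
    call = choice["message"]["function_call"]
    # arguments is a JSON-encoded string, not a dict
    args = json.loads(call["arguments"])
    return args.get("keywords")

# Hypothetical shape of a function-call response:
sample = {
    "choices": [{
        "finish_reason": "function_call",
        "message": {
            "role": "assistant",
            "content": None,
            "function_call": {
                "name": "set_keywords",
                "arguments": '{"keywords": ["joke", "humor"]}',
            },
        },
    }]
}
# extract_keywords(sample) -> ["joke", "humor"]
```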

What am I doing wrong?

Try removing the system prompt. From my testing, if you are going to add a system prompt to a function call, it should be related to the function call (e.g. “Don’t make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.”). You should only add the actual system prompt in the final Chat Completion call that summarizes the result.

Are you using GPT-3.5? If so, this is further evidence that there’s no magic going on. You can’t always count on the model to respond with JSON. I owe an update to AlphaWave this week that adds an extra level of reliability to using functions:

I see in the response that this is 3.5, so it’s just turbo ignoring its instructions to reply with JSON. I can fix that.

That would defeat the purpose. I want to update the keywords of the chat with respect to every new prompt and response. I guess I will just use a separate call and update the db that way.
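For that separate-call approach, one pattern is to run the main chat completion without functions, then make a second call over the same transcript that forces `set_keywords` and writes the result to the db. A rough sketch, assuming the legacy `openai<1.0` SDK; `keyword_messages`, `save_keywords`, and `chat_id` are hypothetical names for illustration:

```python
import json

def keyword_messages(chat_messages, assistant_reply):
    # Hypothetical helper: append the latest assistant reply so the
    # extraction-only second call sees the full prompt/response pair.
    return chat_messages + [{'role': 'assistant', 'content': assistant_reply}]

# Second call, forced to the function so it can't answer in prose:
# resp = openai.ChatCompletion.create(
#     model='gpt-3.5-turbo-0613',
#     messages=keyword_messages(messages, reply_text),
#     functions=functions,
#     function_call={'name': 'set_keywords'},
# )
# args = json.loads(resp['choices'][0]['message']['function_call']['arguments'])
# save_keywords(chat_id, args['keywords'])  # hypothetical db writer
```

This keeps the main conversation's system prompt out of the extraction call, which lines up with the earlier advice in this thread.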

I tried your example several times, using your own function and editing it. But somehow “tell me a joke” will trigger the API to just tell a joke. Otherwise, it works as expected, with varying results.

What is more peculiar is that I get the same joke every time.