Function Calling Help - Model Doesn't Seem To Accept Function Prompt?

I tried a variation of the example provided in the docs, under the section "Send the response back to the model to summarize", and it worked.

Here is their example:

curl https://api.openai.com/v1/chat/completions -u :$OPENAI_API_KEY -H 'Content-Type: application/json' -d '{
  "model": "gpt-3.5-turbo-0613",
  "messages": [
    {"role": "user", "content": "What is the weather like in Boston?"},
    {"role": "assistant", "content": null, "function_call": {"name": "get_current_weather", "arguments": "{ \"location\": \"Boston, MA\"}"}},
    {"role": "function", "name": "get_current_weather", "content": "{\"temperature\": "22", \"unit\": \"celsius\", \"description\": \"Sunny\"}"}
  ],
  "functions": [
    {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"]
          }
        },
        "required": ["location"]
      }
    }
  ]
}'

I used gpt-3.5-turbo-0125; in my example, I specifically added a system message giving the assistant a role and instructing it to output valid JSON.
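
For context, here is a minimal sketch of what that first call looked like in Python, using the v1 openai package (the system prompt wording and variable names are illustrative, not my exact code):

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Same function spec as in the curl example above
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

messages = [
    # The system message I added; the exact wording here is illustrative
    {"role": "system", "content": "You are a weather assistant. Always output valid JSON."},
    {"role": "user", "content": "What is the weather like in Boston?"},
]

first = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=messages,
    functions=functions,
)
print(first.choices[0].message.function_call)  # name plus JSON-string arguments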

Note that the assistant message that gives GPT the context of the previous response has "content": null and a function_call property providing both the function's name and arguments. I used the same name provided in the function spec and, for the arguments, a json.dumps() of the response I had received from the previous call.
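
Continuing the sketch above, the two messages I appended before the second call looked roughly like this (weather_data stands in for whatever my real function returned; the values just mirror the docs example):

# Assistant message replaying the model's function call from the previous response
messages.append(
    {
        "role": "assistant",
        "content": None,  # serialized as "content": null on the wire
        "function_call": {
            "name": "get_current_weather",  # must match the name in the function spec
            "arguments": json.dumps({"location": "Boston, MA"}),
        },
    }
)

# Function message carrying my function's actual output
weather_data = {"temperature": "22", "unit": "celsius", "description": "Sunny"}
messages.append(
    {
        "role": "function",
        "name": "get_current_weather",
        "content": json.dumps(weather_data),  # json.dumps() of the response I received
    }
)

second = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=messages,
    functions=functions,
)
print(second.choices[0].message.content)  # the model's summary of the weather data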