Trying to CURL the response to a tool call

I have a test function that the LLM is invoking. This is using gpt-3.5-turbo with the chat completions endpoint (not Assistants). The first response seems OK (the test function with all 8 parameters filled):

        "tool_calls": [
          {
            "id": "call_TreYrlgHJ8nhNNqiv1IaeRwb",
            "type": "function",
            "function": {
              "name": "RemoteProcedureCall",
              "arguments": "{\"functionname\":\"TestFunction\",\"parameter1\":\"1\",\"parameter2\":\"2\",\"parameter3\":\"3\",\"parameter4\":\"4\",\"parameter5\":\"5\",\"parameter6\":\"6\",\"parameter7\":\"7\",\"parameter8\":\"8\"}"
            }
          }
        ]
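
For reference, on my side the call id and arguments get pulled out of that response roughly like this (System.Text.Json; a simplified sketch, not the exact module code, and the file name is just for illustration):

    using System;
    using System.IO;
    using System.Text.Json;

    // Raw body of the chat completion response shown above (illustrative file name).
    string responseJson = File.ReadAllText("first_response.json");

    using JsonDocument doc = JsonDocument.Parse(responseJson);
    JsonElement message = doc.RootElement.GetProperty("choices")[0].GetProperty("message");
    JsonElement toolCall = message.GetProperty("tool_calls")[0];

    string toolCallId = toolCall.GetProperty("id").GetString();        // "call_TreYrlgHJ8nhNNqiv1IaeRwb"
    string functionName = toolCall.GetProperty("function").GetProperty("name").GetString();
    string argumentsJson = toolCall.GetProperty("function").GetProperty("arguments").GetString(); // arguments arrive as a JSON string

    Console.WriteLine($"{functionName} ({toolCallId}): {argumentsJson}");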

So I reply to this; the only thing added in the 2nd request is a tool message:

    {
      "role": "tool",
      "tool_call_id": "call_TreYrlgHJ8nhNNqiv1IaeRwb",
      "content": "TestFunction() successful\n"
    }

I end up with a 400 error here but I am not sure why.

Hi @winnenterprises

400 means an invalid request. Sharing your code will be helpful.
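
When you get the 400, also read the response body — it normally contains an error object telling you exactly which field was rejected. A minimal sketch in C# (HttpClient; PostAndDump is just an illustrative helper, not from your code):

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    // Illustrative helper: POSTs the request JSON and prints the body even on a 400,
    // so the API's "error" message is visible instead of just the status code.
    static async Task PostAndDump(string requestJson, string apiKey)
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);

        using var content = new StringContent(requestJson, Encoding.UTF8, "application/json");
        using HttpResponseMessage response =
            await http.PostAsync("https://api.openai.com/v1/chat/completions", content);

        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
        Console.WriteLine(await response.Content.ReadAsStringAsync()); // the error details live here
    }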

I have previously solved a similar problem faced by another dev:


I’m using a curl equivalent in C#. I’m sure the headers are good; otherwise the first request would have failed. So it must be something about the JSON that it doesn’t like. After removing “tools” per the other thread, and also “logprobs”, “top_logprobs”, and “tool_choice” because they aren’t relevant to the follow-up request, this is the complete JSON being sent:

{
  "model": "gpt-3.5-turbo-0125",
  "messages": [
    {
      "role": "system",
      "content": "RemoteProcedureCall(GetFunctionIndex) will get a list of all available functions in your host program"
    },
    {
      "role": "user",
      "content": "Test the RPC functionality by calling TestFunction(1, 2, 3, 4, 5, 6, 7, 8)"
    },
    {
      "role": "tool",
      "tool_call_id": "call_MbdZlfOW6tdgfCxLqSwcsyM7",
      "name": "RemoteProcedureCall",
      "content": "TestFunction() successful\n"
    }
  ]
}

The way I am making the request seems to be OK; I just need to know what it wants to see (or not see) in the JSON content (at least, that seems like the most likely problem). If you have a sample of raw JSON from a successful request at this step, it would probably be obvious what’s wrong with mine.

Pastebin of the C# module code

Here’s a snippet from the OpenAI Cookbook showing how the JSON content for the message with the function call response should look.
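
The cookbook example is in Python, but the shape is the same in any language: the messages list has to carry the assistant’s tool-call message and then a tool message that echoes the same id. In C# object terms, roughly (the id and strings here are copied from your first post, purely for illustration):

    // Illustrative only: the two messages that must be appended before the follow-up call.
    var assistantMessage = new
    {
        role = "assistant",
        content = (string)null,                       // content really is null on a tool-call turn
        tool_calls = new[]
        {
            new
            {
                id = "call_TreYrlgHJ8nhNNqiv1IaeRwb", // copied verbatim from the model's response
                type = "function",
                function = new
                {
                    name = "RemoteProcedureCall",
                    arguments = "{\"functionname\":\"TestFunction\",\"parameter1\":\"1\"}" // echoed as-is (abridged here)
                }
            }
        }
    };

    var toolMessage = new
    {
        role = "tool",
        tool_call_id = "call_TreYrlgHJ8nhNNqiv1IaeRwb", // must match the id above
        name = "RemoteProcedureCall",
        content = "TestFunction() successful\n"
    };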


Found the answer in https://platform.openai.com/docs/guides/function-calling?lang=python after staring at it for a while, plus the other source you gave me. I have the entire request/response dump below. I had to do the following (see the sketch after this list):

  1. Add the assistant message to the message list, even though its content is null. Apparently the model has to have a record of its own tool-call message in the message list in order for this to work.

  2. Make sure the next message added after that has tool_call_id, role, name, and content, and that they are set correctly (role=tool, for example).

  3. The second call needs only messages and model.
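
In C# the assembly step ended up looking roughly like this (System.Text.Json.Nodes; a sketch rather than the exact module code — the file name and variable names are just for illustration):

    using System.IO;
    using System.Text.Json;
    using System.Text.Json.Nodes;

    // Messages from the first request (system + user).
    var messages = new JsonArray
    {
        new JsonObject { ["role"] = "system", ["content"] = "RemoteProcedureCall(GetFunctionIndex) will get a list of all available functions in your host program" },
        new JsonObject { ["role"] = "user",   ["content"] = "Test the RPC functionality by calling TestFunction(1, 2, 3, 4, 5, 6, 7, 8)" }
    };

    // 1. Echo the assistant's own tool-call message back (content stays null, tool_calls copied verbatim).
    JsonNode firstResponse = JsonNode.Parse(File.ReadAllText("first_response.json"));
    JsonNode assistantMessage = firstResponse["choices"][0]["message"];
    messages.Add(JsonNode.Parse(assistantMessage.ToJsonString())); // re-parse to detach it from the response document

    string toolCallId = assistantMessage["tool_calls"][0]["id"].GetValue<string>();

    // 2. Add the tool result, referencing the same tool_call_id.
    messages.Add(new JsonObject
    {
        ["tool_call_id"] = toolCallId,
        ["role"] = "tool",
        ["name"] = "RemoteProcedureCall",
        ["content"] = "Function call was successful. Diagnostic data: " + toolCallId
    });

    // 3. The second request body carries only model + messages.
    var secondRequest = new JsonObject
    {
        ["model"] = "gpt-3.5-turbo-0125",
        ["messages"] = messages
    };
    string secondRequestJson = secondRequest.ToJsonString(new JsonSerializerOptions { WriteIndented = true });

The serialized secondRequestJson matches the second request in the dump below.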

First request:

{
  "model": "gpt-3.5-turbo-0125",
  "messages": [
    {
      "role": "system",
      "content": "RemoteProcedureCall(GetFunctionIndex) will get a list of all available functions in your host program"
    },
    {
      "role": "user",
      "content": "Test the RPC functionality by calling TestFunction(1, 2, 3, 4, 5, 6, 7, 8)"
    }
  ],
  "logprobs": false,
  "top_logprobs": null,
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "RemoteProcedureCall",
        "description": "Invoke a function (by name) in the program hosting the large language model",
        "parameters": {
          "type": "object",
          "properties": {
            "functionname": {
              "type": "string",
              "description": "Name of function to invoke"
            },
            "parameter1": {
              "type": "string",
              "description": "parameter 1"
            },
            "parameter2": {
              "type": "string",
              "description": "parameter 2"
            },
            "parameter3": {
              "type": "string",
              "description": "parameter 3"
            },
            "parameter4": {
              "type": "string",
              "description": "parameter 4"
            },
            "parameter5": {
              "type": "string",
              "description": "parameter 5"
            },
            "parameter6": {
              "type": "string",
              "description": "parameter 6"
            },
            "parameter7": {
              "type": "string",
              "description": "parameter 7"
            },
            "parameter8": {
              "type": "string",
              "description": "parameter 8"
            }
          },
          "required": [
            "functionname"
          ]
        }
      }
    }
  ],
  "tool_choice": "auto"
}

First response:

{
  "id": "chatcmpl-8rxNm0IVa8n9ybNsnTErBa8P4qP6Y",
  "object": "chat.completion",
  "created": 1707870342,
  "model": "gpt-3.5-turbo-0125",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_Vekr09q3HDjy6jOriHKP1boq",
            "type": "function",
            "function": {
              "name": "RemoteProcedureCall",
              "arguments": "{\"functionname\":\"TestFunction\",\"parameter1\":\"1\",\"parameter2\":\"2\",\"parameter3\":\"3\",\"parameter4\":\"4\",\"parameter5\":\"5\",\"parameter6\":\"6\",\"parameter7\":\"7\",\"parameter8\":\"8\"}"
            }
          }
        ]
      },
      "logprobs": null,
      "finish_reason": "tool_calls"
    }
  ],
  "usage": {
    "prompt_tokens": 196,
    "completion_tokens": 57,
    "total_tokens": 253
  },
  "system_fingerprint": "fp_9fdfea22a2"
}

Second request:

{
  "model": "gpt-3.5-turbo-0125",
  "messages": [
    {
      "role": "system",
      "content": "RemoteProcedureCall(GetFunctionIndex) will get a list of all available functions in your host program"
    },
    {
      "role": "user",
      "content": "Test the RPC functionality by calling TestFunction(1, 2, 3, 4, 5, 6, 7, 8)"
    },
    {
      "role": "assistant",
      "content": null,
      "tool_calls": [
        {
          "id": "call_Vekr09q3HDjy6jOriHKP1boq",
          "type": "function",
          "function": {
            "name": "RemoteProcedureCall",
            "arguments": "{\"functionname\":\"TestFunction\",\"parameter1\":\"1\",\"parameter2\":\"2\",\"parameter3\":\"3\",\"parameter4\":\"4\",\"parameter5\":\"5\",\"parameter6\":\"6\",\"parameter7\":\"7\",\"parameter8\":\"8\"}"
          }
        }
      ]
    },
    {
      "tool_call_id": "call_Vekr09q3HDjy6jOriHKP1boq",
      "role": "tool",
      "name": "RemoteProcedureCall",
      "content": "Function call was successful. Diagnostic data: call_Vekr09q3HDjy6jOriHKP1boq"
    }
  ]
}

Second response:

{
  "id": "chatcmpl-8rxNozVb1bdQ7O2QYCOkMQ6DfK9gp",
  "object": "chat.completion",
  "created": 1707870344,
  "model": "gpt-3.5-turbo-0125",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The TestFunction(1, 2, 3, 4, 5, 6, 7, 8) function was called successfully. The diagnostic data for this call is: call_Vekr09q3HDjy6jOriHKP1boq. If you need further assistance or information, feel free to ask!"
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 155,
    "completion_tokens": 73,
    "total_tokens": 228
  },
  "system_fingerprint": "fp_9fdfea22a2"
}