Sending response back to the model after API call

Hi,
I am using Postman to perform actions with the OpenAI Assistants API.
How do I send a response back to the model with the results of a function call?

E.g.:

  1. I asked a question.
  2. The OpenAI API requested a call to a function I had defined.
  3. I executed the function and want to send the function’s results back to OpenAI for processing. (How do I send the results back to OpenAI?)

I am using Postman or cURL, not Python or Node.

Thank you.

Ultimately, the response from a function call needs to be inserted into the context of your prompt.

As the Chat Completions API is stateless and does not retain information from a previous call, you basically just need to make another regular API call that includes both the original prompt/question and the result returned from the function call.
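For example, with cURL that follow-up Chat Completions call might look like the sketch below. The model, the IDs, and the get_weather function are placeholders for illustration, and the exact message shape depends on the API version you target:

```bash
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "What is the weather in Paris?"},
      {"role": "assistant", "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {"name": "get_weather", "arguments": "{\"city\": \"Paris\"}"}
      }]},
      {"role": "tool", "tool_call_id": "call_abc123", "content": "18 C, clear"}
    ]
  }'
```

The key is that the conversation replays the AI’s own tool call (the assistant message) followed by a tool-role message carrying your function’s result, so the model can produce its final answer from both.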

Hope that helps!

Thank you.
https://openai.com/index/function-calling-and-other-api-updates/
In the link above, the section “Function calling example” (“Send the response back to the model to summarize”)
shows something like this. It’s for Chat Completions, though. Would the Assistants API be similar (send the functions etc. along with the request)?

What I did was create a batch file where I use the command line to create the prompt for ChatGPT. ChatGPT processes it and returns something; I take that response, format it into data that can go into the function (in Python), and then the Python function runs with that dynamic data. When the response from the function is generated, we send it back to ChatGPT. Make sure to keep the messages saved, in something like a text file, so the conversation can have memory too.

There’s a ton of ways to do this:

you can do it with your own HTML/JS app
your own Python GUI
your own batch file

Paste this prompt into ChatGPT:

"Please provide me a basic HTML JS application, where there is a textarea, and i can type into the textarea to send a request with axios to chatgpt gpt-3.5-turbo completions api endpoint, and after we will take the response from chatgpt, and pass the dyanmic data into a function, with the dynamic data our function should run

we want this function to … [insert the logic for your function in english]

after that function gets called, please take the response of that function and send a predefined prompt back to chatGPT to basically return the response of the function back to chatGPT.

We want our conversation to have memory so please be sure to store the [messages] both from the user, and the responses returned from chatGPT, in the same format we must send message to chatGPT"

I have had the same problem for hours, but after reading the answers I’m still not sure exactly what to do. My understanding of the process differs from what is described here.
I have a thread_id into which every new message is automatically inserted. I tried to use it to send the result of my function (with the role set to “system” or “tool”), but that doesn’t seem to work.

So you have to extend the original context and attach all messages sent so far? I find that strange, despite the asynchronicity. Even if it only works the way I read it here, I still don’t know exactly how to extend the context so that the assistant understands it and outputs it as a response.
I don’t use an SDK either, just curl, and I don’t stream. ChatGPT couldn’t help me with this case so far.

Since I don’t stream, my flow is the following:
I send a message (a thread is created).
Then I start listening for new messages in this thread.
Once a message is returned, I print it and stop listening.
[the cycle starts from the beginning]

This topic doesn’t have any clarity yet.

We discuss the specific case of:

  1. Assistants endpoint, non-streaming responses.
  2. Developer-provided functions, specified at assistant creation.
  3. You instantiate a run, with a thread that has messages, against an assistant (see the cURL sketch after this list).
  4. The AI wants to use that function, and thus directs its response to the function rather than to you.
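For reference, instantiating such a run with cURL looks roughly like this. It is a sketch: the IDs are placeholders, and the OpenAI-Beta header value depends on the Assistants API version you target (v2 is current):

```bash
# Create a run for an existing thread against an existing assistant
curl "https://api.openai.com/v1/threads/$THREAD_ID/runs" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{"assistant_id": "asst_abc123"}'
```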

The flaw in the approach of the previous replies:

  • You do not monitor the messages simply to see whether a new message was produced; instead,
  • you continue to poll the status of the run, which is where the tool invocation is produced (see the polling sketch below).
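Polling with cURL is a plain GET on the run (again a sketch with placeholder IDs):

```bash
# Retrieve the run and inspect its "status" field
curl "https://api.openai.com/v1/threads/$THREAD_ID/runs/$RUN_ID" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v2"
```

Repeat this until the status leaves queued/in_progress; if it lands on requires_action, the run is waiting for your function results.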

The return object from run invocation and run polling has two important fields:

  • status (string)

    • The status of the run, which can be either: queued, in_progress, requires_action, cancelling, cancelled, failed, completed, incomplete, expired.
  • required_action (object or null)

    • Details on the action required to continue the run. Will be null if no action is required.
      • type (string)
        • For now, this is always submit_tool_outputs.
      • submit_tool_outputs (object)
        • Details on the tool outputs needed for this run to continue.
          • tool_calls (array)
            • A list of the relevant tool calls.
              • id (string)
                • The ID of the tool call. This ID must be referenced when you submit the tool outputs using the Submit tool outputs to run endpoint.
              • type (string)
                • The type of tool call the output is required for. For now, this is always function.
              • function (object)
                • The function definition.
                  • name (string)
                    • The name of the function.
                  • arguments (string)
                    • The arguments that the model expects you to pass to the function.
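Put together, the relevant part of a run that is waiting on you looks something like this (an illustrative fragment only; the IDs and the get_weather function are made up):

```json
{
  "id": "run_abc123",
  "status": "requires_action",
  "required_action": {
    "type": "submit_tool_outputs",
    "submit_tool_outputs": {
      "tool_calls": [
        {
          "id": "call_abc123",
          "type": "function",
          "function": {
            "name": "get_weather",
            "arguments": "{\"city\": \"Paris\"}"
          }
        }
      ]
    }
  }
}
```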

You will see that the actual tool call (its name, arguments, and ID) is in the run object that you poll for. You do not obtain this from messages.

You can then follow the API documentation for submitting your function results:

https://platform.openai.com/docs/api-reference/runs/submitToolOutputs
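In cURL terms, the submission might look like the sketch below (placeholder IDs; the output value is whatever string your function produced, often JSON):

```bash
# Send the function's result back so the run can continue
curl "https://api.openai.com/v1/threads/$THREAD_ID/runs/$RUN_ID/submit_tool_outputs" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{
    "tool_outputs": [
      {
        "tool_call_id": "call_abc123",
        "output": "{\"temperature\": \"18 C\", \"sky\": \"clear\"}"
      }
    ]
  }'
```

After submitting, the run goes back to queued/in_progress; keep polling until it reaches completed, then read the AI’s final answer from the thread’s messages.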

Thanks, the “submit tool outputs to run” was exactly what I was looking for.
