Hi,
I am using Postman to perform actions with the OpenAI Assistants API.
How do I send a response back to the model with the results of a function call?
E.g.:
I asked a question.
The OpenAI API called a function which I had defined.
I executed the function and want to send the function's results back to OpenAI for processing. (How do I send the results back to OpenAI?)
Ultimately, the response from a function call needs to be inserted into the context of your prompt.
Since the API is stateless and does not retain information from a previous call, you basically just need to make another regular API call that includes both the original prompt/question and the output returned by the function.
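Roughly like this, as a sketch against the Chat Completions endpoint (`get_weather`, its arguments, and the model are placeholders, and the `role: "function"` message follows the original functions format rather than the newer tools one):

```python
import json
import os

import requests

API_KEY = os.environ["OPENAI_API_KEY"]
URL = "https://api.openai.com/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# "get_weather" and its arguments stand in for whatever function
# the model asked you to call in the previous response.
result = {"temperature": 22, "unit": "celsius"}  # output of YOUR local function

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        # 1. the original question
        {"role": "user", "content": "What is the weather in Boston?"},
        # 2. the assistant's function call from the previous response,
        #    echoed back so the model has the full context
        {"role": "assistant", "content": None,
         "function_call": {"name": "get_weather",
                           "arguments": json.dumps({"location": "Boston"})}},
        # 3. your function's result, sent with the "function" role
        {"role": "function", "name": "get_weather",
         "content": json.dumps(result)},
    ],
    # you can also re-send your "functions" definitions here
    # if the model might need to call again
}

response = requests.post(URL, headers=HEADERS, json=payload, timeout=60)
print(response.json()["choices"][0]["message"]["content"])
```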
Thank you. https://openai.com/index/function-calling-and-other-api-updates/
In the above link, the section “Function calling example” includes a step “Send the response back to the model to summarize”, which shows this flow. It's for Chat Completions, though. Would the Assistants API be similar (send the functions etc. along with the request)?
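For what it's worth, from digging through the Assistants API reference, the mechanism there looks different: a run pauses with status `requires_action`, and you send the function results to a dedicated `submit_tool_outputs` endpoint instead of appending a message. A sketch of what I believe that call looks like (all ids are placeholders, and the `OpenAI-Beta` header version may differ for your account):

```python
import json
import os

import requests

API_KEY = os.environ["OPENAI_API_KEY"]
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
    "OpenAI-Beta": "assistants=v2",  # beta header; use the version your docs specify
}

# thread_id / run_id come from the run you created; when the run status is
# "requires_action", run["required_action"]["submit_tool_outputs"]["tool_calls"]
# lists the calls the model wants, each with an id and arguments.
thread_id = "thread_abc123"   # placeholder
run_id = "run_abc123"         # placeholder
tool_call_id = "call_abc123"  # placeholder, taken from required_action

result = {"temperature": 22}  # whatever your local function returned

url = f"https://api.openai.com/v1/threads/{thread_id}/runs/{run_id}/submit_tool_outputs"
payload = {"tool_outputs": [{"tool_call_id": tool_call_id,
                             "output": json.dumps(result)}]}

response = requests.post(url, headers=HEADERS, json=payload, timeout=60)
print(response.json()["status"])  # the run resumes and produces the final message
```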
What I did was create a batch file where I use the command line to build the prompt for ChatGPT. ChatGPT processes it and returns something; I take that response, format it into data that can go into the function (in Python), and then the Python function runs with that dynamic data. When the function generates its response, we send it back to ChatGPT. Make sure to keep the messages saved, in something like a text file, so the conversation can have memory too.
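Stripped down to Python alone, the loop looks roughly like this (`gpt-3.5-turbo` and `my_function` are placeholders; the JSON file plays the role of the text file that keeps the memory):

```python
import json
import os
from pathlib import Path

import requests

API_KEY = os.environ["OPENAI_API_KEY"]
URL = "https://api.openai.com/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
MEMORY = Path("messages.json")  # the file that gives the conversation memory

def my_function(data: str) -> str:
    """Placeholder for whatever your Python function actually does."""
    return data.upper()

def chat(messages: list) -> str:
    resp = requests.post(URL, headers=HEADERS,
                         json={"model": "gpt-3.5-turbo", "messages": messages},
                         timeout=60)
    return resp.json()["choices"][0]["message"]["content"]

# load the previous conversation so it has memory
messages = json.loads(MEMORY.read_text()) if MEMORY.exists() else []

# 1. send the user's prompt
messages.append({"role": "user", "content": input("prompt> ")})
reply = chat(messages)
messages.append({"role": "assistant", "content": reply})

# 2. feed the model's reply into the local function ...
result = my_function(reply)

# 3. ... and send the function's result back for the final answer
messages.append({"role": "user", "content": f"Function result: {result}"})
final = chat(messages)
messages.append({"role": "assistant", "content": final})
print(final)

# 4. persist the whole conversation for the next run
MEMORY.write_text(json.dumps(messages, indent=2))
```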
There's a ton of ways to do this:
you can do it with your own HTML/JS app
your own Python GUI
your own batch file
Paste this prompt into ChatGPT:
"Please provide me a basic HTML JS application, where there is a textarea, and i can type into the textarea to send a request with axios to chatgpt gpt-3.5-turbo completions api endpoint, and after we will take the response from chatgpt, and pass the dyanmic data into a function, with the dynamic data our function should run
we want this function to … [insert the logic for your function in english]
after that function gets called, please take the response of that function and send a predefined prompt back to chatGPT to basically return the response of the function back to chatGPT.
We want our conversation to have memory so please be sure to store the [messages] both from the user, and the responses returned from chatGPT, in the same format we must send message to chatGPT"
I have had the same problem for hours. But after reading the answers, I’m still not sure exactly what to do. My understanding of the process differs from what is described here.
I have a thread_id where every new message is automatically inserted. I tried using it to send the result of my function (with the role set to “system” or “tool”), but that doesn't seem to work.
So you have to extend the original context and attach all messages sent so far? I find that strange, given the asynchronicity. Even if it only works the way I read it here, I still don't know exactly how to extend the context so that the assistant understands it and returns it as a response.
I don't use an SDK either, just curl, and I don't stream. ChatGPT couldn't help me with this case so far.
Since I don't stream, my flow is the following (a rough sketch in Python follows the list):

1. I send a message (a thread is created)
2. Then I start listening for new messages in this thread
3. Once a message is returned, I print it and stop listening
4. The cycle starts from the beginning
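Here is roughly what that loop looks like against the raw HTTP endpoints (sketched in Python rather than curl for readability; the assistant_id and the beta header version are assumptions, and I poll the run status instead of the message list, which amounts to the same thing):

```python
import os
import time

import requests

API_KEY = os.environ["OPENAI_API_KEY"]
BASE = "https://api.openai.com/v1"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
    "OpenAI-Beta": "assistants=v2",  # adjust to the beta version you use
}
ASSISTANT_ID = "asst_abc123"  # placeholder

# 1. send a message (the thread is created in the same call)
run = requests.post(f"{BASE}/threads/runs", headers=HEADERS, json={
    "assistant_id": ASSISTANT_ID,
    "thread": {"messages": [{"role": "user", "content": "Hello"}]},
}, timeout=60).json()
thread_id, run_id = run["thread_id"], run["id"]

# 2. "listen" by polling the run until it settles; if it stops at
#    "requires_action", this is where tool outputs would be submitted
while run["status"] in ("queued", "in_progress"):
    time.sleep(1)
    run = requests.get(f"{BASE}/threads/{thread_id}/runs/{run_id}",
                       headers=HEADERS, timeout=60).json()

# 3. once done, print the newest message and stop listening
messages = requests.get(f"{BASE}/threads/{thread_id}/messages",
                        headers=HEADERS, timeout=60).json()
print(messages["data"][0]["content"][0]["text"]["value"])
```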