I am curious as to why I need to submit tool outputs. I have the function selected by the Assistant, and then I just call my function. Why do I need to give information about what is going on in my systems after that?
Can I just ignore it? I know the run gets stuck in a required_action status; should I just start new runs and not care about it? What's the harm in that?
Yeah, after I returned the output, I saw that it gave me the same answer, but in natural language. I was being shortsighted; I will need this for chaining assistants.
Assistants requires you to host your own backend, as it doesn't support making requests directly to the internet. Let's say you have a calculator tool and you added it as a function to your assistant. If a user asks the assistant what 100+100 is, the response of the assistant will be, as an example, a function call rather than a plain answer.
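A rough sketch of how you pick that call up with the Python SDK (the `add` function name and its argument names are assumptions for this calculator case):

```python
import json
from openai import OpenAI

client = OpenAI()

THREAD_ID = "thread_..."  # thread holding the user's "what is 100+100" message
RUN_ID = "run_..."        # the run you created on that thread

run = client.beta.threads.runs.retrieve(thread_id=THREAD_ID, run_id=RUN_ID)

if run.status == "requires_action":
    # The assistant did not answer in text; it asked us to execute our function instead.
    call = run.required_action.submit_tool_outputs.tool_calls[0]
    print(call.function.name)       # e.g. "add", whatever you named the calculator function
    print(call.function.arguments)  # e.g. the JSON string '{"a": 100, "b": 100}'
    args = json.loads(call.function.arguments)
```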
I have little coding knowledge, and the easiest way for me to create such a backend was using FastAPI hosted in the cloud, and I used ChatGPT-4 to create the APIs.
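For what it's worth, such a backend can be tiny. A hedged sketch of the kind of FastAPI endpoint this might produce for the calculator case (the route and parameter names are just assumptions):

```python
# pip install fastapi uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/add")
def add(a: float, b: float) -> dict:
    """Endpoint your own code calls once the assistant asks for the 'add' function."""
    return {"result": a + b}

# Run locally or in the cloud with: uvicorn main:app --host 0.0.0.0 --port 8000
```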
You will need to work around the anti-developer guardrails of Assistants if you want to do some logical things with functions, such as simply placing formatted output in the UI (like a fortune-of-the-day) and then awaiting the next user question.
This is a dilemma at the moment. Indeed, function calling with Assistants requires a mandatory response via submit tool outputs. You cannot create another run as long as the current one is not completed. You may, of course, create a new thread, which may not make sense. For some applications, the function arguments deliver a formatted quasi-response packaged in the arguments themselves, so there is no real output to return; I would wish for a flag that could mark the function call as a void-function call.

There is, however, the possibility in some cases to just return the function's argument as a dummy output to force the run to complete. In such application development, though, this can cause real problems that are difficult to deal with. I gave up on my application because I had a recursive method, and the function call, if not void, resulted in chaos. In such cases, my suggestion would be to just instruct the model instead of packaging information out via function calling. For me, this worked well. I hope, however, that such a flag will one day be introduced.
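For reference, the dummy-output workaround mentioned above looks roughly like this (a sketch only; whether it is safe for a recursive setup is exactly the problem described):

```python
from openai import OpenAI

client = OpenAI()

THREAD_ID = "thread_..."
RUN_ID = "run_..."

run = client.beta.threads.runs.retrieve(thread_id=THREAD_ID, run_id=RUN_ID)

if run.status == "requires_action":
    # We only wanted the arguments; echo a placeholder back so the run can complete.
    client.beta.threads.runs.submit_tool_outputs(
        thread_id=THREAD_ID,
        run_id=RUN_ID,
        tool_outputs=[
            {"tool_call_id": call.id, "output": "ok (arguments already consumed, no real output)"}
            for call in run.required_action.submit_tool_outputs.tool_calls
        ],
    )
```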
But why would you add a function as something you want the assistant to use when needed, and then not want the assistant to call it? You might as well have two different assistants, one with and one without functions, and send both the same question. Again, I'm curious to understand why you would add a function and then not run it.
Good question. For example, in the rather trivial case where I build an assistant as a persistent instance to do a job by instructions, instead of repeatedly sending the same instructions via chat completions, while wanting to use function calling to get precise JSON-format output via the arguments. This is an example of why I may need void-function calling.
But in both cases it would not be a "void", right? You would either return the next instructions or the JSON version. Or do you use the INPUT that OpenAI gives you for the function but then have no further OUTPUT back to OpenAI?
I am trying to understand. For example, for JSON (you can also have OpenAI provide the result in detailed JSON, defining attributes, etc.): how would that be a void function?
We must talk about novel applications of "functions" and function calling that you can do with chat completions, not with an assistant, which sits there waiting to accept a function response and then write a reply to the user.
One doesn't necessarily need a "void" function. If the AI doesn't need to write any special language, but only needs to trigger the function, then you can use a null parameter instead of one that requires the AI to write a query, to save the AI some work. In your code, you can recognize these response-less actions, simply do what the function needs done, and await more user input.
A cute example of this might be a bot function for your graphical interface, where the user can type something about the future and the AI will "roll the magic 8-ball"; emitting the function call prints an image of an 8-ball fortune teller with its random answer.
Example in use: "Will OpenAI ever unlock assistants so we can place any role messages we want in a thread, control the length and cost of a thread, and terminate the run by a function call response that produces no user output?"
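In chat completions code, that null-parameter pattern is roughly as follows (a sketch; the `roll_magic_8_ball` name, the model string, and the canned answers are all assumptions for the 8-ball example):

```python
import random
from openai import OpenAI

client = OpenAI()

# A tool with an empty parameter schema: nothing for the model to write except the call itself.
tools = [{
    "type": "function",
    "function": {
        "name": "roll_magic_8_ball",
        "description": "Show the user a magic 8-ball image with a random fortune.",
        "parameters": {"type": "object", "properties": {}},
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Will I get rich next year?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # Response-less action: render the 8-ball in the GUI ourselves and just wait
    # for the next user input, instead of sending a tool result back to the model.
    print("[8-ball image]:", random.choice(["It is certain.", "Ask again later.", "Very doubtful."]))
```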
In such a case, the assistant calls a void function, and the information we gather is packed directly in the argument. This is the information we use. We do not use the assistant to communicate directly with the user; the assistant does a persistent, repeating job for my program. This is a different concept than the common usage of an assistant, rather trivial. The program is both the channel to the user and the holder of the function to be called.
I use assistants for that same purpose (only) as well. But I feel that the PROMPT is what triggers the Assistant to RESPOND. Then you issue a new PROMPT.
It seems that you are using the Functions to try to insert the next PROMPT?
After status COMPLETED you can add to the THREAD and create a new RUN. The run is not meant to be kept alive, I think; the thread is.
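Roughly, that turn-by-turn pattern (a sketch, assuming the previous run already reached COMPLETED; the IDs are placeholders):

```python
from openai import OpenAI

client = OpenAI()

THREAD_ID = "thread_..."   # the long-lived conversation
ASSISTANT_ID = "asst_..."

# Once the previous run has completed, append the next user turn to the same thread...
client.beta.threads.messages.create(thread_id=THREAD_ID, role="user", content="Next question")

# ...and start a fresh run. The run is per turn; the thread is what persists.
run = client.beta.threads.runs.create(thread_id=THREAD_ID, assistant_id=ASSISTANT_ID)
```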
Function calling is an excellent tool to add to prompts just for pinning down the output format. You can prompt-engineer directly in your prompt that the model should deliver the results in a certain format, but can you be 100% sure that this will always happen? If you use function calling, however, the model will deliver precisely in the format of the function-calling JSON. This can also be useful when you use an assistant just to outsource persistent information from your prompt: the function then returns a perfect format with no risk, while a plain response would still be risky in this regard. The message format of the assistant is itself reliable, but not how it structures the values. It is just another way to use assistants and extend their application.
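A minimal sketch of that format-fixation idea, shown with chat completions for brevity (the `report_sentiment` function and its fields are invented for the example): define the format you want as the function's parameter schema and read the result out of the arguments instead of out of prose.

```python
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "report_sentiment",
        "description": "Return the analysis in this exact structure.",
        "parameters": {
            "type": "object",
            "properties": {
                "sentiment": {"type": "string", "enum": ["positive", "neutral", "negative"]},
                "confidence": {"type": "number"},
            },
            "required": ["sentiment", "confidence"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Classify: 'I love this forum.'"}],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "report_sentiment"}},
)

# The structured result lives in the arguments, not in free-form text.
result = json.loads(response.choices[0].message.tool_calls[0].function.arguments)
print(result["sentiment"], result["confidence"])
```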
Even in the case of a void function it makes sense to return “something”. Even if you send a POST request you still expect a status code to inform you that the information was successfully received and processed as expected.
I also have a web application that uses function calling to manipulate the user's profile. So returning an "Inform the user that everything was successful", or an error message with proper error handling (which you should have if you are receiving unsafe parameters from GPT), is necessary.
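Concretely, the tool output can be just a short status string that the model then turns into the user-facing confirmation or apology (a sketch; `update_profile` and the status wording are assumptions):

```python
import json
from openai import OpenAI

client = OpenAI()

def update_profile(**fields) -> None:
    """Stand-in for the real backend call that updates the user's profile."""
    ...

def handle_tool_calls(run):
    """Execute each requested call and send a status back, whether it succeeded or not."""
    outputs = []
    for call in run.required_action.submit_tool_outputs.tool_calls:
        try:
            args = json.loads(call.function.arguments)  # unsafe parameters from GPT: validate them!
            update_profile(**args)
            outputs.append({"tool_call_id": call.id,
                            "output": "success: inform the user the profile was updated"})
        except Exception as err:
            outputs.append({"tool_call_id": call.id,
                            "output": f"error: {err}. Apologize and ask the user to retry."})
    return client.beta.threads.runs.submit_tool_outputs(
        thread_id=run.thread_id, run_id=run.id, tool_outputs=outputs
    )
```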
Well, you get to required_action because you asked OpenAI to answer your prompt, but it felt the need to call your function (probably because of what was in the prompt, the functions and their templates provided, and the assistant instructions). So it should not be a surprise? Maybe you can share a few more details.
A return value means another API call, with context token fees, just so the AI can unnecessarily print "hope you like it!" to the user after it has done something like emit an output you display in the GUI, which alone satisfies the user.
I get ya, but I don't think it is necessary to send an acknowledgement back to OpenAI that the function YOU ran was successful. After getting the function choice, you execute it; if it fails, you can return a notification to the end user, not to OpenAI, and just run another thread.
Honestly, Assistants is cool, but I am thinking that a pretty decent prompt and JSON mode can make a simple recommender system for function calling without even using any of the function-calling APIs. If the context windows become big enough, what happens to all these APIs?
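A minimal sketch of that idea, assuming plain JSON mode and a routing prompt instead of the tools parameter (the action names are invented for the example):

```python
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": (
            "Decide which action fits the user's request and answer only with JSON, "
            'e.g. {"action": "get_weather", "arguments": {"city": "..."}} or {"action": "none"}.'
        )},
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
)

choice = json.loads(response.choices[0].message.content)
print(choice["action"], choice.get("arguments"))  # your own code dispatches from here
```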
But the reason the function is CALLED is that OpenAI NEEDS something. It's not a tool to create your flow; your code is responsible for creating (and maintaining) a flow. So you instruct your Assistant to call a function to get some specific information, and then you decide not to give it?
This philosophy doesn’t make sense. You are assuming that the response is static and guaranteed to work. There are multiple issues that can arise.
In any sort of interface you always expect at the very least some sort of response. This is fundamental to APIs. Because GPT is acting as a middleman you need to pass the result back. Or else, what? You’re going to push some notification with the error so the user can be notified and then the Assistant is broken waiting for a tool output?
I don't understand. If you want to run GPT as a one-off to produce a function call and prefer to find out the results through logging, then you can just use chat completions; otherwise you are greatly limiting yourself for no reason.
I’m not sure I fully understand this either. Your functions are always attached to the model each run.
If I am understanding correctly, you have your own private "Assistant" that you want to emit functions for your server to execute. You don't care to return the output results because… you will just destroy the conversation and start fresh?