Why submit tool outputs for Function Calling?

If you want to run GPT as a one-off to create a function and prefer to find out the results through logging, you are greatly limiting yourself for no reason

There are plenty of use cases where you would want to use only a recommender system, and many more where you would not want an LLM doing extra, unnecessary processing. We would want to use logging to keep costs down, rather than pay the extra cost of tokenization and inference for an LLM to respond to the user naturally, when we just need the resulting data for a page. Just logging the results is more cost efficient. But that isn’t the only reason.
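For illustration, here is a minimal sketch of that logging-only flow, assuming the openai Python SDK’s Assistants beta and a hypothetical run_my_function dispatcher that executes the chosen function locally. Nothing is ever submitted back to the run:

import json
import logging

from openai import OpenAI

client = OpenAI()
logger = logging.getLogger(__name__)

def log_tool_results(thread_id: str, run_id: str) -> None:
    # Fetch the run; while it is waiting on our functions its status is "requires_action".
    run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
    if run.status != "requires_action":
        return

    for tool_call in run.required_action.submit_tool_outputs.tool_calls:
        name = tool_call.function.name
        args = json.loads(tool_call.function.arguments)

        # run_my_function is a hypothetical local dispatcher, not part of the SDK.
        result = run_my_function(name, **args)

        # Log the result instead of calling submit_tool_outputs, so no extra
        # tokens are spent on a natural-language reply we don't need.
        logger.info("tool %s returned %r", name, result)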

Also, we don’t need to use logging at all; we can simply return the results to the user.

def ask_assistant(request):
    """
    Processes the request by selecting and running the appropriate tools, 
    then renders the results without notifying OpenAI about the outcome.

    Args:
        request: The request object containing the necessary information.

    Returns:
        HTML content rendered with the results of the tool execution.
    """

    # Select tools based on the request. This function is defined elsewhere.
    tools_to_run = assistant.select_tools_to_run_from_request(request)

    try:
        # Run the selected tools and capture the results.
        results = assistant.run_tools(tools_to_run)
    except Exception as e:
        # Handle any exceptions that might occur during tool execution.
        # Consider logging the error or rendering an error message.
        return render_html("error_page", {"error": str(e)})

    # The results are returned as HTML without updating OpenAI.
    # The line to inform OpenAI is intentionally omitted to maintain independence.
    return render_html("display_results", results)

If I can put everything about a codebase into a single API call, and give the AI a prompt telling it which function to select when it responds and the sequence to achieve some outcome, then I don’t really need the Function Calling system, since I can achieve the same outcome with a more detailed prompt.

I wouldn’t need Tools for an Assistant; I wouldn’t even have to build the recommender system, just write a better prompt.
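As a rough sketch of that prompt-only approach (assuming the openai Python SDK; the function catalogue below is made up for illustration), you skip the tools parameter entirely and just ask the model to name the function in its reply:

from openai import OpenAI

client = OpenAI()

# Hypothetical catalogue of the micro-app's functions, kept in the prompt itself.
FUNCTIONS = """
- create_page(title, body): create a new page
- update_page(page_id, body): replace the body of an existing page
- delete_page(page_id): remove a page
"""

def pick_function(user_request: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[
            {
                "role": "system",
                "content": "Pick exactly one function from this list and reply with "
                           "only its name and arguments:\n" + FUNCTIONS,
            },
            {"role": "user", "content": user_request},
        ],
    )
    # The reply is parsed by the caller; no tools parameter is used at all.
    return response.choices[0].message.content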


If I understand this code correctly, you are running GPT in the front end to transform an unstructured query into a function call, so that you can return some sort of HTML result (based on render_html).

Again, if you want to use it once and discard it, you’re better off using Chat Completions. It seems like you are ALWAYS expecting some sort of function, so yes, JSON mode would make more sense.
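For reference, a minimal Chat Completions + JSON mode sketch, assuming the openai Python SDK; the 'function'/'arguments' schema in the system prompt is just an example, not anything the API enforces:

import json

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},  # JSON mode
    messages=[
        {
            "role": "system",
            "content": "Reply in JSON with the keys 'function' and 'arguments' "
                       "naming the single tool to run.",
        },
        {"role": "user", "content": "Show me the sales dashboard for March."},
    ],
)

choice = json.loads(response.choices[0].message.content)
print(choice["function"], choice["arguments"])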

Putting everything about a codebase into a single API call, though, is a pretty big if.

Let’s also consider what happens if an error is thrown. I still have no idea what you are actually doing with this information … dynamically generating/constructing HTML?

If an error is thrown, it would be nice to be able to work it out by having GPT return it, so you can modify your query and re-send. In your case you are basically forced to restart the whole process.
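To make that concrete, here is a short sketch (Assistants beta of the openai Python SDK; the helper name is mine) of handing the failure back through submit_tool_outputs so the model can explain it or suggest a corrected query instead of the run being abandoned:

from openai import OpenAI

client = OpenAI()

def report_tool_error(thread_id: str, run_id: str,
                      tool_call_id: str, error: Exception) -> None:
    # Send the failure back as the tool's output so the model can react to it
    # within the same run, rather than forcing a restart of the whole process.
    client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread_id,
        run_id=run_id,
        tool_outputs=[{"tool_call_id": tool_call_id, "output": f"ERROR: {error}"}],
    )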

Also, why aren’t you using recommendation AI? I’m not saying what you’re doing is wrong. I’m just trying to understand what you are trying to do here.

Again, if you want to use it once and discard it, you’re better off using Chat Completions. It seems like you are ALWAYS expecting some sort of function, so yes, JSON mode would make more sense.

Yeah, I am just saying that is one way. I am building an AI-based web framework where each assistant owns its own micro-app and builds out the system one at a time.

So a lot of the time I just needed the functions called, and that’s it. But I wanted to extend the framework so that the assistants can communicate with each other. I didn’t need the final NLP result at the time, since the other assistants can see it more effectively in the same repo.

Let’s also consider what happens if an error is thrown.

That would only happen on OpenAI’s side, for the actual function choice.

If an error is thrown, it would be nice to be able to work it out by having GPT return it, so you can modify your query and re-send

If an error happens, it is a problem with the code, i.e. the actual function, and the AI would have to be able to fix itself for this to be a better option than the log in this situation, IMO. But this makes sense for an end user.

So I want assistants. Assistants don’t need to communicate with each other using NLP (I may change that), but I do need to have that for the end user.

Also, why aren’t you using recommendation AI?

What do you mean by this? I am open to ideas.

I am working on this project. I haven’t fully updated the README yet; I’ve been busy.


Cool idea.

I guess my main issue is that once you decide to NOT submit your tool outputs, you have limited yourself to only a single function call and cannot continue the conversation after the call was made (unless you waited for the run to expire). This would be counter-intuitive in a conversation, as a user would expect GPT to know what it did.

Here’s a scenario in my head:

  1. User asks for code
  2. Code is generated
  3. User asks about a certain line of code, or asks for slight adjustment
  4. What do?

Or

  1. User asks for code
  2. Error is thrown and error page
  3. User asks GPT for advice/help because GPT is essentially the driver
  4. GPT says WTF are you talking about

In a typical scenario you would return the generated HTML to the GPT model so it knows what the user is talking about. Otherwise, you would need to call a new function, breaking the seamless…ness(?) of what Assistants are supposed to accomplish.
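A minimal sketch of that typical flow, assuming the Assistants beta of the openai Python SDK and a rendered_html string produced elsewhere (the helper name is mine):

import time

from openai import OpenAI

client = OpenAI()

def submit_result_and_reply(thread_id: str, run_id: str,
                            tool_call_id: str, rendered_html: str) -> str:
    # Return the rendered output to the run so the model knows what the user sees.
    run = client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread_id,
        run_id=run_id,
        tool_outputs=[{"tool_call_id": tool_call_id, "output": rendered_html}],
    )

    # Wait for the model to finish incorporating the tool output.
    while run.status in ("queued", "in_progress"):
        time.sleep(1)
        run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)

    # The newest assistant message can now answer follow-up questions about the HTML.
    messages = client.beta.threads.messages.list(thread_id=thread_id)
    return messages.data[0].content[0].text.value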

Admittedly, I didn’t read through your code very well and am unsure of how it all works beneath the hood.