If you want to run GPT as a one-off to create a function and prefer to find out the results through logging, you are greatly limiting yourself for no reason
There are plenty of use cases where you would only want a recommender system, and many more where you would not want an LLM doing extra, unnecessary processing. We would want to use logging to keep costs down rather than pay the extra cost of tokenizing and running inference on an LLM to respond to the user naturally when we just need the resulting data for a page. Simply logging the results is more cost-efficient. But that isn't the only reason for this.
Also, we don't need to use logging at all; we can simply return the results to the user:
def ask_assistant(request):
    """
    Processes the request by selecting and running the appropriate tools,
    then renders the results without notifying OpenAI about the outcome.

    Args:
        request: The request object containing the necessary information.

    Returns:
        HTML content rendered with the results of the tool execution.
    """
    # Select tools based on the request. This function is defined elsewhere.
    tools_to_run = assistant.select_tools_to_run_from_request(request)

    try:
        # Run the selected tools and capture the results.
        results = assistant.run_tools(tools_to_run)
    except Exception as e:
        # Handle any exceptions that might occur during tool execution.
        # Consider logging the error or rendering an error message.
        return render_html("error_page", {"error": str(e)})

    # The results are returned as HTML without updating OpenAI.
    # The line to inform OpenAI is intentionally omitted to maintain independence.
    return render_html("display_results", results)
If I can put everything about a codebase in a single API call and give the AI a prompt to select a function to perform when it responds, along with the sequence of steps to achieve some outcome, then I don't really need the function-calling system, since I can achieve the same result with a more detailed prompt.
I wouldn't need Tools for an Assistant; I wouldn't even have to build the recommender system, just a better prompt.
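A rough sketch of what that could look like (purely illustrative: the client setup, prompt wording, and function names are placeholders, not anything from the actual framework) is a single Chat Completions call where the prompt carries the whole codebase context and the reply names the function to run:

from openai import OpenAI

client = OpenAI()

# The whole "codebase context" lives in the prompt: the functions that exist
# and the instruction to name the one to run plus the order of steps.
# The function names below are made up for the example.
SYSTEM_PROMPT = (
    "You manage a small codebase. Available functions: create_page(title), "
    "update_model(name, fields), run_migrations(). "
    "Reply with the function to call and the sequence of steps to reach the goal."
)

def pick_function(user_request: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_request},
        ],
    )
    return response.choices[0].message.content

The trade-off is that the reply is free text, so the calling code still has to parse out which function was picked, which is roughly the work the function-calling machinery does for you.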
If I understand this code correctly, you are running GPT in the front end to transform an unstructured query into a function call so that you can return some sort of HTML result (based on render_html).
Again, if you want to use it once and discard it, you're better off using ChatCompletions. It seems like you are ALWAYS expecting some sort of function, so yes, JSON mode would make more sense.
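For what it's worth, a minimal sketch of the JSON-mode route could look something like this (assuming the Chat Completions endpoint with response_format set to json_object; the model name and the tool schema in the prompt are placeholders):

import json
from openai import OpenAI

client = OpenAI()

def select_tool(user_request: str) -> dict:
    # JSON mode constrains the model to emit valid JSON; note the prompt has
    # to mention JSON explicitly or the request is rejected.
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": 'Pick one tool for the request and answer in JSON: {"tool": "<name>", "args": {}}',
            },
            {"role": "user", "content": user_request},
        ],
    )
    return json.loads(response.choices[0].message.content)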
That's a pretty big if.
Let's also consider what happens if an error is thrown. I have no idea what you are actually doing with this information... dynamically generating/constructing HTML?
If an error is thrown, it would be nice to be able to work it out by having GPT return it so you can modify your query and re-send. In your case you are basically forced to restart the whole process.
Also, why aren't you using recommendation AI? I'm not saying what you're doing is wrong. I'm just trying to understand what you are trying to do here.
Again, if you want to use it once and discard it, you're better off using ChatCompletions. It seems like you are ALWAYS expecting some sort of function, so yes, JSON mode would make more sense.
Yeah, I am just saying that is one way. I am building an AI-based web framework where each assistant owns its own micro-app and they build out the system one at a time.
So a lot of the time I just needed the functions called and that's it. But I wanted to extend the framework so that the assistants could communicate with each other. I didn't need the final NLP result at the time, since the other assistants can see it more effectively in the same repo.
Let's also consider what happens if an error is thrown.
That would only happen on OpenAI's side, for the actual choice.
If an error is thrown, it would be nice to be able to work it out by having GPT return it so you can modify your query and re-send.
If an error happens, it is a problem with the code (the actual function), and the AI would have to be able to fix itself for this to be a better option than the log in this situation, IMO. But this makes sense for an end user.
So I want assistants; the assistants don't need to communicate with each other using NLP (I may change that), but I do need that for the end user.
Also, why aren't you using recommendation AI?
What do you mean by this? I am open to ideas.
I am working on this project; I haven't fully updated the readme, I've been busy.
Cool idea.
I guess my main issue is that once you decide NOT to submit your tool outputs, you have limited yourself to a single function call and cannot continue the conversation after the function was made (unless you waited for the run to expire). This would be counter-intuitive for a conversation, as a user would expect GPT to know what it did.
Here's a scenario in my head:
- User asks for code
- Code is generated
- User asks about a certain line of code, or asks for slight adjustment
- What do?
Or
- User asks for code
- Error is thrown and error page
- User asks GPT for advice/help because GPT is essentially the driver
- GPT says WTF are you talking about
In a typical scenario you would return the generated HTML to the GPT model so it knows what the user is talking about. Otherwise, you would need to call a new function, breaking the seamless...ness(?) of what Assistants are supposed to accomplish.
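For reference, that "return it to the model" step is a single call under the Assistants API. This is only a sketch assuming the openai Python SDK's beta namespace; the thread, run, and tool-call IDs would come from a run that is sitting in requires_action:

from openai import OpenAI

client = OpenAI()

def report_tool_result(thread_id: str, run_id: str, tool_call_id: str, output: str) -> None:
    # "output" is whatever you want the model to see: the rendered HTML,
    # a short summary of it, or the error message from a failed tool run.
    client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread_id,
        run_id=run_id,
        tool_outputs=[{"tool_call_id": tool_call_id, "output": output}],
    )

Once the outputs are submitted, the run leaves requires_action and the thread keeps that context, so a follow-up like "explain that line" or "what went wrong?" has something to refer to.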
Admittedly, I didn't read through your code very well and am unsure of how it all works beneath the hood.