How to deliver the exact tool output to the user?

Hi!

I’ve already searched for an answer to this, but all I found was that a tool’s output is always “reformatted” by GPT to adhere to the guardrails. In my project, I’ve created an assistant whose job is to present information to the user formatted as simply as possible. This is achieved by implementing a tool that retrieves all the necessary information and then sends a message with all of this information already formatted.

Do you know how I can prevent GPT from altering the tool’s output? I’ve tried changing the prompt and the tool’s description, but it didn’t work.

I’m wondering if perhaps I should switch to LangChain instead of the OpenAI assistant option.

I don’t understand what you’re asking for here.

You get the tool_call when it is sent to a tool recipient, with the function name and the function arguments. That’s all you need to both fulfill the request and show the information to the user, if they need to see it.
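
As a minimal sketch (assuming the OpenAI Python SDK v1; the tool calls come either from a Chat Completions message or from a run’s required_action), reading that information looks like this:

    import json

    def show_tool_calls(tool_calls):
      # Print each tool call's raw name and arguments, exactly as the
      # model sent them, e.g. to show them to the user
      for tool_call in tool_calls or []:
        name = tool_call.function.name  # e.g. "order_summary"
        args = json.loads(tool_call.function.arguments)
        print(f"Model requested {name} with {args}")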

The AI doesn’t produce the API tool object itself; it emits a special backend format that you will never see, which is captured and reinterpreted by the API in order to send a tool function to you. OpenAI also goes out of their way to have you provide the call history back within their special “function” message format, so you also aren’t writing the AI’s own tool-signaling tokens.

The fact that a tool call might look like this in the AI’s trained language:

    assistant to=wolfram(method) code\n blah blah blah

is not of use to you.

There is no “altering” really. Assistants do have internal function calls to their own tools that are iterative, autonomous, and part of the “agent” behavior.

Hey thank you very much for your reply. I’m sorry but I’m a bit new to these concepts, so I might have said something that doesn’t really make sense.

The point is the following: I am making a chatbot to collect orders for a restaurant. During the conversation I order products X and Y, and the products (and other parameters) are sent to a function called order_summary. If I put a fixed string as the return value of the function, for example “You have ordered the products J and K”, when the bot replies to the user the answer is “You have ordered the products X and Y”, not the products J and K from the fixed string.
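
To make this concrete, my fixed-string test looks roughly like this (a simplified sketch, reconstructed from the description above):

    def order_summary(prod, order_type, order_payment, address):
      # Ignore the real arguments and return a canned string, to test
      # whether the bot repeats it verbatim
      return "You have ordered the products J and K"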

The following snippets are part of the code; maybe they can be useful here.


    # Snippet from a Flask route; assumes `client`, `thread_id`, `run_id`,
    # `json`, `functions`, and Flask's `jsonify` are defined above
    if run_status.status == 'completed':
      messages = client.beta.threads.messages.list(thread_id=thread_id)
      print("messages:______", messages)
      # The newest message in the thread is the assistant's reply
      message_content = messages.data[0].content[0].text
      # Remove annotations (e.g. file citations) from the text
      annotations = message_content.annotations
      for annotation in annotations:
        message_content.value = message_content.value.replace(
            annotation.text, '')
      print("Run completed, returning response")
      return jsonify({
          "response": message_content.value,
          "status": "completed"
      })

    if run_status.status == 'requires_action':
      # The run is paused until an output is submitted for each tool call
      for tool_call in run_status.required_action.submit_tool_outputs.tool_calls:
        if tool_call.function.name == "order_summary":
          arguments = json.loads(tool_call.function.arguments)
          output = functions.order_summary(arguments["prod"],
                                           arguments["order_type"],
                                           arguments["order_payment"],
                                           arguments["address"])
          # Hand the result back so the Assistant can resume the run
          client.beta.threads.runs.submit_tool_outputs(
              thread_id=thread_id,
              run_id=run_id,
              tool_outputs=[{
                  "tool_call_id": tool_call.id,
                  "output": json.dumps(output)
              }])


      # this is the assistant
      assistant = client.beta.assistants.create(
          # Change prompting in prompts.py file
          instructions=assistant_instructions,
          model="gpt-4-1106-preview",
          tools=[{
              "type": "function",
              "function": {
                  "name": "order_summary",
                  "description": ("Used to summarize the order when the customer "
                                  "has provided all the necessary information. "
                                  "Never modify the output of this tool"),
                  "parameters": {
                      "type": "object",
                      "properties": {
                          "prod": {
                              "type": "string",
                              "description": "List of products ordered by the customer"
                          },
                          "order_type": {
                              "type": "string",
                              "description": "Type of order. Can be Delivery or Pickup"
                          },
                          "order_payment": {
                              "type": "string",
                              "description": "Payment method. Can be Satispay or Cash"
                          },
                          "address": {
                              "type": "string",
                              "description": "Possible delivery address. If not present it is False"
                          },
                      },
                      "required": ["prod", "order_type", "order_payment", "address"]
                  }
              }
          }],
          # file_ids=[file.id]
      )

Thank you for the additional information. Can you please share your Assistant’s “Instructions” and the “order_summary” function implementation?

I will need the “order_summary” function to understand what its output looks like, and the Assistant’s “Instructions” so I can see what changes are necessary to return the “order_summary” output as is.

This is a feature that’s been requested numerous times. It’s not possible without bypassing the Assistant framework. If you need to do it, you can send an “OK” to the Assistant as the tool output and then add the verbatim content in a separate message (that works if you are handling the conversation rendering yourself; you’re out of luck if you are passing responsibility to threads), at the cost of betraying the single source of truth.
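
A rough sketch of that workaround, assuming you render the conversation in your own app (VERBATIM_SUMMARY is an illustrative name; thread_id, run_id, and tool_call come from the run-handling code above):

    # Acknowledge the tool call with a bare "OK" so the run can complete
    client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread_id,
        run_id=run_id,
        tool_outputs=[{"tool_call_id": tool_call.id, "output": "OK"}])
    # ...then, once the run completes, show your own verbatim text in the
    # UI instead of the Assistant's paraphrased reply
    return jsonify({"response": VERBATIM_SUMMARY, "status": "completed"})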

Honestly, I would just not use Assistants unless you’re willing to wait a year for something usable to be implemented. Apparently an update is coming soon, but I doubt it addresses anything people have asked for; it will instead just complicate things.

For context: OP wants to return static content, not to have it spun by GPT. There are obvious use cases here: if the returned content is long and needs to be repeated verbatim, it makes no sense to have GPT churn through it, wasting tokens and possibly spinning it.

This could be possible if OpenAI just gave us the ability to create Assistant messages but it’s becoming increasingly obvious that their implementation of the framework prevents this from happening.
