Create_and_poll with function calling?

Hello! I’m currently using old-school manual polling of my runs. I’m not streaming, but I am using function calling. I’ve simplified this code, but here’s essentially what my loop looks like:

while True:

    run = self.retrieve_run(run_id=run.id)
    
    if run.status == 'completed':
        # fetch messages and return them
        break
    
    elif run.status == 'failed':
        # do stuff
        break
    
    elif run.status == 'requires_action':
        # run tools and submit output
        ...
    
    elif run.status == 'error':
        break

I’d love to offload the polling using:

run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id, assistant_id=assistant.id
)

But I don’t understand how to handle the tool calls. Would I call

tool_calls = run.required_action.submit_tool_outputs.tool_calls

after the run is completed instead of within the polling cycle?

Thanks!!


@clone45 here is a snippet that worked for me. You run the assistant with create_and_poll() and handle the tool-output submission with submit_tool_outputs_and_poll(). Hope that helps:

async def submit_message(assistant_id, thread_id, user_message):

  response_message = ""

  thread = None
  run = None  # variable to store the run object
  try:

    if thread_id is None or thread_id == "":
      thread = await client.beta.threads.create()
      thread_id = thread.id

    # Creates the message
    await client.beta.threads.messages.create(thread_id=thread_id,
                                              role="user",
                                              content=user_message)

  
    run = await client.beta.threads.runs.create_and_poll(
        thread_id=thread_id, assistant_id=assistant_id, poll_interval_ms=2000)


    # RUN STATUS: REQUIRES ACTION
    if run.status == 'requires_action':
      # do stuff: loop over run.required_action.submit_tool_outputs.tool_calls,
      # run your functions, and collect tool_call_id / response_message

      # Send the response back to the function calling tool
      run = await client.beta.threads.runs.submit_tool_outputs_and_poll(
          thread_id=run.thread_id,
          run_id=run.id,
          tool_outputs=[
            {
              "tool_call_id": tool_call_id,
              "output": response_message
            }
          ],
      )
      
      
    # RUN STATUS: COMPLETED
    if run.status == "completed":
      #do stuff
      response_message = await get_response(run.thread_id)

      return response_message

    # RUN STATUS: EXPIRED | FAILED | CANCELLED | INCOMPLETE
    if run.status in ['expired', 'failed', 'cancelled', 'incomplete']:
      # do stuff, e.g. surface run.last_error
      return run.last_error

  except Exception as e:
    # handle exception (log it, return an error message, etc.)
    print(f"Error: {e}")

And here is the helper that returns the response:

async def get_response(thread_id):
  messages = await client.beta.threads.messages.list(thread_id=thread_id)
  message_content = messages.data[0].content[0].text

  # Remove annotations
  annotations = message_content.annotations
  for annotation in annotations:
    message_content.value = message_content.value.replace(annotation.text, '')

  response_message = message_content.value
  return response_message
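
For reference, calling it looks something like this (client is assumed to be an AsyncOpenAI instance; pass None or "" as thread_id on the first turn so a new thread gets created — the assistant ID below is just a placeholder):

reply = await submit_message(assistant_id="asst_...",
                             thread_id=None,
                             user_message="Hi! What can you do?")
print(reply)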

Hi, is there a way to define a limit on the number of polls we want to execute? I don’t like the while loop running indefinitely (which is what the SDK is doing under the hood). Is there any way to set a timeout on this polling function, or a quick workaround?

Hi Roccha,

Thanks so much for taking the time to reply. I appreciate your kindness. I haven’t tried your code yet, but I have a question about it. Let me focus on just these parts:

run = await client.beta.threads.runs.create_and_poll(
    thread_id=thread_id, assistant_id=assistant_id, poll_interval_ms=2000)

if run.status == 'requires_action':
    run = await client.beta.threads.runs.submit_tool_outputs_and_poll(
    ...

if run.status == "completed":
    response_message = await get_response(run.thread_id)
    ...

Ahh… I may have answered my own question. Let me just talk it through out loud.

if run.status == 'requires_action':

If the run status is requires_action, those actions must be taken before the run status will reach the completed state. Not shown in your example is the code that gets the tool_calls information and actually does the work.

tool_calls = run.required_action.submit_tool_outputs.tool_calls

Once the tools have been called and the output collected, it needs to be sent back to the LLM, which is:

run = await client.beta.threads.runs.submit_tool_outputs_and_poll

This does three things:

  1. It submits the tool output back to the LLM
  2. It waits for this to be completed before moving on.
  3. It updates the run variable, which is important, because the run state will have changed. (Otherwise, if run.status == "completed": would never be true when there are tool calls, because the status held in the run variable would stay requires_action.)

Finally, if run.status == "completed": is checked, and if everything went well, we’re able to collect the response.
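
So, putting it together, the whole flow would be roughly this (untested sketch; handle_tool_call is a hypothetical helper that dispatches to my own functions):

run = await client.beta.threads.runs.create_and_poll(
    thread_id=thread_id, assistant_id=assistant_id)

if run.status == 'requires_action':
    tool_outputs = []
    for tool_call in run.required_action.submit_tool_outputs.tool_calls:
        # handle_tool_call is hypothetical: dispatch to my own functions
        result = handle_tool_call(tool_call.function.name,
                                  tool_call.function.arguments)
        tool_outputs.append({"tool_call_id": tool_call.id, "output": result})

    # Submit all outputs in one call and wait for the next final state
    run = await client.beta.threads.runs.submit_tool_outputs_and_poll(
        thread_id=thread_id, run_id=run.id, tool_outputs=tool_outputs)

if run.status == 'completed':
    # Newest message comes first in the list
    messages = await client.beta.threads.messages.list(thread_id=thread_id)
    print(messages.data[0].content[0].text.value)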

Very nice!

Hi @viraj.bhatt,

I want to preface this with — I might be wrong! But I did a little bit of research and think I found an answer for you. Try using the following arguments:

Untested code:

run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
    poll_interval_ms=5000,
    timeout=60.0
)


Well, I could be wrong, but by the looks of it the timeout parameter is being passed to the retrieve call, so each request should time out within X seconds. It still doesn’t touch the while loop itself, though. What I want to do is break out of the while loop once, say, 120 seconds have elapsed from the start of the poll.

https://github.com/openai/openai-python/blob/e9724398d2bdd87ba41f199c3577303f1b80f2c7/src/openai/resources/beta/threads/runs/runs.py#L1062

Ah, bummer. I’m looking through the source and I don’t see anything that would really streamline this for you. Hopefully someone more knowledgeable can chime in?
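
For what it’s worth, one workaround would be to skip create_and_poll and keep a manual loop with your own wall-clock deadline, cancelling the run when the budget runs out. Untested sketch (synchronous client assumed; the helper name and the 120-second budget are just illustrative):

import time

def run_with_deadline(client, thread_id, assistant_id, max_seconds=120, poll_interval=2.0):
    # Create the run ourselves so we control the polling loop
    run = client.beta.threads.runs.create(thread_id=thread_id, assistant_id=assistant_id)
    deadline = time.monotonic() + max_seconds

    while run.status in ('queued', 'in_progress', 'cancelling'):
        if time.monotonic() > deadline:
            # Budget exceeded: cancel the run and bail out
            client.beta.threads.runs.cancel(run_id=run.id, thread_id=thread_id)
            raise TimeoutError(f"Run {run.id} exceeded {max_seconds} seconds")
        time.sleep(poll_interval)
        run = client.beta.threads.runs.retrieve(run_id=run.id, thread_id=thread_id)

    # Caller still handles requires_action / completed / failed / etc.
    return run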

Hi @clone45, you are right. Let me fill in the rest of the code to make this clearer.

# Creates the message
await client.beta.threads.messages.create(thread_id=thread_id,
                                              role="user",
                                              content=user_message)

# running create_and_poll...
run = await client.beta.threads.runs.create_and_poll(
        thread_id=thread_id, assistant_id=assistant_id, poll_interval_ms=2000)


# RUN STATUS: REQUIRES ACTION
if run.status == 'requires_action':

  tool_outputs = []

  # Handle the function call(s)
  for tool_call in run.required_action.submit_tool_outputs.tool_calls:
    if tool_call.function.name == "12345": # name of your function call

      # Arguments returned by the LLM (a JSON string)
      rawArguments = tool_call.function.arguments

      # response from function calling
      response_message = ""
      try:
        arguments = json.loads(rawArguments)

        output = await function_to_be_called(arguments["firstname"], arguments["phone"])
        response_message = output["response"]

      except json.JSONDecodeError as e:
        response_message = f"JSONDecodeError: {e}"
      except KeyError as e:
        response_message = f"Missing required argument: {e}"
      finally:
        # Collect the response for this tool call. Pass the response from your
        # function back to OpenAI, so it knows whether everything worked fine
        # or (as happens to me a lot) an argument was invalid or filled with a
        # placeholder.
        tool_outputs.append({
            "tool_call_id": tool_call.id,
            "output": response_message
        })

  # Send the responses back to the function calling tool in a single call and
  # poll until the run reaches its next final state
  run = await client.beta.threads.runs.submit_tool_outputs_and_poll(
      thread_id=run.thread_id,
      run_id=run.id,
      tool_outputs=tool_outputs,
  )


# RUN STATUS: COMPLETED
if run.status == "completed":
  response_message = await get_response(run.thread_id)
  return {"response": response_message, "status_code": 200}

# RUN STATUS: EXPIRED | FAILED | CANCELLED | INCOMPLETE
if run.status in ['expired','failed','cancelled','incomplete']:
  return {"response": run.last_error, "status_code": 500}
  1. You’re right: if run.status == 'requires_action', you go through tool_calls and do the action (I’ve updated the code above to make this clearer).
  2. After calling your function, you need to pass the output back to the LLM, so it knows whether everything worked OK or whether information was missing, an error occurred, etc.
  3. Right after the output is submitted, the run goes back to the 'in_progress' state (https://platform.openai.com/docs/assistants/how-it-works/run-lifecycle).
    That’s why submit_tool_outputs_and_poll updates the run variable: it polls until the next final state, which hopefully is "completed".