Help using OpenAI Assistants function calling from Python code

Hello! I need help using OpenAI’s assistants. I have an existing assistant with tools like function calling, code interpreter, and retrieval. I have a function called get_current_weather defined on the OpenAI assistant, and the definition looks like this:

{
  "name": "get_current_weather",
  "description": "Get the current weather in a given location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "The city and country code, e.g., some_location,us. Defaults to some_location,us if not specified."
      },
      "unit": {
        "type": "string",
        "enum": [
          "celsius",
          "fahrenheit"
        ],
        "default": "fahrenheit"
      }
    },
    "required": []
  }
}

I was wondering how I can call my OpenAI assistant’s function from the Assistants API in my Python code:

def get_current_weather(city="some_location", country_code="us"):
    base_url = "the url"
    complete_url = f"{base_url}appid={openweathermap_api_key}&q={city},{country_code}&units=imperial"
    response = requests.get(complete_url)
    weather_data = response.json()
    if weather_data['cod'] == 200:
        main = weather_data['main']
        temperature = main['temp']
        humidity = main['humidity']
        clouds = weather_data['clouds']['all']
        weather_description = weather_data['weather'][0]['description']
        chance_of_rain = weather_data.get('rain', {}).get('1h', 0)
        weather_report = [
            f"Temperature: {temperature}°F",
            f"Humidity: {humidity}%",
            f"Cloudiness: {clouds}%",
            f"Description: {weather_description}",
            f"Chance of Rain (next hour): {chance_of_rain}%"
        ]
        
        return "\n".join([f"- {item}" for item in weather_report])
    else:
        return "Weather data not found."

I can only get it to work when I call the Chat Completions API directly like this, without using my assistant:

    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )

and I also have to define the functions in the Python code instead of using the function definitions already attached to my existing OpenAI Assistant.

Check the Assistants API Overview on the cookbook site for an in-depth explanation.

In short, you will call your get_current_weather handler when the run status becomes requires_action, which means your function has been invoked.
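For reference, here is a rough sketch of that flow. The function name `wait_and_handle_tools` and the `handlers` dict are my own (not from the cookbook); the sketch assumes the `client`, `thread`, and `run` objects from the openai-python Assistants API:

```python
import json
import time


def wait_and_handle_tools(client, thread, run, handlers):
    """Poll a run; if it stops to request tool output, call the matching
    Python handler and submit the result back to the run."""
    # Poll until the run leaves the queued/in_progress states
    while run.status in ("queued", "in_progress"):
        time.sleep(0.5)
        run = client.beta.threads.runs.retrieve(
            thread_id=thread.id, run_id=run.id
        )

    # required_action is only populated when status == "requires_action";
    # on any other status (completed, failed, ...) it is None
    if run.status == "requires_action":
        outputs = []
        for tool_call in run.required_action.submit_tool_outputs.tool_calls:
            func = handlers[tool_call.function.name]
            args = json.loads(tool_call.function.arguments or "{}")
            outputs.append(
                {"tool_call_id": tool_call.id, "output": json.dumps(func(**args))}
            )
        run = client.beta.threads.runs.submit_tool_outputs(
            thread_id=thread.id, run_id=run.id, tool_outputs=outputs
        )
    return run
```

You would call it with something like `run = wait_and_handle_tools(client, thread, run, {"get_current_weather": get_current_weather})`, so the status check decides whether a tool was actually invoked.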

I followed the instructions and made it work… kind of. When I ask something that requires a function, like “What’s the weather like?”, it works. But whenever I input “hi” or “what is the circumference of the earth”, I get this error:

Traceback (most recent call last):
  File "C:\Users\Harry\Documents\Code\HAVID\GPT\GPTV2_8.py", line 93, in <module>
    tool_call = run.required_action.submit_tool_outputs.tool_calls[0]
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'submit_tool_outputs'

So I’m just a little lost.

Here is the full code:

from openai import OpenAI  
import time 
import requests 
import json 

api_key = "API_key"
openweathermap_api_key = "Another_API_Key"
client = OpenAI(api_key=api_key)

ASSISTANT_ID = "asst_something"

def get_current_weather(city="Some_location", country_code="us"):
    base_url = "the_URL"
    complete_url = f"{base_url}appid={openweathermap_api_key}&q={city},{country_code}&units=imperial"
    response = requests.get(complete_url)
    weather_data = response.json()
    if weather_data['cod'] == 200:
        main = weather_data['main']
        temperature = main['temp']
        humidity = main['humidity']
        clouds = weather_data['clouds']['all']
        weather_description = weather_data['weather'][0]['description']
        chance_of_rain = weather_data.get('rain', {}).get('1h', 0)
        weather_report = [
            f"Temperature: {temperature}°F",
            f"Humidity: {humidity}%",
            f"Cloudiness: {clouds}%",
            f"Description: {weather_description}",
            f"Chance of Rain (next hour): {chance_of_rain}%"
        ]
        
        return "\n".join([f"- {item}" for item in weather_report])
    else:
        return "Weather data not found."
    
function_json = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and country code, e.g., some_location,us. Defaults to some_location, US if not specified.",
                "default": "some_location,us"
            },
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"], "default": "fahrenheit"},
        },
        "required": [], 
    }, 
}

assistant = client.beta.assistants.update(
    ASSISTANT_ID,
    tools=[
        {"type": "code_interpreter"},
        {"type": "retrieval"},
        {"type": "function", "function": function_json},
    ],
)

def submit_message(assistant_id, thread, user_message):
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=user_message
    )
    return client.beta.threads.runs.create(
        thread_id=thread.id,
        assistant_id=assistant_id,
    )

def get_response(thread):
    return client.beta.threads.messages.list(thread_id=thread.id, order="asc")

def create_thread_and_run(user_input):
    thread = client.beta.threads.create()
    run = submit_message(ASSISTANT_ID, thread, user_input)
    return thread, run

def wait_on_run(run, thread):
    while run.status == "queued" or run.status == "in_progress":
        run = client.beta.threads.runs.retrieve(
            thread_id=thread.id,
            run_id=run.id,
        )
        time.sleep(0.5)
    return run

thread, run = create_thread_and_run(
    "what is the circumference of the earth"
) 
run = wait_on_run(run, thread)

tool_call = run.required_action.submit_tool_outputs.tool_calls[0]
name = tool_call.function.name
arguments = json.loads(tool_call.function.arguments) 

responses = get_current_weather() 

run = client.beta.threads.runs.submit_tool_outputs(
    thread_id=thread.id,
    run_id=run.id,
    tool_outputs=[
        {
            "tool_call_id": tool_call.id,
            "output": json.dumps(responses),
        }
    ],
)

run = wait_on_run(run, thread)
print(run)

I am not a Python guy, so my understanding of your code might be off, but it seems that you always assume that your tool is invoked:

run = wait_on_run(run, thread)

# this part
tool_call = run.required_action.submit_tool_outputs.tool_calls[0]
name = tool_call.function.name
arguments = json.loads(tool_call.function.arguments) 
...
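For what it’s worth, there are two things to fix here: guard on `run.status` before touching `required_action` (it is `None` for runs that never called a tool, which is exactly your AttributeError), and actually pass the model’s arguments through instead of calling `get_current_weather()` with defaults. Also note your JSON schema declares `location`/`unit` while the Python function takes `city`/`country_code`, so something has to translate between them. A small helper could do that (`weather_args` is my own name, a sketch, not an official API):

```python
import json


def weather_args(arguments_json, default_location="some_location,us"):
    """Translate a tool call's JSON arguments (which use the schema's
    'location' field, formatted as 'city,country') into the city and
    country_code parameters that get_current_weather() expects."""
    args = json.loads(arguments_json or "{}")
    location = args.get("location") or default_location
    city, _, country = location.partition(",")
    return {"city": city.strip(), "country_code": country.strip() or "us"}


# Then, after `run = wait_on_run(run, thread)` in your script:
#
# if run.status == "requires_action":
#     tool_call = run.required_action.submit_tool_outputs.tool_calls[0]
#     responses = get_current_weather(**weather_args(tool_call.function.arguments))
#     run = client.beta.threads.runs.submit_tool_outputs(
#         thread_id=thread.id,
#         run_id=run.id,
#         tool_outputs=[{"tool_call_id": tool_call.id,
#                        "output": json.dumps(responses)}],
#     )
#     run = wait_on_run(run, thread)
```

With the `if` in place, questions like “hi” just complete normally and never reach the tool-output code path.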

Did you find a solution?

Here is an example of using the stream mode of the Assistants API: npi/examples/openai/assistant.py at dev · npi-ai/npi · GitHub

Hope this helps.