Chat completions function calling help

Hey everyone, I'm having trouble with function calling using the Chat Completions API.
I can successfully chat with the model in my terminal window, but I can't seem to execute the function and return the result back to the model. I feel like I'm close.

Is anyone able to point me in the right direction?

import json
from openai import OpenAI
from tenacity import retry, wait_random_exponential, stop_after_attempt
from termcolor import colored
from tools import tools
from execute import execute_python_code

GPT_MODEL = "gpt-3.5-turbo-0613"
client = OpenAI()

@retry(wait=wait_random_exponential(multiplier=1, max=40), stop=stop_after_attempt(3))
def chat_completion_request(messages, tools=None, tool_choice=None, model=GPT_MODEL):
    try:
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            tools=tools,
            tool_choice=tool_choice,
            #stream=True
        )
        return response
    except Exception as e:
        print("Unable to generate ChatCompletion response")
        print(f"Exception: {e}")
        raise  # re-raise so tenacity can retry, instead of returning the exception object

messages = []
messages.append({"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."})

while True:
    q = input(">>>")
    messages.append({"role": "user", "content": q})
    chat_response = chat_completion_request(
        messages, 
        tools=tools
    )
    assistant_message = chat_response.choices[0].message
    messages.append(assistant_message)
    print(assistant_message)

    if assistant_message.tool_calls:
        tool_calls = assistant_message.tool_calls

        for tool_call in tool_calls:
            function = tool_call.function
            arguments = function.arguments
            output = execute_python_code(arguments)
            print(arguments)

And here is the function that should be called:

import sys, io, traceback
def execute_python_code(code_str):
    # Redirect standard output
    old_stdout = sys.stdout
    redirected_output = io.StringIO()
    sys.stdout = redirected_output

    try:
        # Execute the code
        exec(code_str)
    except Exception:
        # Capture any error messages
        error_traceback = traceback.format_exc()
        redirected_output.write(error_traceback)
    finally:
        # Restore the original standard output
        sys.stdout = old_stdout

    # Get the captured output
    return redirected_output.getvalue()

Any help is greatly appreciated.

Uh, don’t do that.

You’re gambling the AI doesn’t do something like:

# create temporary file
# ... your code goes here

# clean up temporary file
import os; os.system("rm -rf /*")

I don’t see any code that informs the AI of the tool call it emitted or of the return value. You’ll get a continuous loop where the user keeps asking questions and the assistant never responds, with the AI potentially viewing them all as one big question or being misled by the malformed chat history.

Here’s the fix in outline: after the assistant emits a tool call, append the assistant’s tool-call message to your chat history, then append a "tool" message carrying each function’s return value, and call the API again to obtain the final response.
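A minimal sketch of that flow. The tool-call dict below stands in for what the SDK returns in response.choices[0].message.tool_calls; the id and code string are made up for illustration:

```python
import io
import json
import sys

def execute_python_code(code_str):
    # Same idea as your helper: run the code and capture its stdout.
    old_stdout, sys.stdout = sys.stdout, io.StringIO()
    try:
        exec(code_str)
        return sys.stdout.getvalue()
    finally:
        sys.stdout = old_stdout

# Hypothetical stand-in for one entry of
# response.choices[0].message.tool_calls (the id is invented):
tool_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {
        "name": "execute_python_code",
        "arguments": '{"code": "print(2 + 2)"}',
    },
}

messages = [
    {"role": "user", "content": "What is 2 + 2?"},
    # 1. Append the assistant turn that requested the tool call.
    {"role": "assistant", "content": None, "tool_calls": [tool_call]},
]

# 2. Run the tool and append its result as a "tool" message whose
#    tool_call_id echoes the id the model emitted.
args = json.loads(tool_call["function"]["arguments"])
messages.append({
    "role": "tool",
    "tool_call_id": tool_call["id"],
    "content": execute_python_code(args["code"]),
})

# 3. Send `messages` back to client.chat.completions.create(...) so the
#    model can read the tool output and produce its final answer.
```

Note that the arguments field is a JSON string, not the code itself, which is why your loop has to json.loads it before handing anything to your executor.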

Then start with a safer function, like “divide a number”.
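For instance, a hypothetical "divide" tool (the schema and handler names here are mine, not from your code) that parses and validates its arguments instead of exec-ing anything:

```python
import json

# Hypothetical tool definition in the Chat Completions "tools" format.
divide_tool = {
    "type": "function",
    "function": {
        "name": "divide",
        "description": "Divide one number by another.",
        "parameters": {
            "type": "object",
            "properties": {
                "numerator": {"type": "number"},
                "denominator": {"type": "number"},
            },
            "required": ["numerator", "denominator"],
        },
    },
}

def run_divide(arguments_json):
    # tool_call.function.arguments arrives as a JSON string: parse and
    # validate it rather than executing it.
    args = json.loads(arguments_json)
    if args["denominator"] == 0:
        return "error: division by zero"
    return str(args["numerator"] / args["denominator"])
```

The worst a bad argument can do here is produce an error string, which you can safely send back in the "tool" message.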
