Assistants API not engaging in conversation

The Assistants API is currently instructed to send 3 parameters in a function call. Of these 3 params, 2 are mandatory; if the user has not given those in the first prompt, the assistant should ask the user one follow-up question and then pass the info back. The issue I am facing is that when the assistant has to follow up, the function is not called back. It works fine if all the info is given in the first go.

Code:

# Globals that hold the extracted parameters for later use
query_content_global = ""
query_top_global = ""
additional_param_global = ""

def get_current_query_content(query_content, query_top, additional_param):
    global query_content_global, query_top_global, additional_param_global
    query_content_global = query_content
    query_top_global = query_top
    additional_param_global = additional_param
    return "Your request has been received and is being processed."

import json
import time

from openai import OpenAI

client = OpenAI()

info = (
    "The input which you receive from the user has to be divided into 3 different information criteria:\n\n"
    "1. Article Top Query - Interpret the user's input and determine whether the query requires looking into any specific customer or any specific industry vertical. The intent is to get the filter to be applied to the query, i.e. where the data has to be taken from.\n\n"
    "2. Article Content Query - Create the text of what is to be searched from the query, which will later be used for a similarity search on the embeddings.\n\n"
    "3. Additional filter - Any additional filter to be applied to the query, such as timelines.\n\n"
    "Article Top Query and Article Content Query are mandatory. If you are unable to identify these two parameters in the user query, ask the user again to get this information.\n\n"
    "Get final confirmation by saying 'Does this cover everything you need, or is there any other detail or specific timeframe you'd like to include in the analysis?'\n\n"
    "Once confirmation is received, use the function instruction, pass the parameters collected as per the programming requirement, and in the frontend create the output saying the request will be processed soon."
)

assistant = client.beta.assistants.create(
    instructions=info,
    model="gpt-4-turbo",
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_current_query_content",
                "description": "Get the current query content and additional details for specific user input",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query_content": {
                            "type": "string",
                            "description": "The content inquired by the user"
                        },
                        "query_top": {
                            "type": "string",
                            "description": "Topic related to the query"
                        },
                        "additional_param": {
                            "type": "string",
                            "description": "Additional parameter related to the query"
                        }
                    },
                    # Only the two mandatory parameters are required; additional_param is optional
                    "required": ["query_content", "query_top"]
                }
            }
        }
    ]
)

Step 2: Create a Thread

thread = client.beta.threads.create()

Step 3: Add a Message to a Thread

message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Can you please give me right to audit clause from Aetna"
)

Step 4: Run the Assistant

run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id
)

print(run.model_dump_json(indent=4))

while True:
    # Wait for 5 seconds
    time.sleep(5)

    # Retrieve the run status
    run_status = client.beta.threads.runs.retrieve(
        thread_id=thread.id,
        run_id=run.id
    )
    print(run_status.model_dump_json(indent=4))

    # If run is completed, get messages
    if run_status.status == 'completed':
        messages = client.beta.threads.messages.list(
            thread_id=thread.id
        )

        # Loop through messages and print content based on role
        for msg in messages.data:
            role = msg.role
            content = msg.content[0].text.value
            print(f"{role.capitalize()}: {content}")

        break  # Once the run is completed, exit the while loop.
    elif run_status.status == 'requires_action':
        print("Function Calling")
        required_actions = run_status.required_action.submit_tool_outputs.model_dump()
        print(required_actions)
        tool_outputs = []
        for action in required_actions["tool_calls"]:
            func_name = action['function']['name']
            arguments = json.loads(action['function']['arguments'])

            if func_name == "get_current_query_content":
                # Extract the parameters from the arguments JSON
                query_content = arguments['query_content']
                query_top = arguments['query_top']
                additional_param = arguments.get('additional_param', "")  # optional

                # Call the function with the collected parameters
                output = get_current_query_content(
                    query_content=query_content,
                    query_top=query_top,
                    additional_param=additional_param
                )
                tool_outputs.append({
                    "tool_call_id": action['id'],
                    "output": output
                })
            else:
                raise ValueError(f"Unknown function: {func_name}")

        print("Submitting outputs back to the Assistant...")
        client.beta.threads.runs.submit_tool_outputs(
            thread_id=thread.id,
            run_id=run.id,
            tool_outputs=tool_outputs
        )
    else:
        print("Waiting for the Assistant to process...")
        time.sleep(5)  # Delay the next iteration of the loop.

Quite frankly, I have no idea what this function is supposed to do, what inputs it expects, what it would return, and what use it would be in the course of a chatbot conversation.

Nor would the AI have any idea.

It is used to extract a statement from the user's request:

What is the topic of inquiry,

Who the inquiry is about, and

Any additional filters.

I have a separate function that copies these function arguments into global variables, which I then use on my vector database for filtering and similarity search to get the information out, as sketched below. Hope this makes sense.
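Roughly, the downstream use looks like this (just a sketch; the vector database client and its search() signature are placeholders, not the actual library I use):

def run_vector_search(search_index, top_k=5):
    # Hypothetical sketch: feed the captured globals into a vector-database query.
    # `search_index` stands in for whatever vector DB client is actually used.
    return search_index.search(
        query_text=query_content_global,            # text for the embedding similarity search
        filters={"customer": query_top_global},     # metadata filter (customer / industry vertical)
        timeframe=additional_param_global or None,  # optional extra filter, e.g. timelines
        top_k=top_k,
    )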

The issue is: if the user gives all the info in one go, the function is called; however, it fails when the conversation has to be extended.
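As far as I understand it, after each run there are two outcomes to handle, and it is the second one I am missing (a rough sketch of the branching, not working code):

run_status = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

if run_status.status == "requires_action":
    # All mandatory parameters were identified: execute the function and
    # send the result back via submit_tool_outputs, as in the loop above.
    pass
elif run_status.status == "completed":
    # The assistant answered with a follow-up question instead of calling the
    # function, so the same thread needs another user message and another run.
    pass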

Consider how an AI would use a function automatically when it finds that a function would be useful for satisfying a user’s request:

Perform an action:

user: Post a tweet using my account that says "AI is smart"
assistant (to function): tweet_function({"method": "post", "tweet_text": "AI is smart"})
function: tweet_function("Success! The tweet has been posted")
assistant: Your tweet was sent and should be visible to everybody!

Enhance knowledge:

user: What is Sam Altman talking about today?
assistant (to function): internet_search({"search_query": "Sam Altman OpenAI news"})
function: internet_search("results 1: [x.com] 'Sam Altman likes gpt2', results 2: [yahoo.com]…")
assistant: Sam Altman is interested in gpt2, an AI model from 2019, or possibly a new AI

Enhance skill:

user: I shoot my gun at black bill
assistant (to function): random_choice(["hit", "graze", "miss", "accident"])
function: random_choice("accident")
assistant: The gun misfires, explodes, and blows your hand clean off!

You can see that all of these are driven by a need to fulfill user input, not to compel the user to input something. What you propose doesn’t seem to fit this pattern. It also doesn’t tell us why the function exists or what it does. Adequate description is necessary.

A purpose-driven function would have a description and name:

knowledgebase_search:
description: You have a source of new information about XYX company that provides knowledge you can’t answer yourself. By making a query, you will receive the top search results. You must query the knowledgebase for any user interaction that requires more information about XYX. If you anticipate some irrelevant results, you can also remove query results by supplying a word that demotes those results to focus the search.
Properties:

This gives a well-described function that can amend the knowledge of the AI.
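Purely as an illustration (the tool name and property fields here are made up to match the description above, not taken from your code), the properties might be specified along these lines:

knowledgebase_search_tool = {
    "type": "function",
    "function": {
        "name": "knowledgebase_search",
        "description": (
            "You have a source of new information about XYX company that provides "
            "knowledge you can't answer yourself. By making a query, you will "
            "receive the top search results."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Search phrase describing the information needed about XYX"
                },
                "demote_word": {
                    "type": "string",
                    "description": "Optional word that demotes irrelevant results to focus the search"
                }
            },
            "required": ["query"]
        }
    }
}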

Or I could be mistaken; your information is vague enough that you could even want a box to pop up and ask the user some questions.

My issue is: if I tell the assistant to double-check with the user before doing the function call, it works fine in the Playground, but when I execute it in a Jupyter notebook, I am not given a way to input the follow-up response.

Were you able to solve this? Happy to look into it.

No sir, still not working. When I tell the assistant not to ask the user for confirmation or additional questions (in short, not to engage in conversation), the program works fine. But if I tell the assistant to ask the user for confirmation before proceeding, I am not getting the option to input a second time:

def process_openai_assistant(user_input):
    logging.debug("Processing with OpenAI Assistant")

    # Set up the instructions and tools
    info = """
    The input which you receive from the user has to be divided into 3 different information criteria:
    1. Subject - Subject of the requirement
    2. Entity - Entity of the requirement. If not present, call it General
    3. Query Type - What is the type of query asked. Categorize it as either "general" or "full analysis" or "compare"
    Do not ask the user for confirmation; use the function instruction, pass the parameters collected as per the programming requirement, and in the frontend create the output saying the request will be processed soon.
    """

    assistant = client.beta.assistants.create(
        instructions=info,
        model="gpt-4-turbo",
        tools=[{
            "type": "function",
            "function": {
                "name": "get_current_query_content",
                "description": "Stores the content, query type and topic from the user input globally",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query_content": {"type": "string", "description": "The content inquired by the user"},
                        "query_top": {"type": "string", "description": "Topic related to the query"},
                        "analysis_type": {"type": "string", "description": "Type of query asked"}
                    },
                    "required": ["query_content", "query_top", "analysis_type"]
                }
            }
        }]
    )

    thread = client.beta.threads.create()
    message = client.beta.threads.messages.create(thread_id=thread.id, role="user", content=user_input)
    run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

    # Implementing a timeout mechanism to avoid infinite loops
    timeout_limit = 120  # timeout limit in seconds (e.g., 2 minutes)
    start_time = time.time()

    while True:
        run_status = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

        if run_status.status == 'requires_action':

            required_actions = run_status.required_action.submit_tool_outputs.model_dump()
            tool_outputs = []

            for action in required_actions["tool_calls"]:
                func_name = action['function']['name']
                arguments = json.loads(action['function']['arguments'])

                if func_name == "get_current_query_content":
                    query_content = arguments['query_content']
                    query_top = arguments['query_top']
                    analysis_type = arguments['analysis_type']

                    # Call the function with all required parameters
                    # (get_current_query_content and the *_global variables have been
                    # updated elsewhere to use analysis_type instead of additional_param)
                    output = get_current_query_content(query_content=query_content, query_top=query_top, analysis_type=analysis_type)
                    tool_outputs.append({
                        "tool_call_id": action['id'],
                        "output": output
                    })

            # Submit the outputs so the run can finish cleanly
            client.beta.threads.runs.submit_tool_outputs(
                thread_id=thread.id,
                run_id=run.id,
                tool_outputs=tool_outputs
            )

            # Check if global variables have been updated
            if query_content_global != "" and query_top_global != "" and analysis_type_global != "":
                logging.debug("Global variables have been updated.")
                break

        if time.time() - start_time > timeout_limit:
            logging.error("Timeout reached without completing processing.")
            break

        time.sleep(5)  # Sleep to throttle the loop and avoid hitting rate limits

    logging.debug(f"Final globals: query_content_global={query_content_global}, query_top_global={query_top_global}, analysis_type_global={analysis_type_global}")

I am using Flask to render an HTML page to input the query the first time, and the above program works. But if I add an instruction to the Assistants API to ask for confirmation, or to ask when it needs more clarity, the run goes into "queued" and I am not getting an option to input or add a message to the original thread.
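What I think is missing, and what I have not yet wired into the Flask round-trip, is a branch for when the run completes with a question instead of a tool call: show the question, collect a second input, append it to the same thread, and run again. A rough sketch (input() stands in for the Flask form submission):

if run_status.status == 'completed':
    # The assistant asked its clarifying question instead of calling the function.
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    assistant_question = messages.data[0].content[0].text.value  # newest message first
    print(f"Assistant: {assistant_question}")

    follow_up = input("Your answer: ")  # in Flask this would be a second form round-trip

    # Add the follow-up to the SAME thread and start another run;
    # this second run is the one that should end in requires_action.
    client.beta.threads.messages.create(thread_id=thread.id, role="user", content=follow_up)
    run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
    # ...then keep polling run_status as before.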