Assistants API - Function Calling

Hi there,

I am facing an issue where the Assistants API keeps calling the same function again even though the function's output has already been submitted. Has anyone faced this issue and resolved it?

Here is my Python code:

    async def assistants_api_call(self, thread_id, user_prompt, user:User, timeout_seconds=30):
        """
        Handle assistant API calls with timeout and required actions.
        """
        try:
            # Ensure only one run is active at a time
            await self.cancel_all_runs(thread_id)

            # Adding the user's message to the thread before starting the run
            await self.add_message_to_thread(thread_id=thread_id, role='user', content=user_prompt)

            # Create a run and wait for its completion or timeout
            run = await self.client.beta.threads.runs.create(thread_id=thread_id, assistant_id=self.assistant_id)
            
            # Capture the final run object so the status check below reflects its terminal state
            run = await asyncio.wait_for(self.wait_for_run_completion(thread_id, run, user), timeout_seconds)

        except asyncio.TimeoutError:
            logger.error("Assistant run exceeded timeout limit")
            await self.cancel_run(thread_id, run)
            raise Exception("Assistant run timed out")
        
        if run.status != 'completed':
            raise Exception(f"Assistant run ended with status '{run.status}'")
        
        return await self.get_latest_message(thread_id)

    async def wait_for_run_completion(self, thread_id, run, user: User):
        """
        Waits for a run to reach a terminal state and returns the final run object.

        Args:
            thread_id (str): The ID of the thread.
            run (Run): The run object to poll.
            user (User): The user the run is being executed for.

        Raises:
            Exception: If the run's token usage exceeds the configured limit.

        Returns:
            Run: The run in its terminal state.
        """
        while run.status not in ['completed', 'failed', 'cancelled', 'expired']:
            logger.info(f"Run {run.id} status: {run.status}")
            # run is a pydantic model, so check usage via attribute access ('usage' in run never matches);
            # usage is typically only populated once the run reaches a terminal state
            if run.usage is not None and run.usage.total_tokens >= self.max_output_tokens:
                raise Exception("Token usage limit exceeded")
            if (run.status == 'requires_action') and run.required_action.type == 'submit_tool_outputs':
                
                tool_outputs = await self.handle_required_actions(required_action=run.required_action, thread_id=thread_id, run_id=run.id, user=user)
                await self.client.beta.threads.runs.submit_tool_outputs(thread_id=thread_id, run_id=run.id, tool_outputs=tool_outputs)
                logger.info(f"Submitted tool outputs for run {run.id}")
                
            await asyncio.sleep(1)  # Brief delay between polls to avoid spamming the API
            run = await self.client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run.id)
        return run

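In the meantime, one workaround I am considering (an untested sketch, not a real fix) is to cache tool outputs by function name and arguments inside wait_for_run_completion, so that if the model requests the exact same call again, the cached output is re-submitted instead of hitting Giphy a second time. The call_single_tool helper and the seen_outputs cache are hypothetical; everything else uses the same client calls as above:

    async def wait_for_run_completion(self, thread_id, run, user: User):
        seen_outputs = {}  # (function name, raw arguments JSON) -> previously returned output
        while run.status not in ['completed', 'failed', 'cancelled', 'expired']:
            if run.status == 'requires_action' and run.required_action.type == 'submit_tool_outputs':
                tool_outputs = []
                for call in run.required_action.submit_tool_outputs.tool_calls:
                    key = (call.function.name, call.function.arguments)
                    if key not in seen_outputs:
                        # call_single_tool is a hypothetical helper that executes one function call
                        seen_outputs[key] = await self.call_single_tool(call, user)
                    # Re-submit the cached output if the model asks for an identical call again
                    tool_outputs.append({'tool_call_id': call.id, 'output': seen_outputs[key]})
                await self.client.beta.threads.runs.submit_tool_outputs(
                    thread_id=thread_id, run_id=run.id, tool_outputs=tool_outputs)
            await asyncio.sleep(1)
            run = await self.client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run.id)
        return run

This only avoids redundant Giphy requests, though; the model still burns a round trip asking for the same function again, so I would still like to understand why it ignores the submitted output.
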
Runtime Logs (sensitive information has been removed):


INFO:httpx:HTTP Request: POST https://api.openai.com/v1/threads/<thread_id>/runs "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: queued
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: requires_action
INFO:app_file:Handling required action: [RequiredActionFunctionToolCall(id='<call_id_1>', function=Function(arguments='{\n  "query": "funny",\n  "type": "gif",\n  "num_results": 5\n}', name='fetch_gif_sticker'), type='function')]
INFO:app_file:Calling function fetch_gif_sticker with args: {
  "query": "funny",
  "type": "gif",
  "num_results": 5
}
INFO:app_file:Calling function fetch_gif_sticker with args: {
  "query": "funny",
  "type": "gif",
  "num_results": 5
}
INFO:app_file:Fetching gif from Giphy for query: funny
INFO:app_file:Tool outputs: [{'tool_call_id': '<call_id_1>', 'output': 'https://media3.giphy.com/media/yCr6bHSVvNOtG7DRBJ/giphy.gif?cid=c302413ao8ut2sayzu7owjavr4oco5cw330cjpu6g9w1wbad&ep=v1_gifs_trending&rid=giphy.gif&ct=g, https://media4.giphy.com/media/bCTbZchgFKieymYPuB/giphy-downsized.gif?cid=c302413ao8ut2sayzu7owjavr4oco5cw330cjpu6g9w1wbad&ep=v1_gifs_trending&rid=giphy-downsized.gif&ct=g, https://media2.giphy.com/media/BeQUzZdJkKZ91lPpm9/giphy-downsized.gif?cid=c302413ao8ut2sayzu7owjavr4oco5cw330cjpu6g9w1wbad&ep=v1_gifs_trending&rid=giphy-downsized.gif&ct=g, https://media3.giphy.com/media/R6gvnAxj2ISzJdbA63/giphy-downsized.gif?cid=c302413ao8ut2sayzu7owjavr4oco5cw330cjpu6g9w1wbad&ep=v1_gifs_trending&rid=giphy-downsized.gif&ct=g, https://media3.giphy.com/media/n8x2aZCaCXAQEW6MKR/giphy-downsized.gif?cid=c302413ao8ut2sayzu7owjavr4oco5cw330cjpu6g9w1wbad&ep=v1_gifs_trending&rid=giphy-downsized.gif&ct=g'}]
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/threads/<thread_id>/runs/<run_id>/submit_tool_outputs "HTTP/1.1 200 OK"
INFO:app_file:Submitted tool outputs for run <run_id>
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: requires_action
INFO:app_file:Handling required action: [RequiredActionFunctionToolCall(id='<call_id_2>', function=Function(arguments='{\n  "query": "funny",\n  "type": "gif",\n  "num_results": 5\n}', name='fetch_gif_sticker'), type='function')]
INFO:app_file:Calling function fetch_gif_sticker with args: {
  "query": "funny",
  "type": "gif",
  "num_results": 5
}
INFO:app_file:Calling function fetch_gif_sticker with args: {
  "query": "funny",
  "type": "gif",
  "num_results": 5
}
INFO:app_file:Fetching gif from Giphy for query: funny
INFO:app_file:Tool outputs: [{'tool_call_id': '<call_id_2>', 'output': 'https://media3.giphy.com/media/yCr6bHSVvNOtG7DRBJ/giphy.gif?cid=c302413ao8ut2sayzu7owjavr4oco5cw330cjpu6g9w1wbad&ep=v1_gifs_trending&rid=giphy.gif&ct=g, https://media4.giphy.com/media/bCTbZchgFKieymYPuB/giphy-downsized.gif?cid=c302413ao8ut2sayzu7owjavr4oco5cw330cjpu6g9w1wbad&ep=v1_gifs_trending&rid=giphy-downsized.gif&ct=g, https://media2.giphy.com/media/BeQUzZdJkKZ91lPpm9/giphy-downsized.gif?cid=c302413ao8ut2sayzu7owjavr4oco5cw330cjpu6g9w1wbad&ep=v1_gifs_trending&rid=giphy-downsized.gif&ct=g, https://media3.giphy.com/media/R6gvnAxj2ISzJdbA63/giphy-downsized.gif?cid=c302413ao8ut2sayzu7owjavr4oco5cw330cjpu6g9w1wbad&ep=v1_gifs_trending&rid=giphy-downsized.gif&ct=g, https://media3.giphy.com/media/n8x2aZCaCXAQEW6MKR/giphy-downsized.gif?cid=c302413ao8ut2sayzu7owjavr4oco5cw330cjpu6g9w1wbad&ep=v1_gifs_trending&rid=giphy-downsized.gif&ct=g'}]
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/threads/<thread_id>/runs/<run_id>/submit_tool_outputs "HTTP/1.1 200 OK"
INFO:app_file:Submitted tool outputs for run <run_id>
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs/<run_id> "HTTP/1.1 200 OK"
INFO:app_file:Run <run_id> status: in_progress
ERROR:app_file:Assistant run exceeded timeout limit
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/threads/<thread_id>/runs/<run_id>/cancel "HTTP/1.1 200 OK"
INFO:app_file:Cancelled run <run_id>
WARNING:root:Falling back to Chat Completions API due to: Assistant run timed out
INFO:httpx:HTTP Request: GET https://api.openai.com/v1/threads/<thread_id>/runs "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/threads/<thread_id>/runs/<run_id>/cancel "HTTP/1.1 400 Bad Request"
ERROR:root:Error cancelling all runs: Error code: 400 - {'error': {'message': "Cannot cancel run with status 'cancelled'.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
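
Side note on the 400 near the end of the logs: it seems to come from cancel_all_runs trying to cancel a run that was already cancelled by the timeout handler. A guard like the sketch below (my cancel_all_runs internals are simplified/assumed here) would avoid that noise, though it is unrelated to the repeated function call itself:

    async def cancel_all_runs(self, thread_id):
        runs = await self.client.beta.threads.runs.list(thread_id=thread_id)
        for run in runs.data:
            # Cancelling a run that is already cancelled/completed returns a 400,
            # so only cancel runs that are still active
            if run.status in ('queued', 'in_progress', 'requires_action'):
                await self.client.beta.threads.runs.cancel(thread_id=thread_id, run_id=run.id)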