I am looking for a way to add a loading gif when the API is running a function but is yet to finish it. The status_processing is not ideal since it does not indicate that a function has been called. Any idea on how to do it?
I check whether the Tool_Call has data. I trigger my loader then, and toggle it off once I've successfully run Submit tool outputs.
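As a rough sketch of that approach, the loader decision can be derived from the run object itself. The helper names below (shouldShowLoader, hasPendingToolCalls) are illustrative, not part of the OpenAI SDK; the status and required_action fields are the ones documented on the Assistants run object.

```javascript
// Illustrative sketch: decide loader visibility from an Assistants run object.
// shouldShowLoader / hasPendingToolCalls are hypothetical helper names.
function shouldShowLoader(run) {
  // Keep the loader visible while the run is still working,
  // or is paused waiting on our tool outputs.
  const busyStatuses = ["queued", "in_progress", "requires_action", "cancelling"];
  return busyStatuses.includes(run.status);
}

function hasPendingToolCalls(run) {
  // True when the run has paused for us to execute a function
  // and submit tool outputs back to it.
  return (
    run.status === "requires_action" &&
    run.required_action?.type === "submit_tool_outputs" &&
    run.required_action.submit_tool_outputs.tool_calls.length > 0
  );
}
```

You would call these each time you poll (or receive an event for) the run, and toggle the loader off once the status reaches completed.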
Welcome @rstefanto
The API doesn’t execute a function. It’s executed on the developer side so you have full control and knowledge about the status of function execution.
Adding a loading GIF to indicate that an API call or function is in progress but hasn't yet completed can provide helpful feedback to users. Here's a way to accomplish this without relying on status_processing.
1. Use a Loading State in the Frontend
First, manage the loading GIF's visibility with a loading state. This state turns the loading GIF on when the API function is called and off once the response is returned.
2. Implementation Steps
Frontend (HTML + JavaScript/React/Vue):
- Create a loading boolean state that controls the visibility of the GIF.
- When the function is called, set loading to true to show the loading GIF.
- Once the API returns a response (or encounters an error), set loading back to false to hide the GIF.
Here’s an example approach in JavaScript and a frontend framework like Vue or React.
Example in Vue:
<template>
  <div>
    <!-- Loading GIF element -->
    <img v-if="loading" src="loading.gif" alt="Loading..." />
    <!-- The button or element that triggers the API call -->
    <button @click="callApi">Start API Call</button>
    <!-- Display API response -->
    <div v-if="apiResponse">{{ apiResponse }}</div>
  </div>
</template>
<script>
export default {
  data() {
    return {
      loading: false,
      apiResponse: null
    };
  },
  methods: {
    async callApi() {
      // Set loading to true when the API call begins
      this.loading = true;
      try {
        // Replace with your API call logic
        const response = await fetch('/api/some-endpoint');
        this.apiResponse = await response.json();
      } catch (error) {
        console.error('API call failed:', error);
      } finally {
        // Set loading to false once the API call completes
        this.loading = false;
      }
    }
  }
};
</script>
Example in React:
import React, { useState } from 'react';

function ApiCaller() {
  const [loading, setLoading] = useState(false);
  const [apiResponse, setApiResponse] = useState(null);

  const callApi = async () => {
    setLoading(true); // Show loading GIF
    try {
      const response = await fetch('/api/some-endpoint');
      const data = await response.json();
      setApiResponse(data);
    } catch (error) {
      console.error('API call failed:', error);
    } finally {
      setLoading(false); // Hide loading GIF
    }
  };

  return (
    <div>
      {/* Loading GIF element */}
      {loading && <img src="loading.gif" alt="Loading..." />}
      {/* The button that triggers the API call */}
      <button onClick={callApi}>Start API Call</button>
      {/* Display API response */}
      {apiResponse && <div>{JSON.stringify(apiResponse)}</div>}
    </div>
  );
}

export default ApiCaller;
Explanation of Code
- loading state: controls the visibility of the loading GIF.
- callApi function: updates the loading state when the API call is made and again upon completion or failure. This takes status_processing out of the picture and gives you full control over the loading indicator.
This approach will help ensure that the loading GIF appears as soon as the function starts and disappears as soon as it ends, regardless of how long the function takes.
Ignore the useless bot spammer above.
It is important to read "Assistant" in the title to understand what it means when you say "the API is running a function": the Assistants endpoint is being used, and OpenAI's built-in tools like python and file_search can do their own thing without polling you, besides a requires_action status that requires you to invoke your own tool.
You can somewhat monitor the internal progress of a run by retrieving run steps.
This is a list of internal calls.
https://platform.openai.com/docs/api-reference/run-steps/step-object
{
  "id": "step_abc123",
  "object": "thread.run.step",
  "created_at": 1699063291,
  "run_id": "run_abc123",
  "assistant_id": "asst_abc123",
  "thread_id": "thread_abc123",
  "type": "message_creation",
  "status": "completed",
  "cancelled_at": null,
  "completed_at": 1699063291,
  "expired_at": null,
  "failed_at": null,
  "last_error": null,
  "step_details": {
    "type": "message_creation",
    "message_creation": {
      "message_id": "msg_abc123"
    }
  },
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 456,
    "total_tokens": 579
  }
}
This is as close to a "progress report" as you can poll for. You can also get events from a streaming response, which can indicate that internal tools have been invoked. To start with, type can be either message_creation or tool_calls.
https://platform.openai.com/docs/api-reference/assistants-streaming/events
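To sketch what polling run steps buys you: once you have the list of step objects (which the run-steps endpoint returns newest-first), a small helper can translate the latest step into a UI label. The function name describeLatestStep and the label strings below are illustrative choices, not part of any SDK; only the type and status fields come from the step object documented above.

```javascript
// Illustrative sketch: map a newest-first list of run step objects
// to a short status label for the UI. describeLatestStep is a
// hypothetical helper name.
function describeLatestStep(steps) {
  // Find the first step that is still running.
  const active = steps.find((s) => s.status === "in_progress");
  if (!active) return "thinking";           // no step in flight yet (or between steps)
  if (active.type === "tool_calls") return "running tools";
  return "writing a reply";                  // type === "message_creation"
}
```

You would call this with the data array from each run-steps poll and render the label next to your loading GIF.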
Or a simpler trick: any time spent waiting before response output tokens are received gets a "thinking" indicator.
On chat completions, where all tools are yours, you know the second a delta chunk carries its first tool_call object that the rest of the response will not be addressed to the user, and thus a "thinking" UI can be displayed.
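A minimal sketch of that check, assuming the streamed chunk shape of the Chat Completions API (choices[0].delta with an optional tool_calls array); the helper name chunkStartsToolCall is illustrative:

```javascript
// Illustrative sketch: detect the moment a streamed chat-completions
// chunk starts emitting a tool call, so the UI can switch from
// rendering tokens to showing a "thinking" indicator.
function chunkStartsToolCall(chunk) {
  const delta = chunk.choices?.[0]?.delta;
  return Boolean(delta && Array.isArray(delta.tool_calls) && delta.tool_calls.length > 0);
}
```

In a streaming loop you would test each chunk with this and, on the first true result, stop printing deltas and show the indicator instead.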
Thanks! I was not aware of this run steps API!