There can’t be enough topics about this! I’m currently not using it for anything that critical, but it is very annoying, and it has already been two days. I contacted OpenAI three times and got no decent answers to work with. I asked my bot what to do, and it gave me an answer that the people at OpenAI could probably use themselves; their support will appreciate the irony of GPT-generated advice:
CPU Usage: Check if the CPU is being heavily utilized.
Memory Usage: See if there’s enough RAM available.
Disk I/O: Determine if there’s high disk usage or slow read/write speeds.
Network Latency: If the AI agents are network-dependent, check for network issues.
Running Processes: Identify if there are any processes consuming excessive resources.
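In case it helps anyone triage on their own side, the checklist above can be scripted with the standard library alone. This is just a rough sketch (Unix assumed; the host name is only an example of an endpoint to ping, not anything official):

```python
import os, shutil, socket, time

# CPU: 1/5/15-minute load averages (Unix only)
load1, load5, load15 = os.getloadavg()
print(f"load avg: {load1:.2f} / {load5:.2f} / {load15:.2f}")

# Disk: free space on the root filesystem
usage = shutil.disk_usage("/")
print(f"disk free: {usage.free / 1e9:.1f} GB of {usage.total / 1e9:.1f} GB")

# Network latency: time a plain TCP connect to the API host (example endpoint);
# wrapped in try/except so the script still runs when offline
try:
    start = time.perf_counter()
    with socket.create_connection(("api.openai.com", 443), timeout=5):
        pass
    print(f"TCP connect: {(time.perf_counter() - start) * 1000:.0f} ms")
except OSError as exc:
    print(f"network check failed: {exc}")
```

Of course, when the slowness is on the provider's side, all of these will come back clean, which is exactly what makes the missing status-page entry so frustrating.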
My application is basically down and unusable because of this, but the downtime isn’t even registered on OpenAI’s status page. Issues like this make me consider migrating off the Assistants API.
I was considering o1 because the accuracy of GPT-4 was not good enough even for simple prompts. The assistant has a vector database attached containing a list of yachts.
It should answer with the ID of the yacht, but it fails even on simple requests like: “Show me boats longer than 50 in length.”
With GPT-4o the response time is 48 seconds for a simple prompt like that, and it still gets it wrong.
Which approach should I use for this type of use case?
I had better results when I was passing the yacht list as the API response from my database, but at that time the table didn’t have as many fields, nor as much data.
As an alternative, I’m exploring using text generation to create the actual filters for my API call that returns the yacht list.
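For what it’s worth, the backend half of that filter-extraction idea can be sketched quickly. Everything below (field names, sample data, the shape of the filter object) is my own assumption, not anyone’s actual schema; the point is that the model only has to emit a small JSON filter object, and the database/API does the real search:

```python
from typing import Any

# Hypothetical sample data standing in for the yacht table
YACHTS = [
    {"id": "Y-001", "name": "Aurora",  "length_m": 62},
    {"id": "Y-002", "name": "Mistral", "length_m": 41},
    {"id": "Y-003", "name": "Zephyr",  "length_m": 55},
]

def apply_filters(yachts: list[dict[str, Any]], filters: dict[str, Any]) -> list[str]:
    """Return the IDs of yachts matching simple min/max length filters."""
    result = yachts
    if "min_length_m" in filters:
        result = [y for y in result if y["length_m"] > filters["min_length_m"]]
    if "max_length_m" in filters:
        result = [y for y in result if y["length_m"] < filters["max_length_m"]]
    return [y["id"] for y in result]

# e.g. suppose the model turned "Show me boats longer than 50 in length" into:
print(apply_filters(YACHTS, {"min_length_m": 50}))  # → ['Y-001', 'Y-003']
```

This tends to be both faster and more accurate than vector search for exact numeric filters, since the model never has to reason over the full dataset, only over the query.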
This was my fault
I made a new JSON object containing only the fields the AI could use to search, and now the response time is around 7 seconds and quite accurate.
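A minimal sketch of that kind of projection, with made-up field names, in case anyone wants to try the same trick:

```python
# Keep only the fields the assistant can actually search on (names are hypothetical)
SEARCHABLE_FIELDS = ("id", "name", "length_m", "price_eur")

def slim(record: dict) -> dict:
    """Drop everything except the searchable fields before sending data to the model."""
    return {k: v for k, v in record.items() if k in SEARCHABLE_FIELDS}

full = {"id": "Y-001", "name": "Aurora", "length_m": 62,
        "engine_serial": "X9-23", "internal_notes": "long free text here"}
print(slim(full))  # → {'id': 'Y-001', 'name': 'Aurora', 'length_m': 62}
```

Less input means less for the model to read, which is presumably where the 48s → 7s drop came from.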
Sorry for the issues — we’ve made some adjustments here (last Tues night/Weds morning) to speed things up and it looks like responses are faster now, at least from our end. Are things better for you?
The one-minute response times have been gone for a few days now, but we still see frequent 15s+ answers; with the exact same setup we achieved 6–8s answers about a month ago. So this should still be classed as a CRITICAL OUTAGE.
Do you know anything about attachments in a message to the Assistants API? A week ago, the response time when attaching a PDF document to a message and requesting a function call was around 10–30 seconds. Currently, this takes over a minute and a half. Has there been any change in this regard?