Multiple reposts of it being slow (even with gpt4o). Assistants keep running without stopping and generate multiple responses for each input. Today, ChatGPT is down.
I wonder what is happening at OpenAI?
Assistants has never been “fast”. It takes multiple API calls to create an assistant, create a thread, upload files, create vector stores with embeddings, attach file IDs for per-message vector stores, and set up code interpreter sessions. The AI then buzzes and whirs with its internal tools, correcting any errors it may have produced internally, before it even starts generating a response to you. In its initial version, it didn’t even produce an “answer ready” status until the response was done, so you had to keep checking on it.
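That “keep checking on it” pattern is just a polling loop. Here is a minimal sketch of one, decoupled from any real API call: `fetch_status` stands in for whatever SDK call retrieves the run, and the terminal status names are assumptions based on the Assistants run lifecycle.

```python
import time

def poll_until_done(fetch_status, interval=0.5, timeout=60.0):
    """Poll a run's status until it reaches a terminal state.

    fetch_status: a callable returning the current status string
    (hypothetical stand-in for a "retrieve run" API call).
    """
    terminal = {"completed", "failed", "cancelled", "expired"}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in terminal:
            return status
        time.sleep(interval)
    raise TimeoutError("run did not finish in time")

# Usage with a stub that "finishes" on the third check:
statuses = iter(["queued", "in_progress", "completed"])
print(poll_until_done(lambda: next(statuses), interval=0.0))
```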
What is generated is a combination of model capabilities and the developer’s instructions, which are often quite misguided about what is optimal for the AI.
A ChatGPT customer database outage is unrelated and did not affect the API. They do not reveal what went wrong in any of their “A fix has been implemented” status updates.
I am having a problem with prompts: the model always gets the data wrong, even with the temperature set to zero. For example, it always picks the wrong image, despite my explaining in the prompt how to do it.
Can you share the prompt?
Assistants seems to have a problem with temperature 0. Try leaving that alone and instead use top_p: 0.1 if you want a constrained, predictable output.
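A minimal sketch of those sampling settings as request parameters, assuming the openai Python SDK’s keyword-argument style (the assistant ID is hypothetical). The point is to leave temperature at its default and constrain top_p instead:

```python
# Parameters you might pass when creating a run, e.g.
# client.beta.threads.runs.create(thread_id=..., **run_params).
# "asst_example" is a hypothetical placeholder ID.
run_params = {
    "assistant_id": "asst_example",
    "top_p": 0.1,  # constrain nucleus sampling instead of temperature=0
}

# temperature is deliberately omitted so it stays at the API default
assert "temperature" not in run_params
```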
Then you should make a “what’s in this image” request, attaching the image by “vision” file ID in the user message content alongside the text where you give the vision task, and provide a very unlikely image, like a polar bear, to verify it can see anything at all.
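A sketch of what that user message content could look like, assuming the Assistants message schema where text and `image_file` parts are combined in one content list ("file-abc123" is a hypothetical file ID):

```python
# A user message pairing the vision task text with an uploaded image's
# file ID. Structure assumed from the Assistants message content schema.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What's in this image?"},
        {"type": "image_file", "image_file": {"file_id": "file-abc123"}},
    ],
}
```

If the model then describes a polar bear, you know the attachment is actually reaching it.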
One of my non-public custom GPTs is going haywire. It is based on DALL-E. Compared to a few hours ago, it seems not to understand the instructions and draws people as stick figures.
ChatGPT has been inaccessible on and off all day today (06-04-24). I hope it is fixed soon, as I’m a paying subscriber. One good thing to come of it is the realization of how dependent I’ve become on it, which is a bit troubling.
I tried to fool DALL-E 3 itself, no problems there.
“An intricate pencil drawing of some people who are collecting sticks in the woods.”
You can go back to the response right after the ChatGPT image generation (if you continued chatting), edit it, and ask for a verbatim reproduction of the JSON message sent to dalle, to see how the AI is misinterpreting your GPT instructions. Then have it send the exact same thing again.
To get you back, ChatGPT decided to mess with the aspect ratio on ya haha
Everything is back to normal. Thanks anyway.
To explain myself better, I’ll show you the drawings that were produced after I made the request “old zen master”.
That is the AI writing Python and sending to code interpreter. It’s a very nice drawing in that respect.
Yeah, too bad they’re dumbing down the model in the interest of “safety”. Now they’re limiting the customers who pay for it! I’m canceling my sub. Of course they will hide this post as well, because it’s not good for business if people know the truth.
I started off using the Assistants API, but a combination of sluggish performance and unreliable behaviour forced me to abandon it in favour of the regular completions API. That means I have to store and truncate the chat context myself, but the benefits outweigh that extra effort.
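Storing and truncating the context yourself can be as simple as keeping the system message plus the most recent turns that fit a budget. A rough sketch, using character counts as a stand-in for real token counting (which you would do with something like tiktoken):

```python
def truncate_history(messages, max_chars=8000):
    """Keep the system message plus the newest turns that fit the budget.

    messages: list of {"role": ..., "content": ...} dicts.
    max_chars: crude proxy for a token budget; a real implementation
    would count tokens instead of characters.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    kept, used = [], sum(len(m["content"]) for m in system)
    # Walk backwards so the most recent turns are kept first.
    for m in reversed(rest):
        used += len(m["content"])
        if used > max_chars:
            break
        kept.append(m)
    return system + list(reversed(kept))
```

You would call this on your stored history before each chat-completions request, so the prompt never grows past the model’s context window.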
I will probably do that too.