I asked GPT-4 to review and assist me in editing a document. After a lengthy wait, this was the response I received: “I aim to provide a comprehensive update on the progress within a few days.” Is this normal?
Yes, it is normal to give GPT-4 several days to formulate an answer.
The AI writes probable text; it doesn't necessarily know it is an AI, or answer coherently. I've seen the AI need to go get a cup of coffee first.
Was this within the Assistants API? The AI may have had enough iterations and looping feedback to conclude that it is taking a long time and write the user a message (not knowing that it can't then resume).
No, that’s not normal. I’ve never seen ChatGPT (with the GPT-4 model or otherwise) refer to any expected wait time.
It’s not normal but it’s not altogether unexpected either.
The models are trained on human responses. This behaviour is evidence of such responses in the training data.
While rare, it is in there, so when enough people run enough prompts through the model, this type of response is certain to be generated eventually.
The only thing that can be done is to accept it as a very rare occurrence, regenerate the response, and move on, frustrating though it may be.
Alternatively, one could add something to the prompt or system message stating that the solution is needed urgently, is high priority, or must be done immediately. That will likely render this type of response so unlikely that it is effectively suppressed, though I'm confident these responses are rare enough that it isn't necessary to devote resources to combating them.
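A minimal sketch of that mitigation, assuming the OpenAI Python SDK; the helper name, urgency wording, and model name are illustrative, not an official recipe:

```python
# Sketch: prepend an urgency instruction to the system message so the model
# doesn't role-play a human editor who "needs a few days" to respond.
# URGENCY and build_messages are hypothetical names for illustration.

URGENCY = (
    "Respond with your complete answer immediately, in this single reply. "
    "Never defer the task, promise a later update, or mention turnaround time."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Wrap the user's request with a system message that suppresses deferrals."""
    return [
        {"role": "system", "content": URGENCY},
        {"role": "user", "content": user_prompt},
    ]

# Usage (requires an OPENAI_API_KEY to actually run):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4",
#     messages=build_messages("Please review and edit this document: ..."),
# )
# print(resp.choices[0].message.content)
```

Whether this fully suppresses the behaviour is untested here; it simply makes deferral-style completions far less probable.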
LOL. These always make me laugh way too hard.
I think this is called “being ghosted by AI”