Frequent Error Message in Playground

Hi there,

I’m reaching out for help with an issue I’ve been experiencing. I’m testing GPT-5-chat-latest, but it frequently shows the following error message:

“An error occurred. Either the engine you requested does not exist or there was another issue processing your request. If this issue persists please contact us through our help center at https://help.openai.com.”

Does anyone know what might be causing this? Many thanks in advance!

The model seems on point for me — except that it doesn't believe gpt-5 exists, since its pretraining predates the model name.

The use of the word "engine" suggests to me that this is a problem with the Responses endpoint failing to make the connection with the model, not with the model itself. "Engine" is an internal, obsolete term for "model", and it has appeared in other problems with Responses endpoint routing.

Since gpt-5-chat-latest is not a reasoning model, there is no reason to use it on Responses. Try switching to Chat Completions (under the kebab menu).

Additionally, it accepts sampling parameters. Constrain generation with "top_p": 0.5 or below, so that the model doesn't emit tokens that simply break the API backend.
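To make the two suggestions above concrete, here is a minimal sketch of a Chat Completions request for gpt-5-chat-latest with top_p constrained, assuming the official `openai` Python SDK (1.x). The helper name `build_request` is mine, not part of the SDK:

```python
# Sketch: a Chat Completions request for gpt-5-chat-latest with top_p
# constrained, as suggested above. Assumes the official `openai` Python
# SDK (1.x); `build_request` is a hypothetical helper for illustration.

def build_request(messages, top_p=0.5):
    """Build the keyword arguments for client.chat.completions.create()."""
    return {
        "model": "gpt-5-chat-latest",
        "messages": messages,
        "top_p": top_p,  # constrain sampling, per the suggestion above
    }

# Usage (requires OPENAI_API_KEY in your environment):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(**build_request(
#     [{"role": "user", "content": "Hello!"}]
# ))
# print(resp.choices[0].message.content)
```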

This model has a number of features disabled. It also cannot take large inputs or accept max_tokens over 16k: it is the non-thinking gpt-5 from ChatGPT, offered on the API for experimentation rather than for truly developing products. The playground has previously provisioned the model incorrectly against these limitations, but it can at least run a boring version of "Marv the sarcastic chatbot" without an API error.
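If you want to guard against the output cap mentioned above, a trivial clamp before sending the request avoids one class of error. Note the 16k figure is reported behavior from this thread, not an officially documented limit:

```python
# Hedged sketch: clamp max_tokens before sending a request, given the
# reported (not officially documented) 16k output cap for gpt-5-chat-latest.

REPORTED_MAX_TOKENS = 16_000  # limit described in the post above

def clamp_max_tokens(requested: int) -> int:
    """Return a max_tokens value that stays within the reported cap."""
    return min(requested, REPORTED_MAX_TOKENS)
```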

Hi and welcome to the community!

At first I thought the model parameter might be case sensitive (I honestly don't recall testing that), in which case the fix would simply be:

gpt-5-chat-latest

But then I noticed how you described the behavior. Do I understand correctly that it works sometimes and then the error message is returned at other times?

Hi there, thank you for helping me look into this! I'm using the Chat Completions endpoint (not the Responses endpoint).

From what I can tell, the model seems to have a smaller context window — the documentation suggests a 128,000-token context window. I also noticed the note that "We recommend GPT-5 for most API usage, but feel free to use this GPT-5 Chat model to test our latest improvements for chat use cases."

But frankly compared to other models I’ve tried, gpt-5-chat-latest actually follows my system prompts better and performs well in my conversation-style task. That’s why I’m hoping to use it in my formal experiments with human participants.

I haven’t tested it yet through direct API calls, but if the same error appears as it does in the Playground, it could cause significant problems for my study…

Hi there, thank you for your reply! Yes, it's a bit odd: the error message doesn't appear consistently, but it does show up occasionally. For example, after a few rounds of back-and-forth it can just pop up; then, if I wait a bit, things start working again. I'm not sure whether other models exhibit similar behavior, but I've noticed this pattern with gpt-5-chat-latest.

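Since the error described above is intermittent and clears after a short wait, one common workaround in study code is to retry failed calls with exponential backoff. Below is a hedged sketch: `call` stands in for whatever function makes your actual API request, and `RuntimeError` is a placeholder for the exception type your client raises — neither is specific to the OpenAI SDK:

```python
import random
import time

# Sketch: retry a transient failure with exponential backoff plus jitter.
# `call` is a placeholder for whatever makes the actual API request, and
# the retryable exception type should match what your client raises.

def with_retries(call, max_attempts=5, base_delay=1.0,
                 retryable=(RuntimeError,)):
    """Call `call()`; on a retryable error, wait and try again."""
    for attempt in range(max_attempts):
        try:
            return call()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            # exponential backoff: base_delay, 2x, 4x, ... plus small jitter
            time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
```

This won't fix the underlying provisioning issue, but it keeps a long experimental session from aborting on a single transient error.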