I just started seeing this, using latest version of aichat.
I tried linking to the repo and relevant file with calling code, but was prohibited by an overzealous discourse bot / policy
I’ve been seeing this issue since 08/03/24, from around 19:00 UTC (roughly). It’s also been constant today, I get maybe 20 requests then a crash.
I’m using python 3.10, openai-0.27.8 and gpt-4-1106-preview.
I don’t think it’s an issue with my prompts/code. I was able to pump ~2.5 million tokens through the same code earlier on 08/03/24; then something seemed to go wrong in the evening.
Just updated our Python 3.11 project to the latest openai package release, v1.13.3, and am still seeing this error randomly with models gpt-4-0125-preview and gpt-4-1106-preview. I was seeing it with the previous package version we were using (1.3.5) too.
Maybe an important note: we are using both gpt-4-0125-preview and gpt-4-1106-preview for different requests.
I faced the same issue.
I changed the model name from “gpt-4-0125-preview” to “gpt-4-turbo-preview” and it looks like that solved the issue.
Can you guys try the same?
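For anyone who wants to try that model-name swap without hand-editing every call site, here is a minimal, SDK-agnostic sketch. The helper name `chat_with_fallback` is hypothetical; in real use, `client` would be an `openai.OpenAI()` instance and `error_types` would be `(openai.BadRequestError,)`:

```python
# Hypothetical fallback helper: try the pinned snapshot name first and,
# if the request is rejected, retry once under the alias model name
# that some posters report works.
def chat_with_fallback(client, prompt,
                       model="gpt-4-0125-preview",
                       fallback="gpt-4-turbo-preview",
                       error_types=(Exception,)):
    messages = [{"role": "user", "content": prompt}]
    try:
        resp = client.chat.completions.create(model=model, messages=messages)
    except error_types:
        # Retry once under the alias name instead of failing outright.
        resp = client.chat.completions.create(model=fallback, messages=messages)
    return resp.choices[0].message.content
```

This keeps the pinned snapshot as the primary model, so you only pay the alias’s behavior differences when the 400 actually fires.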
We are having the exact same issue. Also, the quality of these models has been very iffy since yesterday. There is something going on…
Are there any tool calls involved? The implementation changed when OpenAI switched to supporting multiple function calls; maybe they dropped the old format?
This worked for me. Maybe they are making updates to the model names internally?
Seeing the same error using gpt-4-1106-preview. I tried with gpt-4-turbo-preview as someone suggested above and that didn’t help.
Why has OpenAI not opened an incident?
Has anyone found a workaround besides retrying?
I’ve seen it via OpenRouter recently (last few days) … Very intermittent and next call goes through…
gpt-4-0125-preview API calls raise this exception:
BadRequestError: 400 You requested a model that is not compatible with this engine. Please contact us through our help center at help.openai.com for further questions.
npm package version: openai@4.28.4
const response = await openai.chat.completions.create({
  model: "gpt-4-turbo-preview",
  messages: [
    {
      role: "user",
      content: prompt,
    },
  ],
  n: 1,
  temperature: 1,
});
Having this issue too, with “gpt-4-turbo-preview” and no function calls. It runs fine for many calls but occasionally pops this error message. It’s bad because I’m running a sorting algorithm and have to restart the recursive process from the beginning to get a result, so many requests were wasted. I now catch this error (openai.BadRequestError) and hope that helps.
Interesting observation: my prompt forces an output that - effectively - should result in either “A” or “B”. This prompt worked as intended throughout all calls until the first occurrence of this error message. When restarting my sorting algorithm, the prompt occasionally resulted in answers like “undecidable” when comparing two data items - fine for my sorting algorithm, but this was a new “emergence”. This made me wonder if OpenAI is changing model iterations or model instances in the background.
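Since restarting the whole sort is expensive, one way to contain the error is to wrap each individual comparison call in a small retry helper. This is a sketch, not the poster’s actual code; `with_retries` and its parameters are hypothetical names, and in real use `retryable` would be `(openai.BadRequestError,)`:

```python
import time

def with_retries(call_fn, retryable=(Exception,), attempts=3, base_delay=1.0):
    """Run call_fn(); on a retryable error, back off exponentially and try again.

    Retrying at the level of a single comparison means one rejected request
    does not force the whole recursive sort to restart.
    """
    for attempt in range(attempts):
        try:
            return call_fn()
        except retryable:
            if attempt == attempts - 1:
                raise  # out of attempts; let the caller decide
            time.sleep(base_delay * 2 ** attempt)
```

Several posters report the error is intermittent and the very next call succeeds, so even `attempts=2` with a short delay may be enough.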
FYI - we see these errors stopped at around 3 pm Eastern today.
I am still getting the same error message, and it also doesn’t respond. I think we need to wait longer for this to be fixed.
This appears to be a sporadic error when using a valid model. I saw it for the first time today. My code goes into retry mode on an error from OpenAI. It succeeded on the second attempt:
OpenAIServicesBase - An exception occurred: Type: BadRequestError, Value: Error code: 400 - {'error': {'message': 'You requested a model that is not compatible with this engine. Please contact us through our help center at help.openai.com for further questions.', 'type': 'invalid_request_error', 'param': 'model', 'code': None}}, Traceback: Traceback (most recent call last):