I get successful replies from 3.5 with a specific prompt, but not from 4 (instead, I get a 503 error)

It’s a <1,000-token prompt, so it’s not a token limit. I’m simply changing the model being used from 3.5 to 4 and boom, 503 errors.

I have other calls to 4 working with no problem, so I do have access. You’d think that if it works on 3.5, 4 would have no issue with the prompt, but it does.

Any ideas?

Welcome to the forum.

I would check out the model endpoint compatibility page. There’s a difference between the completions and chat completions endpoints.

What exact two models are you trying?

Thank you. I checked. /v1/chat/completions is what I’m calling in both cases, simply switching the model: "model": "gpt-4" vs. "model": "gpt-3.5-turbo".
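For reference, here’s roughly how I’m building the two requests — the endpoint and body are identical apart from the model string (the `build_payload` helper name is just for illustration, not my actual code):

```python
import json

# Same endpoint for both models
API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(model: str, prompt: str) -> dict:
    # Request body is identical apart from the "model" field
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

prompt = "Classify the sentiment of this sentence."
payload_35 = build_payload("gpt-3.5-turbo", prompt)
payload_4 = build_payload("gpt-4", prompt)

# Sanity check: the only key whose value differs is "model"
diff = {k for k in payload_35 if payload_35[k] != payload_4[k]}
print(json.dumps(sorted(diff)))  # -> ["model"]
```

So the 503 really does track the model name alone, with everything else held constant.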

It’s a classification prompt easily handled by 3.5 every time, so I wouldn’t think it’s ability- or prompt-related specifically.


Hrm. Not sure, then. We’ve had a couple of outages in the last day or so. A typo in the model name, maybe? Does any GPT-4 model work?

The newest model from yesterday works, but it’s doing something wonky in the response. It’s definitely not an outage, as I’ve seen this over the past few weeks while testing this specific series of prompts, and it’s so bizarre.

I was just checking here before I go rebuild all these steps from scratch to find the issue, in case it might be something more obvious like what you suggested. My guess is that some character is getting passed in during one of the steps that’s throwing off the whole call and breaking it to the point where it can’t even return a 400.
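In case it helps anyone else debugging something similar, this is the kind of check I’ll be adding while rebuilding — scanning the assembled prompt for control characters or stray surrogates that could corrupt the request body before it ever reaches the API (the function is just a sketch, not a claim about what the API actually rejects):

```python
def find_suspect_chars(prompt: str) -> list:
    """Return (index, repr, codepoint) for characters that commonly
    break request bodies: control chars (other than newline/tab),
    DEL, and unpaired UTF-16 surrogates."""
    suspects = []
    for i, ch in enumerate(prompt):
        code = ord(ch)
        is_control = code < 32 and ch not in "\n\t"
        is_surrogate = 0xD800 <= code <= 0xDFFF
        if is_control or is_surrogate or code == 0x7F:
            suspects.append((i, repr(ch), hex(code)))
    return suspects

clean = "Classify the sentiment of this sentence."
dirty = "Classify this:\x00 odd byte\x1b in the middle"

print(find_suspect_chars(clean))  # -> []
print(find_suspect_chars(dirty))  # flags the NUL and ESC characters
```

Running each intermediate step’s output through something like this should narrow down which step is injecting the bad character, if that theory holds.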
