gpt-4-turbo-preview returns 400 errors; gpt-3.5-turbo-16k-0613 works fine

Hello,
I have a Vue app and an openAiConfig.js with some configuration.
When I use gpt-3.5-turbo-16k-0613, everything works fine, but when I try gpt-4-turbo-preview I only get 400 errors.
I couldn't find anything in the documentation explaining why it doesn't work or what I would have to change to get it working.
Any idea what the reason is and how to fix it?

Thanks.

const openAIConfig = {
  development: {
    model: 'gpt-4-turbo-preview',
    // model: 'gpt-3.5-turbo-16k-0613',
    user_messages: ['I am an expert in SEO…'],
    temperature: 0.7,
    max_tokens: {
      seo_meta_title_instruction: 150,
      seo_meta_desc_instruction: 150,
      spec_short_desc_instruction: 500,
      default: 6000
    }
  },


try {
  // Log the duration of the API request
  const apiStartTime = Date.now();
  const response = await axios.post('…nai.com/v1/chat/completions', {
    model: chatGptConfig.model,
    messages: [
      { role: 'user', content: input }
    ],
    max_tokens: max_tokens,
    n: 1,
    temperature: chatGptConfig.temperature,
    response_format: { type: 'json_object' } // To use gpt-4 Turbo
  }, {
    headers: {
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json'
    },
    timeout: 60000
  });
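Since the try block has no catch shown: when the API rejects a request with a 400, the response body usually says exactly why. A minimal sketch of a helper that surfaces that message, assuming axios's error shape (`error.response`); the function name and sample error are my own:

```javascript
// Sketch: turn an axios-style error into a readable message.
// (describeApiError and the fake error below are illustrative, not from the original post.)
function describeApiError(error) {
  if (error.response) {
    // The API answered with a non-2xx status; the body explains why
    const detail = error.response.data?.error?.message ?? JSON.stringify(error.response.data);
    return `HTTP ${error.response.status}: ${detail}`;
  }
  // No response at all (network error, timeout, …)
  return `Request failed: ${error.message}`;
}

// Example: the kind of 400 body this thread is about
const fakeError = {
  response: {
    status: 400,
    data: { error: { message: 'max_tokens is too large: 6000.' } }
  }
};
console.log(describeApiError(fakeError)); // HTTP 400: max_tokens is too large: 6000.
```

Logging this in the catch would have shown the cause of the 400 immediately.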

400 errors? That’s a lot of errors! :crazy_face:

You are specifying a default max_tokens that is larger than the model will accept or ever output.

New models are limited in the maximum output they will produce: both in how high you can set that parameter (gpt-4-turbo-preview accepts at most 4096) and in how long they will actually write.

Solution: try 1500 instead of 6000.
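To keep the config from tripping this again, the default could be clamped per model before sending. A sketch (the cap table and helper name are my own; the 4096-token output cap for gpt-4-turbo-preview is the documented limit, the other value is an assumption):

```javascript
// Sketch: clamp a configured max_tokens to the model's output cap.
// Values here are assumptions except gpt-4-turbo-preview's documented 4096 cap.
const MODEL_OUTPUT_CAPS = {
  'gpt-4-turbo-preview': 4096,
  'gpt-3.5-turbo-16k-0613': 16384
};

function clampMaxTokens(model, requested) {
  const cap = MODEL_OUTPUT_CAPS[model] ?? 4096; // conservative fallback
  return Math.min(requested, cap);
}

console.log(clampMaxTokens('gpt-4-turbo-preview', 6000)); // 4096
console.log(clampMaxTokens('gpt-4-turbo-preview', 1500)); // 1500
```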


Thanks mate! Didn’t know that. :rofl: :ok_hand:
Now it seems to work.