gpt-4-vision-preview `finish_details`

I’m trying out gpt-4-vision-preview and, no matter what I do, I get back output truncated at around 50 characters. The `finish_details` field says `max_tokens`, but in the response details I can see I am well below the maximum tokens for this request. Anyone else have this issue?

8 Likes

Yes, I am experiencing the same issue. Might it be a limitation now while they roll it out?

1 Like

Setting max_tokens in the request gets around the issue.
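For example, a minimal sketch of a chat completions request payload for the vision preview model (the prompt text and image URL here are placeholders, not from this thread) with `max_tokens` set explicitly:

```javascript
// Sketch of a request payload for gpt-4-vision-preview.
// Setting max_tokens explicitly overrides the low default that
// truncates responses.
const requestPayload = {
  model: 'gpt-4-vision-preview',
  max_tokens: 1500, // explicit override; pick a value that fits your use case
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is in this image?' },
        {
          type: 'image_url',
          image_url: { url: 'https://example.com/image.jpg' },
        },
      ],
    },
  ],
};

console.log(requestPayload.max_tokens);
```

The payload is then sent to the chat completions endpoint as usual; only the explicit `max_tokens` field differs from a request that hits the truncation.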

4 Likes

I have set “max_tokens: 4096,” which is the maximum, but it’s still cutting the response off at around 50 characters. How strange. What have you set it to?

I set it to 1500, which is enough for me:

  model: 'gpt-4-vision-preview',
  max_tokens: 1500,
1 Like

omg thank you, I was going crazy.

Using :

if (currentModel === 'gpt-4-vision-preview') {
  requestPayload.max_tokens = 4096;
}

fixed my Discord bot. I don’t know why, but by default it seems to be set to 16 tokens, which is kinda ridiculous.

3 Likes

I also experienced a similar issue. Responses from gpt-4-vision-preview were truncated to 31 tokens for no apparent reason, and the finish details type was ‘max_tokens’. I set the ‘max_tokens’ parameter of the request to 4096 and it works well now.
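If you want to detect this truncation programmatically, a minimal sketch, assuming the response shape reported in this thread (a `finish_details` object with a `type` of `'max_tokens'` on the first choice):

```javascript
// Sketch: check whether a vision-preview response was cut off, based on
// the finish_details shape described in this thread.
function wasTruncated(response) {
  const choice = response.choices && response.choices[0];
  return Boolean(
    choice &&
      choice.finish_details &&
      choice.finish_details.type === 'max_tokens'
  );
}

// Example with a mock response object (not a real API call):
const mockResponse = {
  choices: [
    {
      message: { content: 'The image shows' },
      finish_details: { type: 'max_tokens' },
    },
  ],
};

console.log(wasTruncated(mockResponse)); // true
```

A check like this lets a bot log a warning, or retry with a larger `max_tokens`, instead of silently posting a cut-off reply.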

I think this is a bug; there is a problem with the default values of this model.

Any update on this issue? I am also facing the same.

The update is documentation:

https://platform.openai.com/docs/guides/vision

Currently, GPT-4 Turbo with vision does not support the message.name parameter, functions/tools, response_format parameter, and we currently set a low max_tokens default which you can override.

Why do they set a low default value that could truncate the typical response? Life’s little mysteries.

2 Likes