Null finish_reason on gpt-4-vision-preview model

When making a request to the gpt-4-vision-preview model using the OpenAI Python library, I noticed that finish_reason is null. Instead there is a finish_details object, which appears to be a replacement for finish_reason.

The library types do not include this new finish_details object, and the API docs don't mention the change either.

I couldn't reproduce it with any other model, e.g. gpt-4 (turbo or otherwise).

So what's the expectation here: keep using finish_reason, or rely on finish_details?

Also… please try not to make breaking changes (e.g. type changes) in your API schema without proper API versioning. Ideally, you keep things backward-compatible for developers unless absolutely necessary.

Request body I used:

{
  "model": "gpt-4-vision-preview",
  "messages": [
    {
      "role": "system",
      "content": [
        "You are ChatGPT, a large language model trained by OpenAI capable of looking at images.\nCurrent date: 2023-03-05\nKnowledge cutoff: 2022-02"
      ]
    },
    {
      "role": "user",
      "content": [
        "Describe this image for me.",
        {
          "type": "image_url",
          "image_url": {
            "detail": "low",
            "url": <some url>
          }
        }
      ]
    }
  ],
  "max_tokens": 256
}
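
For reference, this is roughly how I'm sending it with the Python library (a minimal sketch, assuming the v1.x openai client; I've written both content parts in the documented {"type": ...} form, and the image URL is a placeholder as above):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

image_url = "<some url>"  # placeholder, same as in the request body above

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=256,
    messages=[
        {
            "role": "system",
            "content": "You are ChatGPT, a large language model trained by OpenAI capable of looking at images.\nCurrent date: 2023-03-05\nKnowledge cutoff: 2022-02",
        },
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image for me."},
                {
                    "type": "image_url",
                    "image_url": {"detail": "low", "url": image_url},
                },
            ],
        },
    ],
)

print(response.choices[0].finish_reason)  # comes back as None for me on gpt-4-vision-preview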

Response I get:

{
  "id": "chatcmpl-8IHvXLeFj1fiaxKiYPFJpipihiVIL",
  "choices": [
    {
      "finish_reason": null,
      "index": 0,
      "message": {
        "content": <some content output>,
        "role": "assistant",
        "function_call": null,
        "tool_calls": null
      },
      "finish_details": {
        "type": "stop",
        "stop": "<|fim_suffix|>"
      }
    }
  ],
  "created": 1699369507,
  "model": "gpt-4-1106-vision-preview",
  "object": "chat.completion",
  "system_fingerprint": null,
  "usage": {
    "completion_tokens": 101,
    "prompt_tokens": 141,
    "total_tokens": 242
  }
}
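
In the meantime I'm handling it defensively like this (a rough sketch, assuming the v1.x Python client keeps extra fields on its pydantic models, so model_dump() should still surface finish_details even though the typed attributes don't declare it):

choice = response.choices[0]

# The typed finish_reason attribute is None here, but the raw payload
# still carries finish_details. model_dump() should include extra fields
# the library's types don't declare.
data = choice.model_dump()

finish = data.get("finish_reason")
if finish is None:
    finish = (data.get("finish_details") or {}).get("type")

print(finish)  # "stop" for the response above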

This is also an issue in the Node.js / TypeScript library (as of version 4.18.0).
GitHub issue: ChatCompletionStream.fromReadableStream errors due to missing finish_reason for choice · Issue #499 · openai/openai-node

So I just tested a simple prompt again with GPT-4V (a single message with one text part and one image part), and this time around finish_reason is populated with "stop" (quick check below).
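
Roughly what I checked (same client and response objects as in the earlier sketch):

choice = response.choices[0]
print(choice.finish_reason)                       # now comes back as "stop"
print(choice.model_dump().get("finish_details"))  # checking whether the old field is still present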

@petrgazarov Are you seeing the same thing on your end?

The Node library now works properly. According to the Node library maintainer, this was fixed on the API side.
