Get logprobs for prompt tokens, not just for completion?

In the previous API version, when requesting `logprobs=1`, the response contained logprobs for all tokens, including the prompt itself.

For example, if the prompt was “hello world” and `max_tokens` was 1, we would get 3 logprobs: two for the prompt tokens and one for the completion token.
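For reference, this is roughly what that looked like on the legacy Completions endpoint. A minimal sketch, assuming the pre-1.0 `openai` Python client and a base model that still accepts `echo` (the model name here is just an example):

```python
# Sketch of the legacy behavior: echo=True folds the prompt tokens
# into the returned logprobs alongside the completion token.
import openai

resp = openai.Completion.create(
    model="davinci-002",  # assumption: a base model that still supports echo
    prompt="hello world",
    max_tokens=1,
    logprobs=1,
    echo=True,
)

lp = resp["choices"][0]["logprobs"]
# tokens/token_logprobs cover both the prompt and the completion;
# the very first prompt token has no logprob (None).
for token, logprob in zip(lp["tokens"], lp["token_logprobs"]):
    print(repr(token), logprob)
```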

In the current API, the response only contains logprobs for the completion tokens, not for the prompt.
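For comparison, a sketch of the current behavior with the 1.x `openai` Python client: `logprobs=True` on chat completions returns logprobs for the generated tokens only, and there is no `echo` parameter to include the prompt:

```python
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello world"}],
    max_tokens=1,
    logprobs=True,
)

# choices[0].logprobs.content covers only the completion token(s);
# nothing is returned for the prompt.
for item in resp.choices[0].logprobs.content:
    print(repr(item.token), item.logprob)
```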

The API documentation is incomplete and confusing on this point.

Is it still possible to get logprobs for the prompt tokens themselves?

As of October, echo with logprobs on the completions endpoint is no longer available for the gpt-3.5-turbo-instruct model (leaving only the base models).

Since logprobs only came to chat completions afterwards, this policy likely extends, for the same internal reasons, to the other gpt-3.5 and gpt-4 models.

(You can speculate on what analysis OpenAI doesn’t want done on the perplexity of input text under your control…)
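For what it’s worth, here is the kind of analysis that trick enabled: scoring a prompt’s perplexity without generating anything. A sketch assuming the pre-1.0 client and a base model where `echo` still works:

```python
# Sketch: score a prompt's perplexity via echo, generating zero tokens.
import math
import openai

resp = openai.Completion.create(
    model="davinci-002",  # assumption: any remaining base model
    prompt="hello world",
    max_tokens=0,         # generate nothing; we only want the prompt scored
    logprobs=0,
    echo=True,
)

logprobs = resp["choices"][0]["logprobs"]["token_logprobs"]
scored = [lp for lp in logprobs if lp is not None]  # first token has no logprob
perplexity = math.exp(-sum(scored) / len(scored))
print(f"prompt perplexity: {perplexity:.2f}")
```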


Thank you! I wonder why this holds for the legacy models as well, though.