Is `logprobs` being deprecated, or will it eventually be available for newer models?

Hi all,

Some applications of LLMs involve using them to score, not just generate, completions. For example, we may wish to generate several completions under one prompt, and then rank them according to how likely they are under some other prompt.

This is possible using the `logprobs` option in the completions endpoint, but it is not yet supported in the chat completions endpoint, the only endpoint through which the gpt-3.5 and gpt-4 model families can be accessed.
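To make the scoring use case concrete, here is a minimal sketch of ranking candidate completions by their total log-probability. The helper names are hypothetical, and the per-token logprob lists are illustrative stand-ins for what the completions endpoint returns when `logprobs` is set, not real model output:

```python
def sequence_logprob(token_logprobs):
    """Total log-probability of a completion is the sum of its per-token logprobs."""
    return sum(token_logprobs)

def rank_completions(candidates):
    """Rank (text, token_logprobs) pairs from most to least likely."""
    return sorted(candidates, key=lambda c: sequence_logprob(c[1]), reverse=True)

# Illustrative per-token logprobs, shaped like what the endpoint returns.
candidates = [
    ("The sky is blue.", [-0.2, -0.1, -0.3]),
    ("The sky is cheese.", [-0.2, -0.1, -7.5]),
]

ranked = rank_completions(candidates)
print(ranked[0][0])  # prints "The sky is blue."
```

The same sums can be computed under a different prompt by re-sending the candidate text with `echo` and `logprobs` enabled, which is what makes re-ranking under a second prompt possible.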

Does OpenAI plan to eventually give API access to the log sampling probabilities of gpt-3.5 and gpt-4? Or to continue improving the davinci-family models, which do support this feature? Or has the organization decided to move away from supporting these use cases?

Thanks!

Bump.
I have the same question; I would also like to access `logprobs` for chat completions.

Same! I would really love to use the `logprobs` parameter.

Agreed! `logprobs` is super important for probabilistic inference. Most distributions libraries provide a pair of methods: `.sample()` to draw new samples and `.logprobs()` to evaluate existing samples. On top of these two methods one can build lots of probabilistic machinery. We'd love to build that machinery 🙂

Is there any news from anyone who works at OpenAI about this?

The lack of `logprobs` also obstructs using newer models (GPT-3.5/4) with Forward-Looking Active REtrieval augmented generation (FLARE).

There is a recent quote here: they're working on it.

`logprobs` has been released for the Chat Completions API! Thank you for your patience and support.

https://platform.openai.com/docs/api-reference/chat/create#chat-create-logprobs
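For anyone picking this up: with `logprobs=True` set on the request, each choice in the response carries per-token entries under `logprobs.content`. Here is a minimal sketch of scoring a whole message from that structure; the `response` dict below mimics the documented JSON shape rather than a live API call, with made-up numbers:

```python
# Fragment of a Chat Completions response with logprobs enabled,
# shaped like the documented JSON (illustrative values, not live output).
response = {
    "choices": [
        {
            "message": {"content": "Hello!"},
            "logprobs": {
                "content": [
                    {"token": "Hello", "logprob": -0.02},
                    {"token": "!", "logprob": -0.3},
                ]
            },
        }
    ]
}

def choice_logprob(choice):
    """Sum the per-token logprobs of one choice to score the full message."""
    return sum(entry["logprob"] for entry in choice["logprobs"]["content"])

score = choice_logprob(response["choices"][0])  # approximately -0.32
```

This gives back the sequence-scoring workflow from the original post, now on the chat endpoint.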
