No probabilities for ChatGPT API responses?

Hi. I need to build a specific pipeline on top of the ChatGPT API.

So far most runs go reasonably well, but…

Sometimes the output comes out broken.

Since this only happens occasionally, it seems I could run the same prompt a few times and then compare the probabilities the model assigns to each answer.
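Roughly what I mean (a minimal sketch; the chat-API call is only shown in comments, and the model name there is just an example):

```python
from collections import Counter

def majority_vote(answers):
    """Pick the answer that occurs most often among several samples
    of the same prompt."""
    counts = Counter(a.strip() for a in answers)
    best, _ = counts.most_common(1)[0]
    return best

# Hypothetical usage against the chat endpoint (needs an API key):
# resp = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo", messages=[...], n=5)
# answers = [c.message.content for c in resp.choices]
# chosen = majority_vote(answers)
```

This only compares the sampled texts by frequency, though; what I actually want is to compare them by model probability.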

But:

  • for the earlier non-chat models (like text-davinci-003) I could generate several answers and then choose one of them, either via the best_of parameter or by generating a few samples and computing the overall probability of each text from its logprobs;
  • for chat models I only see the n parameter, which generates several responses. But I see neither something like best_of on the OpenAI side nor an option to return probabilities for those answers (tokens, to be precise). Nor do I see any mention of re-ranking these n outputs.

So is there really no way to do what I need? Neither a best_of on the OpenAI side nor an option to compute it on my side?


The lack of logprobs also blocks using the newer models (GPT-3.5/GPT-4) with Forward-Looking Active REtrieval augmented generation (FLARE).

Newer topic about this: #104769