Hi. I need to make a specific pipeline using ChatGPT.
So far most runs seem to go more or less well, but sometimes things break.
Since the failures only happen with low probability, it seems I could run the same prompt a few times and then compare the probabilities the language model assigns to each answer.
But:
- for the previous non-chat models (like `davinci-003`) I had the option to generate a few answers and then choose one of them (either via the `best_of` parameter, or by generating a few samples and computing the overall probability of each text from `logprobs`)
- for chat models I only see the `n` parameter, which generates several responses. But I see neither something like `best_of` on the OpenAI side nor an option to return probabilities for those answers (tokens, to be more precise). Nor do I see any mention of re-sorting these `n` outputs.
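For what it's worth, the client-side half of this (the part the old completions endpoint allowed) is just summing per-token log-probabilities per sample and keeping the highest-scoring text. A minimal sketch of that re-ranking, with made-up sample data standing in for what `logprobs` would return:

```python
import math

def sequence_logprob(token_logprobs):
    # Log-probability of the whole text is the sum of the
    # per-token log-probabilities returned by the API.
    return sum(token_logprobs)

def rank_samples(samples):
    # samples: list of (text, token_logprobs) pairs.
    # Sort from most to least likely, emulating best_of client-side.
    return sorted(samples, key=lambda s: sequence_logprob(s[1]), reverse=True)

# Hypothetical data: in practice each list would come from the
# response's logprobs field for one generated sample.
samples = [
    ("answer A", [-0.1, -0.3, -0.2]),  # total logprob -0.6
    ("answer B", [-0.5, -1.2, -0.4]),  # total logprob -2.1
]

best_text, best_lps = rank_samples(samples)[0]
print(best_text)                                   # most likely sample
print(math.exp(sequence_logprob(best_lps)))        # its probability
```

(One detail to decide for yourself: whether to compare raw sums or length-normalized averages, since longer answers accumulate more negative log-probability.)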
So is there really no way to do what I need: neither `best_of` on the OpenAI side nor an option to do it on my side?