Hi. I need to make a specific pipeline using ChatGPT.
So far most runs go more or less well, but sometimes the output breaks.
Since this only happens occasionally, it seems I could run the same prompt a few times and then compare the probabilities the language model assigns to each answer.
- For the older non-chat completion models (like davinci-003) I had the option to generate a few answers and then choose one of them, either via the `best_of` parameter or by generating a few samples and computing the overall probability of each text from its returned token logprobs.
- For chat models I only see the `n` parameter, which generates several responses. But I see neither something like `best_of` on the OpenAI side nor an option to return probabilities for those answers (for their tokens, to be more precise), and no mention of re-sorting these responses by likelihood.
So is there no way to do what I need: neither `best_of` on the OpenAI side, nor an option to do the re-ranking on my own side?
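For context on the client-side option: if you can get per-token logprobs for each sampled answer (however you obtain them), the `best_of`-style re-sorting itself is easy to implement yourself. A minimal sketch in Python, with made-up numbers standing in for API output (the `samples` data and `rank_by_logprob` helper are illustrative, not part of any OpenAI SDK):

```python
import math

def rank_by_logprob(samples):
    """Re-rank sampled completions by total sequence log-probability.

    `samples` is a list of (text, token_logprobs) pairs, where
    token_logprobs is the list of per-token log-probabilities for
    that completion. Summing them gives the log of the joint
    probability of the whole answer, so sorting in descending order
    puts the most likely answer first, which is roughly what
    `best_of` did for the old completion models.
    """
    scored = [(sum(lps), text) for text, lps in samples]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Return (text, log-probability, probability) triples, best first.
    return [(text, score, math.exp(score)) for score, text in scored]

# Synthetic example: three sampled answers with per-token logprobs.
samples = [
    ("answer A", [-0.1, -0.2, -0.3]),    # sum = -0.6
    ("answer B", [-1.0, -0.5]),          # sum = -1.5
    ("answer C", [-0.05, -0.15, -0.1]),  # sum = -0.3
]
ranked = rank_by_logprob(samples)
best_text, best_logprob, best_prob = ranked[0]
```

One caveat: summing logprobs favors shorter answers, since every extra token can only lower the total; if your candidate answers vary a lot in length, dividing by the token count (mean logprob) is a common alternative scoring rule.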