New Fine-Tuning logprobs

Dear Community,

I have been working on automating categorization in a help desk system. Until now, I had opted for fine-tuning, specifically the Curie model, due to its cost-benefit ratio.

However, with the recent update of the fine-tuning system, I am facing some uncertainties. In the previous version, I could access log probabilities (logprobs), which was extremely useful: we used them to calculate thresholds that significantly increased the effectiveness of the product we were building on top of the model. Unfortunately, I cannot find this functionality in the new version of fine-tuning.
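For concreteness, the threshold logic I mean looks roughly like this (a minimal sketch: the exp conversion is standard, but the 0.9 cutoff and the function name are illustrative, not our actual production values):

```python
import math

def categorize_with_threshold(top_logprobs: dict[str, float], threshold: float = 0.9):
    """Pick the most likely category token, but only accept it when the
    model's confidence clears a threshold; otherwise defer to a human.

    top_logprobs maps candidate tokens to their log probabilities, as
    returned in the `logprobs.top_logprobs` field of a completion.
    """
    token, logprob = max(top_logprobs.items(), key=lambda kv: kv[1])
    confidence = math.exp(logprob)  # convert log probability back to probability
    if confidence >= threshold:
        return token, confidence
    return None, confidence  # below threshold: route the ticket to manual triage
```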

Secondly, according to the updated documentation, only three models will be available:

  • gpt-3.5-turbo-0613 (recommended)
  • babbage-002
  • davinci-002

I would like to know if the training process for the babbage-002 and davinci-002 models is identical to that of the gpt-3.5-turbo-0613 model.

I would greatly appreciate any guidance or help you could provide me on these matters.

Best regards.

The completion models support logprobs. This can still be explored in the API playground.
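For example, a raw completions request can ask for them. Here is a sketch using the pre-1.0 Python SDK (the version current when these models shipped); the prompt and model name are placeholders:

```python
import openai  # pre-1.0 SDK; openai.api_key must be set first

response = openai.Completion.create(
    model="babbage-002",  # or your fine-tuned model ID
    prompt="Ticket: printer offline\nCategory:",
    max_tokens=1,
    temperature=0,
    logprobs=5,  # return the top-5 token log probabilities per position
)

# Log probabilities for the first generated token:
top = response["choices"][0]["logprobs"]["top_logprobs"][0]
print(top)  # e.g. {" Hardware": -0.12, " Network": -2.3, ...}
```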

There is zero documentation on completion-model training, on endpoint differences, or on why it would require a ChatML-enforcing training file, nor on whether these models can be reached by the old endpoint for now. Exploration is the only way to answer.
