I have been working on automating ticket categorization in a help desk system. Until now, I had opted for fine-tuning, specifically with the Curie model, due to its cost-benefit ratio.
However, with the recent update to the fine-tuning system, I am facing some uncertainties. In the previous version, I had access to log probabilities (logprobs), which was extremely useful: we used this information to calculate confidence thresholds that significantly increased the efficacy of the product in which we were implementing the technology. Unfortunately, I cannot find this functionality in the new version of fine-tuning.
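For context, here is a minimal sketch of the kind of thresholding we did with the logprobs. The helper name and the threshold value are my own illustration, not part of any API; the idea is simply that the logprob of the predicted category token converts to a probability, which we compare against a cutoff before auto-assigning the category:

```python
import math

# Hypothetical helper illustrating our thresholding logic: given the logprob
# of the category token returned by the (legacy) Completions endpoint, only
# auto-assign the category when the implied probability clears a cutoff.
def accept_category(token_logprob: float, threshold: float = 0.85) -> bool:
    """Return True if exp(logprob) meets or exceeds the confidence threshold."""
    return math.exp(token_logprob) >= threshold

# A logprob of -0.05 implies ~95% confidence, so it passes;
# a logprob of -1.0 implies ~37% confidence, so it is routed to a human.
print(accept_category(-0.05))  # True
print(accept_category(-1.0))   # False
```

Tickets that fail the threshold fall back to manual triage, which is where most of the efficacy gain came from.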
Secondly, according to the updated documentation, only three models will be available for fine-tuning:
- babbage-002
- davinci-002
- gpt-3.5-turbo-0613 (recommended)
I would like to know if the training process for the babbage-002 and davinci-002 models is identical to that of the gpt-3.5-turbo-0613 model.
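From what I can tell in the updated fine-tuning guide (please correct me if I have this wrong), one difference is the training data format: gpt-3.5-turbo expects chat-formatted examples, while babbage-002 and davinci-002 keep the legacy prompt/completion format. One line of each JSONL file would look roughly like this, with the ticket text and category label below being my own placeholder example:

```python
import json

# Chat format, as the docs describe for fine-tuning gpt-3.5-turbo:
chat_example = {
    "messages": [
        {"role": "user", "content": "Ticket: printer shows offline"},
        {"role": "assistant", "content": "hardware"},
    ]
}

# Legacy prompt/completion format, as used for babbage-002 / davinci-002:
legacy_example = {
    "prompt": "Ticket: printer shows offline\n\n###\n\n",
    "completion": " hardware",
}

# Each training file is one JSON object per line (JSONL).
print(json.dumps(chat_example))
print(json.dumps(legacy_example))
```

So even if the job-creation steps are the same, the files themselves would need different preparation, which is part of what I would like confirmed.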
I would greatly appreciate any guidance or help you could provide me on these matters.