GPT 3.5 turbo - Is fine-tuning coming? Please respond

Are there plans to allow fine-tuning of GPT-3.5 turbo?

If so, can you guys give us a ballpark/non-official/guess estimate of when? If not, just say no, and I can use the currently existing models.

GPT-3.5 is easy to manage, but the answers are not super reliable and are prone to errors that are easily solved by fine-tuning.

Please respond, so everyone can plan around your roadmap.

Examples of annoying errors:

asked for the output in JSON: …["Operating Expenses",(1927.2),(1778.1),"8.4%",(5885.4),(5273.9),"11.6%"]…
(5885.4) is not a number; it should be either -5885.4 or "(5885.4)".

Other problems: “Unreliable summary length”

A 2k-token prompt is being condensed to summaries anywhere from 100 to 500 tokens in length (same prompt).

“Please Respond”
Very useful. I am sure the devs will read this and get back to you very shortly.
(5885.4) actually is a number. Accountants surround numbers in parentheses to indicate they are negative.
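If you want plain JSON numbers anyway, you can also normalize the accounting notation yourself after the model responds. A minimal post-processing sketch (`normalize_accounting` is a made-up helper name, not part of any library):

```python
import re

def normalize_accounting(value):
    """Turn accountant-style "(5885.4)" into -5885.4; pass plain numbers through."""
    s = str(value).strip().replace(",", "")
    m = re.fullmatch(r"\((\d+(?:\.\d+)?)\)", s)
    if m:
        return -float(m.group(1))
    return float(s)
```

That way a stray parenthesized value in the model output does not break your downstream parsing.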
You need to provide stricter prompting to get consistent results for JSON and summary tasks. Explicitly show examples of correct and incorrect output.
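For example, in the chat `messages` format that might look like this (the wording below is illustrative, not a tested prompt):

```python
# Show the model correct vs. incorrect output explicitly in the prompt.
messages = [
    {"role": "system", "content": (
        "Extract rows as a JSON array. Negative amounts must be plain JSON "
        "numbers, e.g. -5885.4, never parenthesized like (5885.4)."
    )},
    {"role": "user", "content": 'Correct: ["Operating Expenses", -5885.4, "11.6%"]'},
    {"role": "user", "content": 'Incorrect: ["Operating Expenses", (5885.4), "11.6%"]'},
]
```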

Write a script to generate a few hundred slight variations on the instructions and prompt so you can narrow in on which prompt is most stable.
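A quick sketch of that kind of script, just permuting interchangeable prompt fragments (all the fragment wording here is made up, swap in your own):

```python
import itertools

# Hypothetical prompt fragments to permute; a few options per slot
# multiply out to many candidate prompts.
openers = ["Summarize the following report.", "Provide a summary of the report below."]
format_rules = ["Output valid JSON only.", "Respond with a single JSON array."]
length_rules = ["Use roughly 300 tokens.", "Keep the summary between 250 and 350 tokens."]

variants = [
    " ".join(parts)
    for parts in itertools.product(openers, format_rules, length_rules)
]
```

Run each variant a few times against the same input and keep the one whose outputs vary least.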
You should also try setting temperature to zero. That way you know GPT is always picking what it thinks is most likely, which is very helpful when figuring out prompts.
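Concretely, that is just one request parameter (shown here as a params dict; the commented-out call assumes the `openai` Python SDK and an API key):

```python
# temperature=0 makes the model greedily pick its most-likely token each step,
# which makes prompt comparisons far less noisy.
params = {
    "model": "gpt-3.5-turbo",
    "temperature": 0,
    "messages": [{"role": "user", "content": "Summarize the report below. ..."}],
}
# response = openai.ChatCompletion.create(**params)  # needs openai SDK + API key
```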


Hi Chris,
Thanks for the tip on temperature and variations.