Are there plans to allow fine-tuning of GPT-3.5 Turbo?
If so, can you give us a ballpark/non-official/guess estimate of when? If not, just say no, and I can use the currently existing models.
GPT-3.5 is easy to manage, but its answers are not super reliable and are prone to errors that would be easily solved by fine-tuning.
Please respond, so everyone can plan around your roadmap.
Examples of annoying errors:
Asked for the output in JSON: …["Operating Expenses",(1927.2),(1778.1),"8.4%",(5885.4),(5273.9),"11.6%"]…
(5885.4) is not a valid JSON number; it should be either -5885.4 or the string "(5885.4)".
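Until fine-tuning can fix this at the source, the accounting-style parenthesized negatives can be patched up in post-processing before parsing. A minimal sketch (the function name and regex are mine, not part of any OpenAI API):

```python
import json
import re

def normalize_accounting_numbers(text: str) -> str:
    """Rewrite accounting-style negatives like (5885.4) into valid
    JSON negatives like -5885.4 so the output parses cleanly."""
    return re.sub(r"\((\d+(?:\.\d+)?)\)", r"-\1", text)

# Sample of the broken model output described above
raw = '["Operating Expenses",(1927.2),(1778.1),"8.4%",(5885.4),(5273.9),"11.6%"]'
data = json.loads(normalize_accounting_numbers(raw))
print(data)
# ['Operating Expenses', -1927.2, -1778.1, '8.4%', -5885.4, -5273.9, '11.6%']
```

This is only a workaround, of course; it treats every parenthesized number as a negative, which may not hold for all outputs.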
Other problems: unreliable summary length.
A 2k-token prompt produces summaries anywhere from 100 to 500 tokens in length (same prompt).