Fine-tuning GPT-3.5 Turbo Instruct

Hi everyone,

I’ve come to this forum seeking some clarity on the feasibility of fine-tuning the GPT-3.5 Turbo Instruct model, specifically to improve its precision in calculations. If it’s possible, I’d greatly appreciate any feedback or guidance on the process. Thank you in advance for your insights!

you may want to try tool-calling instead :slight_smile:

While I won’t discourage you from trying to build a dataset for calculations, you’ll need to cleverly enumerate the problem space to avoid over-fitting; a rough sketch of what that could look like is below.
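For illustration only, here is a minimal, hypothetical sketch of generating such a dataset in the chat fine-tuning JSONL format (which would apply to a fine-tunable chat model like gpt-3.5-turbo, not the instruct model). The sampling ranges, file name, and example count are arbitrary assumptions:

```python
import json
import random

# Hypothetical arithmetic fine-tuning dataset (JSONL, chat format).
# Sampling uniformly over a wide range, rather than hand-picking examples,
# is one way to enumerate the space and reduce over-fitting to specific numbers.
random.seed(0)

with open("arithmetic_train.jsonl", "w") as f:
    for _ in range(1000):
        a, b = random.randint(0, 10**6), random.randint(0, 10**6)
        op = random.choice(["+", "-", "*"])
        answer = {"+": a + b, "-": a - b, "*": a * b}[op]  # exact result computed in Python
        record = {
            "messages": [
                {"role": "user", "content": f"What is {a} {op} {b}?"},
                {"role": "assistant", "content": str(answer)},
            ]
        }
        f.write(json.dumps(record) + "\n")
```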

Moreover, the real blocker preventing language models from doing precise calculations is likely the tokenizer, which fine-tuning would not address.
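To see the tokenizer point concretely, a quick check with the tiktoken library (assuming it is installed) shows that a long number is not handled digit by digit but split into multi-digit chunks:

```python
import tiktoken

# gpt-3.5-turbo models use the cl100k_base encoding.
enc = tiktoken.get_encoding("cl100k_base")

number = "123456789"
token_ids = enc.encode(number)
pieces = [enc.decode([t]) for t in token_ids]

print(token_ids)  # several ids, not one per digit
print(pieces)     # the number split into multi-digit chunks, e.g. '123', '456', '789'
```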


It’s not possible to fine-tune gpt-3.5-turbo-instruct at this time.


If your aim is to improve mathematical calculations specifically, I suggest you look into function/tool calls, where the model invokes some kind of calculator and returns the result to the user.

This is akin to using Code Interpreter via the Assistants API.
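As a non-authoritative sketch of that pattern with the openai Python SDK (v1+) and Chat Completions tool calling: the `calculate` helper and its expression handling are placeholders you would harden in practice, and it uses the chat model rather than the instruct completion model, since the completions endpoint has no tools parameter.

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical calculator tool; use a real expression parser instead of eval in production.
def calculate(expression: str) -> str:
    return str(eval(expression, {"__builtins__": {}}, {}))

tools = [{
    "type": "function",
    "function": {
        "name": "calculate",
        "description": "Evaluate an arithmetic expression and return the exact result.",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

messages = [{"role": "user", "content": "What is 48293 * 1877?"}]

# First call: the model decides to invoke the calculator instead of guessing digits.
response = client.chat.completions.create(
    model="gpt-3.5-turbo", messages=messages, tools=tools
)
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
result = calculate(args["expression"])

# Second call: feed the exact result back so the model can phrase the final answer.
messages.append(response.choices[0].message)
messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": result})
final = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(final.choices[0].message.content)
```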
