What are the costs of fine-tuning?


I have been a ChatGPT user for some time. We offer coaching courses on a Moodle platform and would like to know what it costs to fine-tune such a GPT model. We have an API interface and an API key. If we want to train the GPT-4 model (not yet available for fine-tuning) for a specific area, what would the cost be? On the Pricing page the whole thing looks manageable. How expensive could it get if I train GPT-3.5 Turbo or GPT-4 (which one is advisable?) on a specific data set, e.g. one to a few books?

Does anyone have any experience with this? I’ve heard that this can get quite expensive (up to $20,000/month) on Azure infrastructure, for example.

Alternatively there is prompting (supplying the material in the prompt itself), but as far as I know that eats into the available context length, so it is less flexible!?

This all largely depends on your specific usage. Fine-tuning is a one-time cost that depends on the size of your data, the model you choose, and other factors. After that you pay per request based on your usage.
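As a back-of-the-envelope sketch of that one-time cost: training is typically billed per token, multiplied by the number of epochs. The per-token price and the words-to-tokens ratio below are placeholder assumptions, not real OpenAI rates — check the Pricing page for current numbers:

```python
# Rough fine-tuning cost estimate. The price used here is a PLACEHOLDER;
# check the OpenAI Pricing page for the real per-token training rates.
def training_cost(dataset_tokens: int, epochs: int, price_per_1k_tokens: float) -> float:
    """One-time training cost: billed tokens = dataset tokens x epochs."""
    return dataset_tokens * epochs * price_per_1k_tokens / 1000

# Example: three ~80,000-word books at a crude ~4/3 tokens-per-word
# conversion is roughly 320,000 tokens; assume 4 epochs and an
# assumed $0.008 per 1K training tokens.
books_tokens = 3 * 80_000 * 4 // 3
print(f"${training_cost(books_tokens, 4, 0.008):.2f}")  # prints "$10.24"
```

The point is mainly that for a few books' worth of text, the one-time training cost is small compared to the ongoing per-request usage cost.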

We don’t know what the prices will be for fine-tuned GPT, and won’t know until it’s released later this year.



Thanks for the reply!

Our model will be GPT-4; the amount of data is not that much, maybe 1-3 books from the domain, or 100-300 question-answer pairs. We need a rough estimate for the fine-tuning process.
The usage costs afterwards for customers are listed on the OpenAI Pricing page.

Did your company fine-tune already?

You might be better off using the embedding method; it’s far cheaper and much easier than preparing a fine-tuned model, and you can combine it with GPT-4. Try it and see whether the results are good enough for your use case.
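For context, the embedding approach usually works like this: embed your book chunks once, then at question time embed the user's query, retrieve the most similar chunks by cosine similarity, and pass those chunks to GPT-4 as context in the prompt. A minimal sketch of the retrieval step — the toy 3-dimensional vectors below stand in for real embedding vectors you would get back from an embeddings API:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_chunks(query_vec: list[float], chunk_vecs: list[list[float]], k: int = 2) -> list[int]:
    """Return the indices of the k chunks most similar to the query."""
    ranked = sorted(range(len(chunk_vecs)),
                    key=lambda i: cosine_similarity(query_vec, chunk_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy vectors standing in for real embeddings of three book chunks.
chunks = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 0.0, 1.0]]
query = [1.0, 0.05, 0.0]
print(top_chunks(query, chunks, k=2))  # prints "[0, 1]"
```

The selected chunks are then pasted into the GPT-4 prompt, so you only pay normal per-request inference prices plus a small one-time embedding cost, instead of training anything.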
