Use output from GPT-3 to do fine-tuning

I have an app that uses GPT-3 for summarization. To improve the summaries in different languages, I do few-shot prompting for each summary in each language. But that adds a lot of tokens to each request. And it gets expensive!!
I would like to have a preparation app that creates a lot of summaries in one session, feed those results into fine-tuning, and then use the fine-tuned model in my “real” app.

Has anyone tried this? Would it be cheaper?

If it works, it could be a new feature of the OpenAI interface!!

Hope to hear from you


It should work. You could automate a few-shot process over 200 examples, capture all the responses, and use those for your fine-tuning.
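The reply's suggestion could be sketched roughly like this: collect the few-shot outputs, then convert them into the prompt/completion JSONL format that legacy GPT-3 fine-tuning expects. The `collected` list here is a placeholder for whatever your preparation app captured; the separator and stop markers are just one common convention, not a requirement.

```python
import json

# Placeholder data: each entry pairs a source text with the summary
# GPT-3 produced for it via the (expensive) few-shot prompt.
collected = [
    {"text": "Long article text ...", "summary": "Short summary ..."},
    {"text": "Another article ...", "summary": "Another summary ..."},
]

def to_finetune_record(example):
    # Legacy GPT-3 fine-tuning uses JSONL with "prompt"/"completion" keys.
    # A fixed suffix on the prompt and a trailing newline on the completion
    # act as separator and stop markers.
    return {
        "prompt": example["text"] + "\n\nSummary:",
        "completion": " " + example["summary"] + "\n",
    }

# Write one JSON object per line, ready to upload as training data.
with open("summaries_finetune.jsonl", "w", encoding="utf-8") as f:
    for ex in collected:
        f.write(json.dumps(to_finetune_record(ex)) + "\n")
```

Once fine-tuned, your "real" app would only need a short prompt instead of the full few-shot examples, which is where the token savings come from.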