I have an app that uses GPT-3 for summarization. To improve the summaries in different languages, I use few-shot prompting with examples for each language. But that adds a lot of tokens to every summary, and it gets expensive!!
I would like to build a preparation app that generates a lot of summaries in one session, feed that output into fine-tuning, and then use the fine-tuned model in my “real” app.
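Roughly what I have in mind for the preparation step, as a minimal sketch. Everything here is placeholder: the example texts, the separator and “ END” stop conventions, and the file name are just illustrations of the prompt/completion JSONL format that GPT-3 fine-tuning expects.

```python
import json

# Placeholder data: source texts paired with the summaries my
# preparation app would collect via few-shot prompting.
examples = [
    {"text": "Lorem ipsum dolor sit amet...", "summary": "A short Latin filler passage."},
    {"text": "Hej och välkommen till vår butik...", "summary": "A Swedish store greeting."},
]

def build_finetune_records(examples):
    """Convert (text, summary) pairs into the prompt/completion
    JSONL records used for GPT-3 fine-tuning."""
    records = []
    for ex in examples:
        records.append({
            # The "###" separator and " END" stop sequence are common
            # conventions, not requirements.
            "prompt": f"Summarize:\n{ex['text']}\n\n###\n\n",
            "completion": " " + ex["summary"] + " END",
        })
    return records

def write_jsonl(records, path):
    """Write one JSON object per line, as the fine-tuning upload expects."""
    with open(path, "w", encoding="utf-8") as f:
        for r in records:
            f.write(json.dumps(r, ensure_ascii=False) + "\n")

records = build_finetune_records(examples)
write_jsonl(records, "summaries_finetune.jsonl")
```

The resulting file would then be uploaded with the OpenAI CLI/API to create the fine-tune, and the “real” app could call the fine-tuned model without the few-shot examples in every request.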
Has anyone tried this? Would it actually get cheaper?
If it works, it could even become a built-in feature of the OpenAI interface!!
Hope to hear from you!