Some tutorials can help you do that in Python.
See here
and here
Can you share your process? I tried using the Playground, and couldn't get satisfactory results.
Did you see that the fine-tuning endpoint is now in beta? You should be able to create a fine-tuned model specifically with your 5,000 book descriptions now! OpenAI API
I did see that. I was a bit confused. Is experimenting with fine-tuning billed by the token, even in the beta? Or is it currently free?
Tuning is free; generation is billed by the token. In theory, you will ultimately use fewer tokens because you won't need prompts.
Can you think of a way to determine whether a "proposition," such as the project I'm working on, is something GPT-3 can actually support? I loathe the idea of working with fine-tuning, spending money, and still not being able to determine if the proposed project is viable. I ain't got millions…
You'll just have to experiment to find out! I doubt it will cost millions. CURIE is 10x cheaper than DAVINCI, so you're talking less than $0.01 per book description unless it's a very long description!
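As a back-of-the-envelope check on the cost claim above, here is a minimal sketch. The per-1K-token prices are placeholder assumptions for illustration only, not quoted OpenAI pricing; check the official pricing page before budgeting.

```python
# Assumed USD prices per 1K tokens -- illustrative only, NOT real pricing.
PRICE_PER_1K = {"curie": 0.006, "davinci": 0.06}

def estimate_cost(n_items, avg_tokens_per_item, model="curie"):
    """Rough generation cost for n_items completions of the given average length."""
    total_tokens = n_items * avg_tokens_per_item
    return total_tokens / 1000 * PRICE_PER_1K[model]

# 5,000 descriptions at ~100 tokens each:
curie_cost = estimate_cost(5000, 100, "curie")      # a few dollars
davinci_cost = estimate_cost(5000, 100, "davinci")  # roughly 10x that
```

Even with generous token estimates, the whole dataset stays in the single-digit-to-tens-of-dollars range under these assumed prices, nowhere near millions.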
My plan is not to use the system for book summaries. That's only an example I've been using.
Be that as it may, I have not seen or heard of my idea in action. But it seems that with fine-tuning, I'll have to see if I can get the data I need, then see if that data set can be converted to JSON. Many hoops to jump through just for an initial proof of concept.
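For the data-conversion step, the fine-tuning endpoint expects JSONL: one JSON object per line with `prompt` and `completion` keys. A minimal sketch of that conversion, using hypothetical records in place of your real dataset:

```python
import json

# Hypothetical records standing in for your actual dataset.
records = [
    {"title": "Example Book", "description": "A short description of the book."},
    {"title": "Another Book", "description": "Another description."},
]

def to_jsonl(records, path):
    """Write prompt/completion pairs in the JSONL format the
    fine-tuning endpoint expects (one JSON object per line)."""
    with open(path, "w") as f:
        for rec in records:
            pair = {
                # The "###" separator and leading space on the completion
                # follow common fine-tuning data conventions.
                "prompt": f"Title: {rec['title']}\n\n###\n\n",
                "completion": " " + rec["description"],
            }
            f.write(json.dumps(pair) + "\n")

to_jsonl(records, "train.jsonl")
```

Once the file validates, it can be uploaded for a fine-tuning job, so the "parse to JSON" hoop is a short script rather than a major hurdle.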