Use "private" dataset as basis for AI responses

Some tutorials can help you do that in Python; see here and here.
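Tutorials of this kind usually show some variant of "retrieve the relevant private text, then include it in the prompt." As a rough illustration (not the content of the linked tutorials), here is a minimal Python sketch using the openai library as it existed around the time of this thread; the file name, the davinci engine, and the naive keyword lookup are all assumptions.

```python
# Minimal sketch: answer a question using a "private" dataset as prompt context.
# Assumes the 2021-era openai Python library; the file name, model choice, and
# the naive keyword lookup are illustrative only.
import json
import openai

openai.api_key = "sk-..."  # your API key

# Load the private dataset (here: one JSON record per line with a "text" field).
with open("book_descriptions.jsonl") as f:
    records = [json.loads(line) for line in f]

def answer(question: str) -> str:
    # Naive retrieval: keep the few records that share the most words with the question.
    words = set(question.lower().split())
    relevant = sorted(
        records,
        key=lambda r: -len(words & set(r["text"].lower().split())),
    )[:3]
    context = "\n\n".join(r["text"] for r in relevant)
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQ: {question}\nA:"
    resp = openai.Completion.create(engine="davinci", prompt=prompt,
                                    max_tokens=100, temperature=0.2)
    return resp["choices"][0]["text"].strip()
```

A real application would likely replace the keyword lookup with embeddings-based search, but the overall shape is the same.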


Can you share your process? I tried using the Playground, and couldn't get satisfactory results.

Did you see that the fine-tuning endpoint is now in beta? You should be able to create a fine-tuned model specifically with your 5000 book descriptions now! (See the OpenAI API docs.)
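For reference, the beta workflow was roughly: prepare a JSONL file of prompt/completion pairs, upload it, start a fine-tune job, then query the resulting model. A minimal sketch with the openai Python library of that era is below; the file name and the curie base model are assumptions, and the same steps could also be run from the CLI with `openai api fine_tunes.create`.

```python
# Minimal sketch of the fine-tuning workflow as the endpoint worked in its
# 2021 beta. "descriptions.jsonl" and the curie base model are assumptions.
import openai

openai.api_key = "sk-..."

# 1. Upload the prepared JSONL training file (one prompt/completion pair per line).
upload = openai.File.create(file=open("descriptions.jsonl", "rb"), purpose="fine-tune")

# 2. Start a fine-tune job on top of a base model such as curie.
job = openai.FineTune.create(training_file=upload["id"], model="curie")
print("fine-tune job:", job["id"])

# 3. Once the job finishes, the fine-tuned model name can be queried like any other model:
# resp = openai.Completion.create(model=job["fine_tuned_model"], prompt="...", max_tokens=60)
```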


I did see that. I was a bit confused. Is experimenting with fine-tuning billed by the token, even in the beta? Or is it currently free?

Tuning is free, generation is billed by token. In theory, you will ultimately use fewer tokens because you won't need prompts.
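The saving comes from not having to resend instructions and examples with every request, since a fine-tuned model has that behaviour baked in. A back-of-the-envelope comparison (all token counts below are assumed, purely for illustration):

```python
# Rough illustration of the "fewer tokens" point; the counts are made up.
few_shot_prompt_tokens = 400   # instructions + example descriptions sent with every base-model call
query_tokens = 30              # the actual input you care about
completion_tokens = 60         # the generated output

base_model_request = few_shot_prompt_tokens + query_tokens + completion_tokens   # ~490 billed tokens
tuned_model_request = query_tokens + completion_tokens                           # ~90 billed tokens
print(base_model_request, tuned_model_request)
```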


Can you think of a way to determine whether a "proposition," such as the project I'm working on, is something GPT-3 can actually support? I loathe the idea of working with fine-tuning, spending money, and still not being able to determine whether the proposed project is viable. I ain't got millions…


You'll just have to experiment to find out! I doubt it will cost millions. CURIE is 10x cheaper than DAVINCI, so you're talking less than $0.01 per book description unless it's a very long description!
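To make that concrete, here is the arithmetic using the per-1K-token rates advertised around the time of this thread (treat the exact prices and the description length as assumptions, and check the current pricing page before relying on them):

```python
# Back-of-the-envelope check of the "less than $0.01 per description" claim.
davinci_per_1k = 0.06   # assumed price per 1K tokens
curie_per_1k = 0.006    # roughly 10x cheaper than davinci

tokens_per_description = 200   # assumed length of one book description
cost_per_description = tokens_per_description / 1000 * curie_per_1k
print(f"${cost_per_description:.4f}")            # ~$0.0012, well under $0.01
print(f"${5000 * cost_per_description:.2f}")     # ~$6.00 for all 5000 descriptions
```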


My plan is not to use the system for book summaries. That's only an example I've been using.


Be that as it may, I have not seen/heard of my idea in action. But it seems that with fine-tuning, I'll have to see if I can get the data I need, then see if that dataset can be converted to JSON. Many hoops to jump through just for an initial proof of concept.
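For what it's worth, the conversion step is usually the smallest hoop: the fine-tuning endpoint expects a JSON Lines file with one prompt/completion object per line. A minimal sketch, assuming the data starts as a CSV with "title" and "description" columns (the file and column names are made up):

```python
# Minimal sketch: convert a CSV of records into the JSONL format expected by
# the fine-tuning endpoint. File name, column names, and the separator/stop
# conventions below are assumptions, not details from this thread.
import csv
import json

with open("dataset.csv", newline="") as src, open("dataset.jsonl", "w") as dst:
    for row in csv.DictReader(src):
        record = {
            "prompt": row["title"] + "\n\n###\n\n",            # separator between prompt and completion
            "completion": " " + row["description"] + " END",   # leading space plus a stop sequence
        }
        dst.write(json.dumps(record) + "\n")
```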