Curie finetuning

Hi All,
I’m working on a research project to fine-tune GPT-3 on the sayings of Jesus to see what sort of results I get, fashioned along the lines of the “company voice” example. I have a dataset of Jesus’ sayings in multiple translations, roughly 69k in total. However, when I tried a fine-tune with Curie, the responses were not great: Curie veers all over the place and doesn’t seem to follow the thread of the conversation. I also expected more language from the fine-tuning training set to show up in the completions, but it rarely does. I wonder whether there are better ways to train Curie for this (I just used the default values), or whether I need to step up to Davinci (though if the cost is roughly 10x Curie’s, that would be prohibitive).
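For context, the run was set up roughly along these lines with the legacy openai Python library, leaving every hyperparameter at its default. The file name and suffix below are just placeholders, not my actual values:

```python
# Rough sketch of the fine-tune setup (legacy openai Python library, pre-v1).
# "jesus_sayings_prepared.jsonl" is a placeholder name for the prepared
# prompt/completion file (one {"prompt": "...", "completion": "..."} per line).
import openai

openai.api_key = "sk-..."  # set via environment variable in practice

# Upload the prepared JSONL training file
training_file = openai.File.create(
    file=open("jesus_sayings_prepared.jsonl", "rb"),
    purpose="fine-tune",
)

# Default-values run on Curie; the knobs I have not touched are
# n_epochs, batch_size, learning_rate_multiplier, and prompt_loss_weight.
fine_tune = openai.FineTune.create(
    training_file=training_file["id"],
    model="curie",
    suffix="jesus-sayings",  # placeholder suffix
)
print(fine_tune["id"])
```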
Thanks
Randy

Would need to see some samples of your training data and results to provide feedback.
