We would like to fine-tune a GPT model with our own data.
We are a platform where our clients have a conversation with a companion trained by us. The conversations are about situations from everyday life that the client has experienced and how they can behave better.
However, the companion conversations follow simple rules that we want to train an AI on.
I have already read up quite a bit on fine-tuning.
The companion conversations follow several steps (greeting, find project, actual scene, paraphrase, implementation). My idea was to train several models, each on its own step, and then run everything together at the end.
Since we have been doing this for many years, we have a lot of audio files, which we have already started to transcribe in order to create the training data.
Do you think this is possible? And do you have any tips? We will try it on our own for now, but maybe there is a professional we can give the job to. Thanks!
Fine-tuning is best used to teach the AI new ways of thinking and new patterns to find and match against; it is not a good way of teaching the model new data. Additionally, you are limited to the base models for fine-tuning, the most advanced of which is davinci-003. This model is not as powerful as GPT-3.5-Turbo or GPT-4, which is worth bearing in mind.
The best way to give the model access to new data is with embeddings. These allow large amounts of text to be encoded into a database and then recalled later via semantic search. (Documentation can be found here: OpenAI Platform)
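As a rough sketch of that retrieval pattern: you embed your transcript chunks once, store the vectors, and at query time rank chunks by cosine similarity to the query's vector. The tiny 3-dimensional "embeddings" below are invented placeholders just to show the mechanics; in practice each vector would come from an embeddings endpoint such as text-embedding-ada-002 and have ~1,500 dimensions.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec, store, top_k=2):
    # store: list of (text, vector) pairs, e.g. embedded transcript chunks.
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

# Toy vectors standing in for real API embeddings.
store = [
    ("greeting example", [1.0, 0.1, 0.0]),
    ("scene example", [0.0, 1.0, 0.2]),
    ("paraphrase example", [0.1, 0.0, 1.0]),
]
print(search([0.9, 0.2, 0.0], store, top_k=1))
```

The chunks returned by `search` would then be pasted into the prompt as context for the chat model's reply.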
Regarding your transcription endeavours, you might find the Whisper API useful, as it can accurately transcribe your recordings in many languages, very quickly and at a low cost compared to human transcription. (Blog entry and link can be found here: Introducing ChatGPT and Whisper APIs)
But that is exactly what it is supposed to achieve. The AI is supposed to think like a companion, and its main task is to mirror our client (repeat key words from what they said so that they become aware of it themselves) and guide the conversation. I would also have a lot of examples of this, which we could put in this format: {"prompt": "xxx", "completion": "yyy"}
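For reference, that training file is JSONL: one {"prompt": ..., "completion": ...} object per line. A minimal sketch of turning (client utterance, companion reply) pairs from a transcript into that format; the example pair below is invented to illustrate the mirroring behaviour described above:

```python
import json

def to_jsonl(pairs):
    # pairs: list of (prompt, completion) tuples taken from transcripts.
    # The legacy fine-tuning format expects one JSON object per line.
    return "\n".join(
        json.dumps({"prompt": prompt, "completion": completion})
        for prompt, completion in pairs
    )

pairs = [
    ("I argued with my colleague again.",
     "You argued with your colleague again. What happened this time?"),
]
print(to_jsonl(pairs))
```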
You mean it would be better if we embed the transcribed conversations? But the AI is not supposed to answer questions; it is supposed to work with the client.
Thanks!
It’s a super interesting and really important area of study, this is all totally new, you know as much as anyone else, probably more! Feel free to make use of the minds and support from this forum and ask whatever questions you think will help you. I can’t guarantee answers that will make you smile, but I can promise you everyone will make you think!
Thanks for your reply, yes it is a new topic, true.
Do you know if it is possible to train several models individually and then use them together? So that first one model operates, then the second model takes over, but the conversation continues in the same chat.
It sounds like a reasonable proposal. Time to think about how you trigger one AI to step in when the other finishes, what those triggers look like, how to make them repeatable, that sort of thing.
On a technical level there is nothing stopping you from making calls to differently trained models from within the same application.
Suppose I train the AI on the respective datasets. Could I specify in the data that at the end of the respective stage, if the prompt is "…", the stage is over, so that the answer somehow triggers the second model to begin?
Would that be possible? Or can you think of an easier idea?
Yes, you can tell the AI that it should use a keyword at a certain stage, but I will say that the only model that would do that reliably over time is GPT-4, and that cannot be fine-tuned.
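To make the handoff concrete, here is one way the application layer could wire it up. Everything here is an assumption, not an OpenAI feature: the stage names come from the original post, the `[STAGE_DONE]` sentinel is a made-up trigger keyword the model would be instructed to emit, and `call_model` is a stub for the real API call to whichever model handles the current stage. The shared `history` list is what keeps it all in the same chat.

```python
# Sketch of a stage controller: each stage could have its own model,
# and a sentinel token in the reply hands the chat to the next stage.
STAGES = ["greeting", "find_project", "scene", "paraphrase", "implementation"]
TRIGGER = "[STAGE_DONE]"  # hypothetical keyword the model is told to emit

def run_turn(stage_index, history, user_msg, call_model):
    # call_model(stage_name, history) -> reply string; stub for the real API.
    history = history + [("user", user_msg)]
    reply = call_model(STAGES[stage_index], history)
    if TRIGGER in reply:
        # Strip the sentinel and hand the conversation to the next stage.
        reply = reply.replace(TRIGGER, "").strip()
        stage_index = min(stage_index + 1, len(STAGES) - 1)
    history = history + [("assistant", reply)]
    return stage_index, history

# Minimal usage with a stubbed model:
def fake_model(stage, history):
    return "Hello! [STAGE_DONE]" if stage == "greeting" else "Tell me more."

idx, hist = run_turn(0, [], "Hi", fake_model)
```

The sentinel never reaches the client, because it is stripped before the reply is appended to the shared history.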
Hello Johannes!
Do you have a hard criterion that you use when determining that fine-tuning a model is the only way forward? Put differently: even if you spend a lot of effort engineering prompts and pulling examples for your agent from your database, it would still very likely be faster and less costly than fine-tuning.
No, that was just what I heard and sounded good.
You think it’s better if I use the embedding function?
Can I then simply upload transcripts of entire conversations and it understands them? And would embedding also work with GPT-3.5 or 4?
I mean, @Foxalabs has brought up many good reasons why fine-tuning as suggested in your original post may be a challenging task, and from what I can see it's likely the wiser choice to first look into prompt engineering and some form of database retrieval (vector embeddings are just one possible solution) for few-shot learning and to get the correct context for the reply. If this approach does not yield the required results, you can still go ahead and fine-tune a model.
For your purposes and from my experience, my question would be: who will be the end-user of the product or service you're offering?
Is it your client’s end-users? Or is this for your client’s internal usage?
If the solution you're building is for your client's end-users, you may not need to do as much work to obtain the desired result as you and many others think.
In our experience, a lot of things can be solved right in the prompt. Check out how we are able to train a customer-facing, on-brand bot for clients in 60 seconds on ChatGPT Builder's YouTube account.
I think the way we're doing this is exactly the method OpenAI wants us to use. As a matter of fact, I'm certain I just saw a video where Greg Brockman mentioned one of our recent demo bots related to the "Preacher in your pocket" concept from back in December.
Take a look at the Semantic Kernel solution. It suggests "skills" plugins in prompts, where "skills" are just placeholders for a preprocessor that may call anything from a weather report to writing a poem with another model.
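The "skills as placeholders" idea can be sketched in a few lines of plain Python: a preprocessor scans the prompt for `{{name}}` markers and substitutes each with the output of a registered function. The marker syntax loosely mimics Semantic Kernel's templating, but this registry and the `weather` skill are made-up stand-ins, not the actual library API.

```python
import re

def expand_skills(prompt, skills):
    # Replace each {{name}} placeholder with the output of the matching skill;
    # unknown placeholders are left untouched.
    def run(match):
        name = match.group(1)
        return skills[name]() if name in skills else match.group(0)
    return re.sub(r"\{\{(\w+)\}\}", run, prompt)

skills = {"weather": lambda: "sunny, 22 C"}  # stand-in for a real API call
print(expand_skills("Today's weather: {{weather}}. Write a short poem about it.", skills))
```

In a real setup each skill function would wrap an external call (a weather API, another model, a database lookup) and the expanded prompt would then be sent to the main model.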