How can I use my fine-tuned model with openai.ChatCompletion? Will it accept fine-tuned models? And if it does, will it answer queries related to that document only?
sps
Welcome to the community, @mudumbasashank!
This is a very interesting question.
Currently, fine-tuning is supported only for base models, which are accessible only via the Completions endpoint; hence a fine-tuned model will also be accessible only via the Completions endpoint.
UPDATE: Fine-tuning on chat completion models is now available for gpt-3.5-turbo. Here are the docs.
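For anyone wondering what that looks like in code: a minimal sketch using the pre-1.0 openai Python library, where the fine-tuned model ID is a placeholder for whatever your fine-tuning job returns.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Placeholder ID -- replace with the fine-tuned model returned by your job
FINE_TUNED_MODEL = "ft:gpt-3.5-turbo-0613:my-org::abc123"

response = openai.ChatCompletion.create(
    model=FINE_TUNED_MODEL,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "How do the different types of drawings work?"},
    ],
)
print(response["choices"][0]["message"]["content"])

Note that fine-tuning changes how the model responds; it does not restrict it to your documents, so it will still answer general questions unless you prompt it otherwise.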
But that Completions endpoint wouldn't be like chatting, right? How can I have that chat experience with my own data?
sps
You can use gpt-3.5-turbo as the completion model and have a "chat" experience.
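As a rough sketch of what that "chat" experience looks like in practice (again assuming the pre-1.0 openai Python library): keep appending each turn to the messages list and resend the whole conversation on every request.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The conversation state lives client-side in this list
messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ")
    if not user_input:
        break
    messages.append({"role": "user", "content": user_input})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    reply = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    print("Assistant:", reply)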
Yasiji
Can you please elaborate more? I fine-tuned multiple models based on text-davinci using the resources that OpenAI provides, from structuring the data to having a prompt/completion model ready to use. Now I want to have this chat experience with my model. I know for a fact that I need to switch to the gpt-3.5-turbo model to achieve this, but I still don't understand the data-formatting aspect. In the documentation they say that for a chat model we have to use this formatting:
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who won the world series in 2020?"},
{"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
{"role": "user", "content": "Where was it played?"}
Here is where I get confused: I prepare my data using this syntax in a txt file, but when I upload it to the OpenAI CLI data-preparation tool, it tells me the data must be in prompt/completion syntax and that I need to remove the "role" keys, etc.
Am I doing this the right way, or is there something else?
sps
This is the structure of the messages parameter of a chat completion request; it isn't going to work for fine-tuning. Only prompt-completion pairs are supported.
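For reference, the legacy flow looked roughly like this; the file names, example pair, and base model below are only illustrative.

# train.jsonl -- one prompt/completion pair per line
{"prompt": "What is the capital of France? ->", "completion": " Paris\n"}

# check and clean the file with the CLI data-preparation tool
openai tools fine_tunes.prepare_data -f train.jsonl

# start a legacy fine-tune against a base completion model
openai api fine_tunes.create -t train_prepared.jsonl -m davinci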
UPDATE: Fine-tuning is now available for the gpt-3.5-turbo chat completion model. Here are the docs.
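For anyone landing here later: with gpt-3.5-turbo fine-tuning, each line of the training JSONL is a complete chat example under a messages key (roles included), not a prompt/completion pair. For example:

{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Who won the world series in 2020?"}, {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."}]}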
Yasiji
Thank you for your time answering this message, really appreciate it!
{"prompt": "How do the different types of drawings work? ->", "completion": " I do not know Please ask me on Personal loan\n"}
This is the JSONL. If I set max_tokens to 500, it uses all 500 tokens.
How should I stop the answer? For the example above, is the stop sequence \n?
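Assuming the completions in your training data all end with "\n", you can pass that as the stop sequence so generation halts there instead of running to max_tokens. A minimal sketch with the pre-1.0 openai Python library; the model name is a placeholder.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="YOUR_FINE_TUNED_MODEL",  # placeholder for your fine-tuned model ID
    prompt="How do the different types of drawings work? ->",
    max_tokens=500,
    stop=["\n"],  # stop at the end-of-completion marker used in training
)
print(response["choices"][0]["text"])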
Same finding here: you can only fine-tune prompt/completion models. Fine-tuning for chat completion is not available.
_j
In the four months since this topic concluded, fine-tuning of gpt-3.5-turbo has been made available. It is a model that must be accessed through the chat completions endpoint.