I have a question related to fine-tuning a model. If I fine-tune a model and my fine-tuning training data looks like this:
"user: Identify whether the following is an objection or not: 'I am not interested in buying a car'
assistant: Yes, it's an objection raised by the customer during the purchase of a car"
Now I am confused: after fine-tuning, will I have to give the exact prompt "Identify whether the following is an objection or not: 'I am not interested in buying a car'"?
Will the fine-tuned model work if I give the prompt in the following way?
Summarize the text given below. Also identify whether the following is an objection or not.
"I am not interested in buying a car"
Text given: "You're welcome! If you have any further questions or need additional information, feel free to reach out. We look forward to helping you find the perfect car. This conversation is a basic example and can be adjusted based on the specific context, customer preferences, and the seller's approach. It's important to be attentive, address customer needs, and guide them through the purchasing process."
When fine-tuning a model, the prompts you use during training can have a significant impact on the model’s performance. The model learns to respond based on the prompts and responses it sees during training.
If you change the prompt significantly when using the fine-tuned model, it may not perform as expected. In your case, you’ve trained the model with the prompt “Identify whether the following is an objection or not”.
If you change the prompt to “Summarize the text given below. Also identify whether the following is an objection or not”, the model might not perform as well because it hasn’t been trained on this specific prompt.
However, there are a few strategies you can use to improve the model’s performance:
- Include the original prompt in every training example: OpenAI recommends including the set of instructions and prompts that worked best for the model prior to fine-tuning in every training example. This can help you achieve the best and most general results, especially if you have relatively few training examples [source (https://platform.openai.com/docs/guides/fine-tuning)].
- Use separator sequences: If you have enough training examples, you can fine-tune a custom model without instructions. However, it can be helpful to include separator sequences (e.g., `->` or `###` or any string that doesn't commonly appear in your inputs) to tell the model when the prompt has ended and the output should begin [source (openai-cookbook/articles/how_to_work_with_large_language_models.md at main · openai/openai-cookbook · GitHub)].
- Use the same prompt as part of the system message: If you want to replicate a specific behavior in the fine-tuned model, you might need to use the same prompt again as part of the system message [source (System dialog box remains empty)].
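The first two points above can be sketched as a small script that writes chat-format training examples (JSONL, one JSON object per line) with a fixed instruction reused in every example and a `###` separator. The instruction wording, file name, and samples here are placeholders, not the exact format your project needs:

```python
import json

# A fixed instruction repeated in every example, as the fine-tuning
# guide recommends (placeholder wording).
INSTRUCTION = "Identify whether the following is an objection or not."
SEPARATOR = "###"  # marks where the user input ends

samples = [
    ("I am not interested in buying a car", "Yes, this is an objection."),
    ("Can you tell me the mileage?", "No, this is not an objection."),
]

def to_training_example(text, label):
    # Chat fine-tuning format: a "messages" list with system,
    # user, and assistant turns.
    return {
        "messages": [
            {"role": "system", "content": INSTRUCTION},
            {"role": "user", "content": f"{text}\n{SEPARATOR}"},
            {"role": "assistant", "content": label},
        ]
    }

with open("objections_train.jsonl", "w") as f:
    for text, label in samples:
        f.write(json.dumps(to_training_example(text, label)) + "\n")
```

Because the same system message appears in every training example, you would then send that same system message at inference time to get the trained behavior.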
Remember, fine-tuning a model doesn't give it new knowledge; rather, it learns the writing style you give it. If you want it to learn new information, you might need to use other techniques, such as embeddings [source (Does fine tuning improve gpt3.5/4 retrieval speed?)]. Finally, if you find that the fine-tuned model is not performing as expected, you might need to retrain it with the correct data and prompts [source (Fine tuned with wrong data initially)].
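For the embedding route mentioned above, the usual pattern is to embed each objection in your list once, embed the incoming text, and flag a match when cosine similarity exceeds a threshold. A minimal sketch, where `get_embedding` is a stand-in stub for a real call to an embeddings endpoint (e.g. OpenAI's), and the toy vectors and threshold are made up for illustration:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def get_embedding(text):
    # Placeholder: in a real system, call your embedding model here
    # (e.g. the OpenAI embeddings endpoint) and cache the results.
    fake = {
        "i am not interested": [0.90, 0.10, 0.00],
        "i am out of time":    [0.10, 0.90, 0.10],
        "no time right now":   [0.15, 0.88, 0.05],
    }
    return fake[text.lower()]

# Embed the objection list once up front.
objection_list = ["I am not interested", "I am out of time"]
objection_vectors = [(o, get_embedding(o)) for o in objection_list]

def find_objection(text, threshold=0.8):
    # Return the closest known objection if it clears the threshold.
    vec = get_embedding(text)
    best_obj, best_vec = max(
        objection_vectors, key=lambda ov: cosine_similarity(vec, ov[1])
    )
    return best_obj if cosine_similarity(vec, best_vec) >= threshold else None
```

With real embeddings, "no time right now" would land near "I am out of time" in vector space, so phrasings that never appear verbatim in your 1,000-objection list can still be matched.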
I am confused about whether I can use embeddings and GPT-3.5 at the same time. I want to make a system that identifies objections in a given text. ChatGPT does identify some of them; for example, it identifies "the car is not good" as an objection, but if the text contains "I am out of time" it does not flag that as an objection. I have a dataset of around 1,000 objections, and I want GPT to check whether the text contains objections from those 1,000 objections or not. How can I do this?
Ah, you’re talking about Custom GPTs, not the API. I’ve moved your thread to the appropriate topic for you.
What are the instructions you’re using for your Custom GPT? It might just be bad wording there that’s not allowing it to catch “out of time” as an objection.
I want to use the API, but I want to check against my objection list. Or I want GPT to learn that the items in my list are also objections.