While I fully support @dignity_for_all’s suggestion, I also recall a similar question from a user some time ago. At the time, I shared a few ideas for the training data composition. I never heard back from that user, so I can’t confirm whether the approach ended up being successful. For what it’s worth, I’m sharing the link to the post anyway: