Dear all,
Junior AI tamer here.
I hope someone can help me with this. I started fine-tuning a model based on the API documentation, and I was really happy to see that the fine-tuning process works as intended.
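Roughly what I did, as a minimal sketch (the file name, prompts, and completions are made-up placeholders, not my real data, and I'm assuming the legacy prompt/completion flow from the fine-tuning docs):

```python
import openai

openai.api_key = "sk-..."  # real key redacted

# training_data.jsonl holds 4 lines shaped like this (contents made up for illustration):
# {"prompt": "What are your opening hours? ->", "completion": " We are open 9-17 on weekdays. END"}

# Upload the training file
uploaded = openai.File.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job on a fine-tunable base model
job = openai.FineTune.create(
    training_file=uploaded.id,
    model="davinci",
)
print(job.id)
```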
I used only a short .jsonl file, with just 4 lines. My issues:
- In the Playground, if I send a prompt that is in the training data, the fine-tuned model answers it properly, but it also hallucinates another 5-6 lines of text that I never provided. Why is this?
- Should I add more training data? In the docs, the example .jsonl was at least 100 lines long. Can I not expect proper behaviour below that number?
- Can I somehow limit the number of tokens in the answer? Maybe if I cap the token count, the model will answer only what's needed and skip the hallucinations. A sketch of what I mean follows this list.
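This is the kind of token cap I have in mind (a sketch, not something I've confirmed works; the model name is a placeholder for my fine-tuned model, and `max_tokens` / `stop` are the parameters I found in the completions docs):

```python
import openai

response = openai.Completion.create(
    model="davinci:ft-personal-2023-04-01-12-00-00",  # placeholder for my fine-tuned model
    prompt="What are your opening hours? ->",
    max_tokens=30,   # hard cap on the length of the generated answer
    temperature=0,   # keep the output deterministic while testing
    stop=[" END"],   # stop sequence, assuming I train the completions to end with it
)
print(response.choices[0].text)
```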
A few other questions, if I may:
- How can you give a fine-tuned model a “personality”, like you do in ChatGPT when you say “Act like …”? (See the example after this list.)
- Which models have the best price-to-performance ratio?
- Can you fine-tune a gpt-3.5 model?
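To show what I mean by “personality”: in ChatGPT I just type “Act like …” as an instruction, and with the chat API it would look something like this (hypothetical example, with made-up message contents):

```python
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message is what gives the assistant its "personality"
        {"role": "system", "content": "Act like a friendly customer support agent."},
        {"role": "user", "content": "What are your opening hours?"},
    ],
)
print(response.choices[0].message.content)
```

Is there an equivalent way to bake that into a fine-tuned model?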
Thanks!