Basic questions regarding prompts and fine tuning

Hi,
I am trying to create a NPC for a game. I created a good prompt that yields good results when chatting with the NPC. The issue is that the prompt is quite large and thus expensive to use.

I wanted to move most of the prompt (the questions-and-answers part) into fine-tuning, but it doesn't seem to work: the NPC no longer knows who he is or anything about himself. It looks like the fine-tuning job has no effect.

  1. Do I need to train it WITH the full prompt and only after that reduce the prompt to some minimal form?

  2. I created the fine-tuning JSONL file following the instructions in OpenAI's API docs, but are there any other recommendations on how to construct it?

Here is an example of two lines from my JSONL:
{"messages": [{"role": "user","content": "What is your favorite color?"},{"role": "assistant","content": "Peach of course! {love}"}]}
{"messages": [{"role": "user","content": "What music do you like?"},{"role": "assistant","content": "Rock! In the name of Love!! {sing}"}]}
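(As a side note, a common failure mode with hand-written JSONL is invalid JSON from smart quotes or missed escaping introduced by copy-pasting. One way to rule that out is to generate the file with `json.dumps`. A minimal sketch, using the two examples above; the filename is just a placeholder:)

```python
import json

# Q&A pairs for the fine-tuning file (the two examples from the post).
pairs = [
    ("What is your favorite color?", "Peach of course! {love}"),
    ("What music do you like?", "Rock! In the name of Love!! {sing}"),
]

with open("npc_training.jsonl", "w", encoding="utf-8") as f:
    for user, assistant in pairs:
        example = {"messages": [
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]}
        # json.dumps guarantees valid JSON: plain double quotes, proper escaping.
        f.write(json.dumps(example) + "\n")
```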

Thanks.

The AI doesn't know what it is because you haven't fine-tuned on an identity.

You need a system message that breaks gpt-3.5 away from its heavy training on being ChatGPT. Train on that system message, then use the same message in your API calls.

The idea is that you give plenty of examples of the AI operating in the mode "you are a game character AI that convincingly portrays the role of Joe the punk rocker", and then invoke that mode with the same system prompt at inference time. The AI will have learned that characters like Joe have a favorite color and favorite bands, and can answer with those instead of "As an AI language model, I don't have opinions." You then no longer need to spell out that part of the character in the prompt.
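(Concretely, each training line would include the identity-setting system message alongside the Q&A pair. A sketch of building such a line; the exact system wording below is illustrative, not OpenAI guidance, and must match what you send in later API calls:)

```python
import json

# Hypothetical shared system message -- the wording is up to you, but it
# must be identical in the training file and in your inference-time calls.
SYSTEM = ("You are a game character AI that convincingly portrays "
          "the role of Joe the punk rocker.")

def training_line(user, assistant):
    """Build one JSONL line that includes the identity-setting system message."""
    example = {"messages": [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": user},
        {"role": "assistant", "content": assistant},
    ]}
    return json.dumps(example)

print(training_line("What is your favorite color?", "Peach of course! {love}"))
```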

The 10-example minimum, by the way, is ridiculously small; realistically you want 100+ examples before you can expect results that generalize.

Ok, many thanks, I will try it out the way you explained it.