Saving tokens on system prompts

I want to know if there is a way to fine-tune a GPT-3.5 model so that I do not need to provide the system prompt every time, because it is the same in my use case. This system prompt is very large, so it uses many tokens, and I want to save tokens on it. I also want to save tokens during fine-tuning, as it appears I have to write the system prompt for each example message in the dataset.


You can remove the system prompt from each training message if you wish, but you will get better performance if you include the same system prompt that will be used in your usual API calls.

Perform some tests with a limited dataset to evaluate the performance and see if it meets your standards.
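To illustrate, here is a minimal sketch of building a chat-format fine-tuning JSONL file where the shared system prompt is repeated in every record. The prompt text and examples are placeholders, and the `include_system` flag lets you test both variants against a limited dataset as suggested above:

```python
import json

# Hypothetical shared system prompt (placeholder for your large prompt).
SYSTEM_PROMPT = "You are a support assistant for ExampleCo. Follow the policies below..."

# Placeholder (user, assistant) example pairs.
examples = [
    ("How do I reset my password?", "Go to Settings > Security and click 'Reset password'."),
    ("What is your refund window?", "Refund requests are accepted within 30 days of purchase."),
]

def build_training_line(user_msg, assistant_msg, include_system=True):
    """Build one chat-format fine-tuning record as a JSON string."""
    messages = []
    if include_system:
        messages.append({"role": "system", "content": SYSTEM_PROMPT})
    messages.append({"role": "user", "content": user_msg})
    messages.append({"role": "assistant", "content": assistant_msg})
    return json.dumps({"messages": messages})

# Write one JSON object per line (the JSONL format used for chat fine-tuning).
with open("train.jsonl", "w") as f:
    for user_msg, assistant_msg in examples:
        f.write(build_training_line(user_msg, assistant_msg) + "\n")
```

Training with the system prompt included means you must also send it at inference time, so it does not save tokens per request; dropping it saves tokens but, as noted above, usually costs some performance.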


Thanks for your reply. I am thinking of this system prompt as a knowledge base for the model to use while responding to the user; that is why it may be longer in my case. What are the ways to teach the model my context-specific knowledge? I am trying to teach it a specific way to respond instead of showing a bunch of examples.

If you have many examples of how to respond to a given question, then you can try fine-tuning. However, if your context is large and factual rather than a "style", embeddings will be the better choice; if your context is small, inclusion in the prompt itself is simplest.
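The embeddings approach above can be sketched as retrieval by cosine similarity: embed the knowledge-base chunks once, embed each query, and include only the most similar chunks in the prompt. The vectors below are toy placeholders; in practice you would compute them with an embeddings model:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy pre-computed embeddings for knowledge-base chunks (placeholders;
# real embeddings would come from an embeddings model).
knowledge_base = {
    "Refunds are accepted within 30 days.": [0.9, 0.1, 0.0],
    "Passwords can be reset from the Settings page.": [0.1, 0.9, 0.1],
}

def retrieve(query_embedding, top_k=1):
    """Return the top_k chunks most similar to the query embedding."""
    ranked = sorted(
        knowledge_base.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:top_k]]

# Build a prompt that contains only the retrieved context, not the
# whole knowledge base -- this is where the token savings come from.
context = retrieve([0.8, 0.2, 0.0])  # a query embedding near the refunds chunk
prompt = "Answer using this context:\n" + "\n".join(context)
```

This way each request carries only the relevant slice of your knowledge base instead of the entire large system prompt.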
