Can the system role be fixed? Is it possible to inject the content of the system message into the model itself?

messages = [
    {"role": "system", "content": "The report must be classified based on the established research classification system. <Enter institutional research classification system>"},
    {"role": "user", "content": "<Research report contents>"}
]

I am fine-tuning the gpt-3.5-turbo model. If I use that model, do I have to pass both the system and user messages on every call? Is there a way to bake the system content into the model? The system content is always the same, so input tokens are being wasted. I would like the model to classify the report within the classification system even if I only send the user message.

What I am trying to build is a classifier that takes part of a research report as input and determines which category it belongs to in a designated research classification system.

For example, there are classification systems such as family-low birth rate, family-care, labor-employment, and labor-career break.

If I leave out the system message, the default text “You are a helpful assistant” is used instead, so I don’t get the response I want.

So what I want is a model that classifies based on the classification system when only the user message is provided. Is that not possible? If not, is there a better way?
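To make the goal concrete, here is a minimal sketch of the request payload I would like to send after fine-tuning, with only the user turn and no system message. The model name is a placeholder for a hypothetical fine-tuned model ID, not a real one:

```python
def build_request(report_snippet: str) -> dict:
    """Build a chat request with only the user turn, no system message."""
    return {
        # Placeholder for a fine-tuned model ID (hypothetical).
        "model": "ft:gpt-3.5-turbo:my-org::placeholder",
        "messages": [
            {"role": "user", "content": report_snippet},
        ],
    }

request = build_request("<Research report contents>")
print(len(request["messages"]))  # → 1, only the user turn is sent
```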

The input tokens are not “wasted”; they are part of loading each independent call to a standalone, stateless model instance with the knowledge it needs to answer.

Within its context window, a language model must build an internal computed state by processing the prior prompt text before it can go on to produce follow-up text.

When a large amount of prompt and instruction is needed in large-scale dedicated use, time and money can be invested in fine-tuning a custom model’s behavior, though a fine-tuned model is also more expensive to use. Fine-tuning gives the AI more built-in capabilities, but it still needs input language to put it on the path to correctly satisfying a task.
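As a sketch of that approach, assuming the OpenAI chat fine-tuning JSONL format: repeat a short, fixed system message together with labeled examples in the training data, so the classification behavior is learned into the weights and the long rubric no longer has to be sent on every call. The category names below come from the question; the report excerpts are illustrative placeholders:

```python
import json

# Hypothetical labeled examples; categories taken from the question above.
labeled = [
    ("<report excerpt about childcare support>", "family-care"),
    ("<report excerpt about hiring trends>", "labor-employment"),
]

def to_training_example(report: str, category: str) -> dict:
    """One fine-tuning example: short system message, user input, label."""
    return {
        "messages": [
            {"role": "system", "content": "Classify the report."},
            {"role": "user", "content": report},
            {"role": "assistant", "content": category},
        ]
    }

# The fine-tuning API expects one JSON object per line (JSONL).
jsonl = "\n".join(json.dumps(to_training_example(r, c)) for r, c in labeled)
print(jsonl.count("\n") + 1)  # → 2 training lines
```

After fine-tuning on data like this, calls can use the same short system message (or the model may generalize without it, which should be verified empirically), which is where the token savings come from.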