Hello OpenAI team,
I am currently running GPT-OSS:20B in an Ollama container on a Debian system. Is there a way to set system instructions or a code of conduct for the model persistently, so that it follows them automatically without these instructions having to be supplied on every single API call?
From my understanding, I can include system instructions in each API request, but I am looking for a solution where I don’t have to repeat them every time. Is it possible to configure GPT-OSS:20B so that it “remembers” or internalizes these instructions? If so, could you please explain how this can be achieved?
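To illustrate what I mean by repeating the instructions, here is a minimal sketch of the per-request approach I am using today. It follows Ollama's /api/chat request schema (model name, a messages array with role/content entries); the conduct text itself is just a placeholder:

```python
import json

# Placeholder code-of-conduct text; this has to be re-sent with every request.
SYSTEM_PROMPT = "Always follow the team code of conduct."

def build_chat_request(user_message: str) -> str:
    """Build the JSON body for one Ollama /api/chat call.

    The system message must be included again on each call; nothing
    persists between requests on the server side.
    """
    payload = {
        "model": "gpt-oss:20b",
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
        "stream": False,
    }
    return json.dumps(payload)

# Each new conversation repeats the same system prompt:
body = build_chat_request("Hello")
print(body)
```

This is exactly the repetition I would like to avoid by configuring the instructions once, persistently.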
Thank you very much for your help!