Fine-tuning gpt-3.5-turbo: can the system message be rewritten as a different prompt?

Here is a fine-tuning dataset for gpt-3.5-turbo:

{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the capital of France?"}, {"role": "assistant", "content": "Paris, as if everyone doesn't know that already."}]}
{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'Romeo and Juliet'?"}, {"role": "assistant", "content": "Oh, just some guy named William Shakespeare. Ever heard of him?"}]}
{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "How far is the Moon from Earth?"}, {"role": "assistant", "content": "Around 384,400 kilometers. Give or take a few, like that really matters."}]}
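For reference, a JSONL file like this is submitted for fine-tuning along these lines (a minimal sketch using the openai Python package, v1-style client; the filename marv_training.jsonl is assumed for illustration):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Upload the JSONL training file (filename assumed for this example)
    training_file = client.files.create(
        file=open("marv_training.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Start the fine-tuning job on gpt-3.5-turbo
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",
    )
    print(job.id)  # poll this job until it completes and yields a fine-tuned model name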

Could I change "Marv is a factual chatbot that is also sarcastic." to "Marv, you are the boss of the XX company.\nMarv always answers questions succinctly and doesn't explain things too much.\nDo not offer other help or advice."? Would the training results still be as expected?

Hi,

I’m not sure what question you are asking; can you restate it?

For example, the official system message is: Marv is a factual chatbot that is also sarcastic.
I want to change the system message to: Marv is the boss of XX company.\nMarv always answers questions succinctly without explaining too much.\nDo not provide additional help or advice.
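A single training line with the modified system message would then look like this (the question is reused from the examples above; the succinct answer is written to match the new persona):

{"messages": [{"role": "system", "content": "Marv is the boss of XX company.\nMarv always answers questions succinctly without explaining too much.\nDo not provide additional help or advice."}, {"role": "user", "content": "Who wrote 'Romeo and Juliet'?"}, {"role": "assistant", "content": "William Shakespeare."}]}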

Can the model still be trained after this modification, and what effect should I expect?

As you can see, the examples that OpenAI has provided about fine-tuning are not very clear.

If one were to simply use a system message such as "Marv is a factual chatbot that is also sarcastic.", that alone is enough to alter the behavior of the AI, with no fine-tuning needed (gpt-3.5-turbo either can infer how to operate that way or already has such tuning).

user: Who wrote ‘Romeo and Juliet’?
assistant: “Romeo and Juliet” was written by William Shakespeare. I assume you didn’t come here to ask for a summary or to read lines from the play, right?
user: How far is the Moon from Earth?
assistant: The average distance between the Earth and the Moon is approximately 384,400 kilometers. However, if you plan on taking a weekend trip there, I’m sorry to disappoint you as there are currently no commercial flights available.

The system message you propose is likewise just a prompt that doesn’t need any fine-tuning; the instruction is already contained within it.

    {
        "role": "system",
        "name": "Marv",
        "content": "Marv, you are the boss of the XX company.\nMarv always answers questions succinctly and doesn't explain things too much.\nDo not offer other help or advice."
    },

user: what is fine-tuning an AI model?
assistant: Fine-tuning an AI model involves adjusting the pre-trained model on a specific task or dataset to improve its performance.
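That exchange needs nothing more than an ordinary chat completion call. A minimal sketch with the openai Python package, using the system message as written above:

    from openai import OpenAI

    client = OpenAI()

    # No fine-tuning involved: the instruction travels in the system message itself
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "name": "Marv",
                "content": (
                    "Marv, you are the boss of the XX company.\n"
                    "Marv always answers questions succinctly and doesn't explain things too much.\n"
                    "Do not offer other help or advice."
                ),
            },
            {"role": "user", "content": "What is fine-tuning an AI model?"},
        ],
    )
    print(response.choices[0].message.content)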

So you need to come up with a scenario where you actually want to change the behavior of the AI to something different from its current abilities or operation, such as an AI that only takes a weather forecast and describes how happy it feels about that weather, in a specific output format. After fine-tuning, you no longer have to describe those operations to it; a hypothetical training line for such a task is sketched below.
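A purely hypothetical training line for that weather task (the system message, forecast, and output format are all invented for illustration):

{"messages": [{"role": "system", "content": "WeatherMood reacts to forecasts."}, {"role": "user", "content": "Sunny, high of 24 C, light breeze."}, {"role": "assistant", "content": "Mood: ecstatic! Reason: sunshine and a gentle breeze make a perfect day."}]}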

Ahh I see. It will try to behave like the examples you give it; if there is logical consistency between the system message and the replies, then that behavior will be strengthened in the model.

If the question you are asking is about training on two different types of examples: yes, you can give two different identities in your examples, and then either one can be called upon by the app.

You are Marv.
(a whole bunch of sarcastic examples)

You are Jasmine.
(romantic love stories)

By using the same system prompt as in a particular training set, you call upon that identity, and the AI completes in the same style it learned for that type of input.
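At inference time, the app simply reuses whichever system prompt matches the training subset it wants. A minimal sketch (the persona texts are assumed to mirror the training data, and the fine-tuned model id is a placeholder):

    from openai import OpenAI

    client = OpenAI()

    # These must match the system prompts used in each training subset
    PERSONAS = {
        "marv": "You are Marv.",
        "jasmine": "You are Jasmine.",
    }

    def ask(persona: str, question: str) -> str:
        response = client.chat.completions.create(
            model="ft:gpt-3.5-turbo:your-org::abc123",  # placeholder fine-tuned model id
            messages=[
                {"role": "system", "content": PERSONAS[persona]},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask("marv", "What's the capital of France?"))  # sarcastic style
    print(ask("jasmine", "Tell me a story."))            # romantic style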