Can I fine-tune without specifying an answer through the "assistant" role?

I want to fine-tune a model so that it is more familiar with various notions from a certain product. This pretty much amounts to throwing a whole manual at the model.

The documentation uses the “system” / “user” / “assistant” format to illustrate training examples. But what if I fine-tune using only “system” prompts? For example:

system: “You should be able to answer questions about any of the following notions.”
system: “Text from the manual, first chunk”

system: “You should be able to answer questions about any of the following notions.”
system: “Text from the manual, N-th chunk”
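In JSONL terms, each training example I have in mind would look roughly like this (just a sketch; the chunk strings are placeholders):

```ts
// Sketch of the training records in question: two system messages,
// no user or assistant turn. Each record would become one JSONL line.
const instruction =
  "You should be able to answer questions about any of the following notions.";

const manualChunks: string[] = [
  "Text from the manual, first chunk",
  "Text from the manual, second chunk",
]; // placeholder chunks

const jsonl = manualChunks
  .map((chunk) =>
    JSON.stringify({
      messages: [
        { role: "system", content: instruction },
        { role: "system", content: chunk },
      ],
    }),
  )
  .join("\n");
```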

You are likely going to want to use RAG (retrieval-augmented generation) to pull in information about your product at query time.

The documentation specifies the intended outcomes from fine-tuning, and knowledge training isn’t one of them.

That said, I would love to see an experiment here.

The AI is a token-production machine, and the re-weighting from a fine-tune changes the likelihood of what it will produce.

If you were allowed to train with a null assistant role, there would be no correction signal aligning an output to an input, and no way of shaping that production. You’d essentially be teaching the model that after any input message, followed by the unseen “assistant” prompt where the AI is supposed to write, the correct response is an immediate stop token.

However, internally a transformer language model is just one continuous, open-ended context, so fine-tuning happens nevertheless. You can put whatever context you want to tune on into a completion model, which doesn’t enforce the message container either in training or in use. If you then supply half of that input to the tuned AI, it will infer from the fine-tune the rest it should write. If you have no completion to elicit, though, you are basically just teaching it language, which it already knows from a million times more training data than your fine-tune.
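As a rough illustration of the difference (a sketch; the field names follow the public chat and completion fine-tuning formats, and the content is made up):

```ts
// Chat fine-tuning: the assistant turn is the target the training loss
// is computed against.
const chatExample = {
  messages: [
    { role: "system", content: "You answer questions about the product manual." },
    { role: "user", content: "How do I reset the device?" },
    { role: "assistant", content: "Hold the reset button for ten seconds." },
  ],
};

// Completion-style fine-tuning: no message container, just text in / text out.
// Supplying the first part of a trained sequence at inference time lets the
// tuned model continue with the rest.
const completionExample = {
  prompt: "Manual, resetting the device:\n",
  completion: " Hold the reset button for ten seconds.",
};
```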

Hey mate,

Short answer. No.

You could create the system prompts, create a user prompt, submit the prompt, get the completion, and build a dataset that can be used to fine-tune your model. I am also very curious whether the strategy of providing the n-th chunk in the system prompt will produce the desired outcome…
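Something along those lines, if you go that route (a sketch assuming the openai Node SDK; the model name and prompts are placeholders):

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Ask the model a question about one chunk, then keep the (question, answer)
// pair as a chat-format training example (one JSONL line).
async function buildTrainingLine(chunk: string, question: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    messages: [
      { role: "system", content: `Answer using only this manual excerpt:\n${chunk}` },
      { role: "user", content: question },
    ],
  });
  const answer = response.choices[0].message.content ?? "";

  return JSON.stringify({
    messages: [
      { role: "system", content: "You answer questions about the product." },
      { role: "user", content: question },
      { role: "assistant", content: answer },
    ],
  });
}
```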

However, I highly recommend you just integrate the chat model with a vector database. This will let you query the relevant information from the manual. To do this you will need to chunk the manual into sections. VdBs are excellent, if you haven’t used them before.

Odds are you can get the same results you are aiming for with a VdB and skip the whole fine-tuning.

I play around with legislation, which is obviously very large, and this is the method I use to create memory from my chat.

If you don’t know how to work with a VdB, I have a dummy node.js class and I can walk you through setting one up on Pinecone.
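Roughly something like this (a minimal sketch assuming the openai and @pinecone-database/pinecone SDKs and an existing index; the index and model names are placeholders):

```ts
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

class ManualStore {
  private openai = new OpenAI();
  private index = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! })
    .index("product-manual"); // placeholder index name

  // Turn a piece of text into an embedding vector.
  private async embed(text: string): Promise<number[]> {
    const res = await this.openai.embeddings.create({
      model: "text-embedding-3-small", // placeholder embedding model
      input: text,
    });
    return res.data[0].embedding;
  }

  // Store one chunk of the manual under an id, keeping the text as metadata.
  async addChunk(id: string, text: string): Promise<void> {
    const values = await this.embed(text);
    await this.index.upsert([{ id, values, metadata: { text } }]);
  }

  // Return the text of the chunks most similar to a question.
  async search(question: string, topK = 3): Promise<string[]> {
    const vector = await this.embed(question);
    const res = await this.index.query({ vector, topK, includeMetadata: true });
    return (res.matches ?? []).map((m) => String(m.metadata?.text ?? ""));
  }
}
```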

Good Luck.

@jaydenclose1234 what is VdB? I only find some stuff related to graphical rendering (Voxel Database).

@zenz would you combine a RAG model with OpenAI, or just skip LLMs altogether?

_j thanks for the insights! :slight_smile:

@openai
Vector Database. (I don’t think I should use VdB, I’m just lazy at typing.)

I think @zenz and I are trying to suggest the same thing, though I’d assume he has a better grasp of the concepts from his use of language.

Here is a nice little article on RAG

If you just want to be able to search and query the manual and provide excerpts verbatim, you don’t need to integrate it with an LLM.

If you want it to understand different sections of the manual simultaneously and synthesise answers from them, I would use the LLM.
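For the second case, the flow is roughly: retrieve a few relevant excerpts, then let the chat model combine them (a sketch; retrieveExcerpts stands in for whatever search you use, and the model name is a placeholder):

```ts
import OpenAI from "openai";

const client = new OpenAI();

// Retrieve a few relevant excerpts (vector DB, keyword search, whatever),
// then ask the chat model to synthesise an answer from them.
async function answerFromManual(
  question: string,
  retrieveExcerpts: (q: string) => Promise<string[]>,
): Promise<string> {
  const excerpts = await retrieveExcerpts(question);
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    messages: [
      {
        role: "system",
        content:
          "Answer using only the manual excerpts below.\n\n" +
          excerpts.join("\n---\n"),
      },
      { role: "user", content: question },
    ],
  });
  return response.choices[0].message.content ?? "";
}
```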
