I’ve got a quick question regarding using prompts with fine-tuned models, and I’d appreciate your insights.
I’m curious to know if it’s possible to use a different prompt than the one used during fine-tuning to provide additional instructions and better control the response. Has anyone experimented with this, and if so, could you share your experiences or any best practices you’ve discovered?
For example, this is the prompt used during fine-tuning:
SYSTEM: “You are an expert in cleaning and formatting research papers.”
USER: “Your task is to clean the given text and format it for readability”
but when calling the fine-tuned model, I’m using:
SYSTEM: “You are an expert in cleaning and formatting research papers.”
USER: “Process the text below accurately. Retain all the original content, ensuring no summarizations or alterations are made.
The only exception is removing extraneous characters that do not contribute to the meaning or references within the text.
Take special note to maintain the integrity of all numbers and citations, as they are essential to the context.”
I’ve tried this and didn’t see any improvement in the responses. What I’m attempting with the different prompt is the same task the model was trained on, but with additional instructions included to make the responses more consistent. (Base model for fine-tuning: gpt-3.5-turbo-1106; training loss: 0.0043. The only inference parameter I changed is the temperature, set to 0.2.)
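For concreteness, here is a minimal sketch of how I’m assembling the request: the same system prompt the model was fine-tuned with, but a longer user instruction at inference time. The model ID and the `build_request` helper are placeholders for illustration, not the actual values from my setup; the resulting dict is what gets passed to the chat-completions endpoint.

```python
# Sketch: calling a fine-tuned model with a user prompt that differs from
# the one used during fine-tuning. Model ID below is a hypothetical example.

SYSTEM_PROMPT = "You are an expert in cleaning and formatting research papers."

# Longer inference-time instructions (training used a shorter user prompt).
INFERENCE_INSTRUCTIONS = (
    "Process the text below accurately. Retain all the original content, "
    "ensuring no summarizations or alterations are made. The only exception "
    "is removing extraneous characters that do not contribute to the meaning "
    "or references within the text. Take special note to maintain the "
    "integrity of all numbers and citations."
)

def build_request(text: str,
                  model: str = "ft:gpt-3.5-turbo-1106:my-org::example",
                  temperature: float = 0.2) -> dict:
    """Assemble a chat-completions payload for the fine-tuned model."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            # Inference-time instructions prepended to the paper text.
            {"role": "user", "content": f"{INFERENCE_INSTRUCTIONS}\n\n{text}"},
        ],
    }

payload = build_request("Some raw paper text…")
```

The payload would then be sent via the usual client call (e.g. `client.chat.completions.create(**payload)` with the official `openai` package).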
Thank you