Prompt Usage for Fine-Tuned Models

I’ve got a quick question regarding using prompts with fine-tuned models, and I’d appreciate your insights.

I’m curious to know if it’s possible to use a different prompt than the one used during fine-tuning to provide additional instructions and better control the response. Has anyone experimented with this, and if so, could you share your experiences or any best practices you’ve discovered?

For example, this is the prompt used during fine-tuning:
SYSTEM: “You are an expert in cleaning and formatting research papers.”
USER: “Your task is to clean the given text and format it for readability”

but when calling the fine-tuned model, I’m using:

SYSTEM: “You are an expert in cleaning and formatting research papers.”
USER: “Process the text below accurately. Retain all the original content, ensuring no summarizations or alterations are made.
The only exception is removing extraneous characters that do not contribute to the meaning or references within the text.
Take special note to maintain the integrity of all numbers and citations, as they are essential to the context.”
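For context, here is how one training pair like the one above would be encoded as a line in the chat-format JSONL training file. The assistant content is a placeholder I've added for illustration; in real training data it would be the actual cleaned, formatted paper text:

```python
import json

# One training example in the chat fine-tuning JSONL format.
# The assistant content below is a placeholder, not real training data.
example = {
    "messages": [
        {"role": "system",
         "content": "You are an expert in cleaning and formatting research papers."},
        {"role": "user",
         "content": "Your task is to clean the given text and format it for readability"},
        {"role": "assistant",
         "content": "<cleaned and formatted text goes here>"},
    ]
}

# Each line of the uploaded training file is one such JSON object.
jsonl_line = json.dumps(example)
```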

I’ve tried this and didn’t see any improvement in the responses. What I’m attempting with the different prompt is to perform the same task the model was trained for, but with additional instructions, to make the model’s responses more consistent. (Model used for training: gpt-3.5-turbo-1106; training loss: 0.0043. The only inference parameter I changed is the temperature, set to 0.2.)
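For reference, this is a sketch of the inference-time request described above, built as a plain payload dict. The fine-tuned model id is hypothetical, and the user instruction is abbreviated; with the official client the payload would be passed to `client.chat.completions.create(**payload)`:

```python
# Sketch of the inference call: the fine-tuned model id below is a
# hypothetical placeholder, and the user message is abbreviated.
payload = {
    "model": "ft:gpt-3.5-turbo-1106:my-org::example123",  # hypothetical id
    "temperature": 0.2,
    "messages": [
        {"role": "system",
         "content": "You are an expert in cleaning and formatting research papers."},
        {"role": "user",
         "content": ("Process the text below accurately. Retain all the "
                     "original content, ensuring no summarizations or "
                     "alterations are made. ...")},
    ],
}
# Sent with the official client as:
#   client.chat.completions.create(**payload)
```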

Thank you

Absolutely! That should be the whole idea:

Q: Tell me a story A: BARK BARK!
Q: Your favorite food? A: WOOF BARK!

With enough topical coverage, the AI should be able to infer the right stylistic outputs for in-between inputs, beyond just reproducing the same tokens it was trained on.

Q: Do you like cats? A: GRRR BARK!

That’s why an AI is also called an inference engine.


The reinforcement learning can either be ineffective, just right, or monotonous and unadaptive, depending on the depth of reweighting. By default, OpenAI sets their own learning parameters from the size of your training data, but some hyperparameters can be adjusted when you begin a fine-tune job by API call.
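As a sketch of that adjustment, here are the parameters for starting a fine-tune job with an explicit epoch count instead of OpenAI's auto-chosen value. The training file id is a hypothetical placeholder; with the official client this would be passed to `client.fine_tuning.jobs.create(**job_params)`:

```python
# Starting a fine-tune job with an explicit hyperparameter override.
# The training_file id is a hypothetical placeholder.
job_params = {
    "model": "gpt-3.5-turbo-1106",
    "training_file": "file-abc123",        # hypothetical uploaded JSONL file id
    "hyperparameters": {"n_epochs": 3},    # override the default "auto"
}
# Submitted with the official client as:
#   client.fine_tuning.jobs.create(**job_params)
```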

The caveat is that when you fine-tune gpt-3.5-turbo, it is not a blank slate. Already being a chat-tuned instruct model, gpt-3.5-turbo comes with pretrained tuning for every imaginable circumstance. How your combination of system message and inputs reweights the outputs then becomes more uncertain, requiring experimentation.

Fine-tuning now allows you to base a new fine-tuned model on an existing fine-tune. This lets you run more passes over the same or new training data without the full expense, reinforcing the weights the same way that specifying more passes (and more cost) with the n_epochs parameter by API would. That lets you watch the continued progression of the training-loss and validation-loss curves to see when your model may be well-cooked – or overfitted.
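A sketch of continuing from an existing fine-tune: you pass the fine-tuned model's id as the base model. Both the model id and the file ids below are hypothetical placeholders; the optional validation file is what lets you watch the validation-loss curve alongside training loss:

```python
# Basing a new fine-tune on an existing fine-tuned model.
# All ids below are hypothetical placeholders.
continue_params = {
    "model": "ft:gpt-3.5-turbo-1106:my-org::example123",  # existing fine-tune
    "training_file": "file-def456",     # same or new training data
    "validation_file": "file-ghi789",   # optional: to track validation loss
}
# Submitted with the official client as:
#   client.fine_tuning.jobs.create(**continue_params)
```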
