Inconsistent Output from a Fine-Tuned GPT-3.5 Model

I am running into inconsistent outputs from a fine-tuned GPT-3.5 model. Below are the details of the problem:

Problem Description:
I have fine-tuned a GPT-3.5 model on a custom dataset. While the base model produces consistent, expected responses, the fine-tuned model produces variable outputs for the same input prompts, and these outputs sometimes deviate significantly from the intended behavior.

Base Model: GPT-3.5
Fine-Tuned Model: Custom fine-tuned version of GPT-3.5
Steps to Reproduce:

1. Fine-tuned the model on a dataset of 10,000 examples with the following parameters (a sketch of the job-creation call follows this list):
   Learning Rate: 5e-5
   Batch Size: 16
   Epochs: 3

2. Used the fine-tuned model to generate responses to a set of predefined prompts (a sketch of the request also follows this list).
3. Observed that the responses vary greatly and often do not align with the intent of the fine-tuning data.
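
For step 1, a minimal sketch of the job-creation call, assuming the OpenAI Python SDK (the training file ID is a placeholder; note that the fine-tuning API takes a learning-rate multiplier rather than an absolute rate such as 5e-5, so the multiplier below is only illustrative):

```python
from openai import OpenAI

client = OpenAI()

# Create the fine-tuning job on gpt-3.5-turbo with the hyperparameters listed above.
job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo",
    training_file="file-abc123",  # placeholder ID of the uploaded JSONL training file
    hyperparameters={
        "n_epochs": 3,
        "batch_size": 16,
        # The API expects a multiplier of its default learning rate,
        # not an absolute value like 5e-5; this number is a placeholder.
        "learning_rate_multiplier": 2,
    },
)
print(job.id, job.status)
```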
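
For steps 2 and 3, a sketch of the kind of request used to query the fine-tuned model (the model ID is a placeholder). The temperature and seed lines illustrate settings that reduce sampling-related variance; they are not necessarily what was used in the original runs:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo:my-org::abc123",  # placeholder fine-tuned model ID
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "One of the predefined test prompts."},
    ],
    temperature=0,  # pin sampling temperature to reduce run-to-run variance
    seed=42,        # best-effort determinism supported by the Chat Completions API
)
print(response.choices[0].message.content)
```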

I have ensured that the training data is clean and well-formatted (a sketch of the format check is below). I would appreciate any guidance on resolving this issue or improving the model's consistency.
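
The format check was roughly along these lines (assuming the chat fine-tuning JSONL format, one JSON object with a "messages" list per line; the file path is a placeholder):

```python
import json

# Placeholder path to the training file that was uploaded for fine-tuning.
TRAIN_FILE = "train.jsonl"
VALID_ROLES = {"system", "user", "assistant"}

with open(TRAIN_FILE, encoding="utf-8") as f:
    for i, line in enumerate(f, start=1):
        example = json.loads(line)  # raises if the line is not valid JSON
        messages = example.get("messages")
        assert isinstance(messages, list) and messages, f"line {i}: missing 'messages' list"
        for m in messages:
            assert m.get("role") in VALID_ROLES, f"line {i}: unexpected role {m.get('role')!r}"
            assert isinstance(m.get("content"), str), f"line {i}: 'content' must be a string"
        # Each example needs at least one assistant message to learn from.
        assert any(m["role"] == "assistant" for m in messages), f"line {i}: no assistant message"

print("format check passed")
```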
