Fine-tuned GPT-4o-mini model seems to remember previous prompts

Hey there, I fine-tuned a gpt-4o-mini-2024-07-18 model on my custom dataset.

I’ve been running inferences for a while, and the model seems to retain information from previous inputs, adding content to new responses, seemingly from its memory, that I never mentioned in the current prompt.

For example:

  1. The first prompt was about rewriting information about Coronary Artery Bypass Graft (CABG)

  2. Even though the next few prompts were unrelated to CABG, the model inserted an entire passage about CABG, like:
    Coronary artery bypass grafting, also referred to as bypass surgery, is performed to improve blood flow to the heart. Here are some advantages of this surgical procedure:

Note: I double-checked, and the repeated text wasn’t even something it could have over-fitted to from the training dataset; it just seems to remember it from the earlier prompt.
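For reference, each inference is a separate, stateless call, roughly like this sketch (assuming the standard openai Python SDK; the fine-tuned model ID and prompts here are placeholders, not my real ones):

```python
# Placeholder fine-tuned model ID (redacted), not my actual one.
MODEL = "ft:gpt-4o-mini-2024-07-18:my-org::placeholder"

def build_messages(prompt: str) -> list[dict]:
    # A fresh messages list is built per request -- no earlier turns are
    # included, so the API itself should have no conversation history.
    return [{"role": "user", "content": prompt}]

# Two unrelated prompts, sent as fully independent requests:
first = build_messages("Rewrite this patient information about CABG: ...")
second = build_messages("Summarize this note about an unrelated topic: ...")

# The actual call (requires `pip install openai` and an API key):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(model=MODEL, messages=second)
# print(resp.choices[0].message.content)
```

So as far as I can tell, the second request carries nothing about CABG in its payload, yet CABG content still shows up in the output.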

Is this normal, and how can I fix it? I’ve been using the model for quite some time, but I only started seeing this today, across different scenarios. It’s super weird.