It would be helpful to share more details — the code, model config, and any other prompts used — so someone can assist here (and even reproduce what you're seeing).
I've noticed repeat patterns in my own chats in two ways, described below:
The model is working with me to create a timeline of events for active recall. It performs "memory reconstruction," often repeating the inputs I give it with more structure. (I noticed this behaves unexpectedly with older models.) Newer models like 4o and 4.5 are better at re-aligning themselves. All you have to do is tell it to stop reflecting you. You can tell it what you do and don't like about its responses, and it will adjust to improve its coherence with you.
Sometimes our conversation hits a "grey area" or an "under-defined by policy" area. It will not refuse to respond to my question; rather, it will repeat its previous output from a different question. Instead of asking again, I usually ask why it repeated itself. From there, I ask it to rewrite my question (pasted in) as a prompt that's ethically aligned with a research focus.
General Note
ChatGPT will not duplicate your responses verbatim unless it's prompted to do so or you've given it custom instructions. When it does, it's usually because it interprets mal-intent from the prompt, causing it to give you "attitude" lol. Another possibility is giving it a name, or telling it to name itself. Saying please and thank you may help too → that feedback can feed into tuning the models through reinforcement learning.