I am getting the same prompt back as the response

I am encountering an issue where the assistant echoes my prompt back instead of providing an actual response.

Specifically, I am using the “user” role to prompt the assistant, but the response returned is identical to my input message.

Example:

Prompt sent: “What is the capital of France?”

Response received: “What is the capital of France?”

I would like to understand:

Whether this is expected behavior under any specific configuration.

If I need to set any specific stop sequences or configuration parameters to resolve this.

Can someone kindly assist me with resolving this?

Thanks


Can you provide more information: which model? How are you calling it? How are you reading the result?

It would be helpful to share more details, including the code, model config, and any other prompts used, so that someone can assist here (and even reproduce what you’re seeing).
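In the meantime, one common cause of this symptom is accidentally printing your own request message instead of the model’s reply. As a rough sketch (the payload below is hard-coded for illustration, but its field names follow the Chat Completions JSON shape), the difference looks like this:

```python
# A Chat Completions-style response payload, hard-coded here for illustration;
# in real code this would come from the API client.
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "The capital of France is Paris."}}
    ],
}

# The messages you sent in the request.
request_messages = [
    {"role": "user", "content": "What is the capital of France?"}
]

# Bug: reading back your own input just echoes the prompt.
echoed = request_messages[-1]["content"]

# Fix: read the assistant's reply out of the response object instead.
reply = response["choices"][0]["message"]["content"]

print(reply)  # The capital of France is Paris.
```

If your code prints something like `echoed` above rather than `reply`, you’d see exactly the behavior you describe regardless of model or stop sequences.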


I didn’t even get good-quality repetition from gpt-4o when I asked for it; let me know the secret!

Would be curious to see results with 4o-mini, since this topic is tagged as such.

How about 100 results? top_p: 0.2

| Model | Trials | Avg Latency (s) | Avg Rate (tokens/s) |
|---|---|---|---|
| gpt-4o-mini | 100 | 1.492 | 4.073 |

Unique responses for gpt-4o-mini (by first 60 chars):

100 | The capital of France is Paris.
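For anyone curious, the aggregation above can be sketched roughly like this (the trial records here are stand-in data; in the real run each tuple would come from timing an actual API call):

```python
from collections import Counter

# Stand-in trial records: (latency in seconds, completion tokens, response text).
trials = [
    (1.5, 6, "The capital of France is Paris."),
    (1.4, 6, "The capital of France is Paris."),
    (1.6, 6, "The capital of France is Paris."),
]

avg_latency = sum(lat for lat, _, _ in trials) / len(trials)
avg_rate = sum(tok / lat for lat, tok, _ in trials) / len(trials)

# Bucket unique responses by their first 60 characters.
unique = Counter(text[:60] for _, _, text in trials)
for text, count in unique.most_common():
    print(f"{count} | {text}")
```

With top_p this low, all trials collapse into a single bucket, which is why the table shows one unique response for all 100 runs.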


I’ve noticed repetition patterns in my own chats in the two ways described below:

  1. The model is working with me to create a timeline of events for active recall. It performs “memory reconstruction”, often repeating the inputs I give it with more structure. (I noticed this behaves unexpectedly with older models.) Newer models like 4o and 4.5 are better at re-aligning themselves. All you have to do is tell it to stop reflecting you. You can tell it what you do and don’t like about its responses, and it will adjust to improve its coherence with you.
  2. Sometimes our conversation hits a “grey area” or an “under-defined by policy” area. It will not refuse to respond to my question; rather, it will repeat its previous output from a different question. Instead of asking again, I usually ask why it repeated itself. From there, I ask it to give me a prompt for my question (pasting the question) in a way that’s ethically aligned with a research focus.

General Note

ChatGPT will not duplicate your responses verbatim unless it’s prompted to do so or you’ve given it custom instructions. It usually does this because it interprets malicious intent from the prompt, causing it to give you “attitude” lol. Another possibility is giving it a name, or telling it to name itself. Saying please and thank you helps tune the models through reinforcement learning.
