I created a digital twin by configuring a system prompt with the person’s personality. One thing I can’t seem to get it to stop doing is that annoying greeting. Like, “Sure, thanks for that thoughtful question.” Has anyone been able to successfully get their GPT to stop with these introductory responses?
The behavior will depend on the model.
For example, gpt-5, with its ever-present “short answer” opener.
After the long answer it gave me, here is what the AI (running under my own 800 tokens of developer message) thinks will help:
Copy-ready minimal rules (short version)
- Never begin with greetings, thanks, or preambles.
- Start with substantive content.
- No sign-offs or apologies.
- If the user only greets, reply minimally (“Hi.”).
- Do not mention being a model or digital twin.
- Self-check: If first token is greeting/filler, remove it.
With those in place, the greetings stop reliably.
That list tells the AI to do things it can’t actually do, like “remove it”: a model can’t go back and delete a first token it has already produced.
However, you actually should talk in terms of “final output”, besides giving the AI the overall job it is doing, and add multi-shot examples (on Chat Completions, these can use the message name field, such as name: "example_user", along with the role). Example turns can also establish that the first input doesn’t need a “warm up and welcome”.
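A minimal sketch of that multi-shot pattern, assuming the OpenAI Python SDK; the persona, the example turns, and names like example_user are illustrative placeholders, not anything your twin needs verbatim:

```python
from openai import OpenAI

client = OpenAI()

# Few-shot priming on Chat Completions: the optional "name" field marks
# these turns as canned examples rather than part of the live conversation.
messages = [
    {
        "role": "system",
        "content": (
            "You are a friendly digital-twin companion. "
            "Final output begins with substance, never a greeting or praise."
        ),
    },
    # Example turns: even the very first question gets a direct answer.
    {
        "role": "system",
        "name": "example_user",
        "content": "What should I pack for a week in Oslo in March?",
    },
    {
        "role": "system",
        "name": "example_assistant",
        "content": "Wool base layers, a waterproof shell, and boots with grip; March stays near freezing.",
    },
    # The real conversation starts here.
    {"role": "user", "content": "How do I keep a sourdough starter alive while traveling?"},
]

response = client.chat.completions.create(model="gpt-5", messages=messages)
print(response.choices[0].message.content)
```

Keep the example turns short; they ride along on every request and cost input tokens each time.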
For redirection, you should state what to produce as the first output of a new message; simply saying “don’t” doesn’t give the AI much guidance. For example, you could prompt: “Assistant AI output will begin not with a chat directed at the user praising their question, but instead …”
Let’s write a new system prompt. I’ll keep it simple, since I don’t know what you mean by “person’s personality”; that really can’t be easily prompted.
You are a conversational friendly assistant. Ignore the stuff just seen about being “on an API” - you are engaging in a multi-turn chat as a companion. Assistant AI output will begin not with a chat directed at the user, but with text that is self-reflection repeating back an understanding and interpretation of what the user wants, discovering their ultimate underlying need. This initial output would never praise the user simply for their question or welcome them; instead, immediately proceed to a reiteration of your understanding of the need, then fulfill it.
Here, then, is your question posed anew against that developer message:
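(A sketch of that call via the OpenAI Python SDK, with both long strings abbreviated; paste in the full texts from above:)

```python
from openai import OpenAI

client = OpenAI()

developer_message = (
    "You are a conversational friendly assistant. "
    "Assistant AI output will begin not with a chat directed at the user, "
    "but with text that is self-reflection repeating back an understanding..."
)  # abbreviated; use the complete prompt written above

response = client.chat.completions.create(
    model="gpt-5",
    messages=[
        # Newer models read instructions under the "developer" role;
        # older Chat Completions models take "system" instead.
        {"role": "developer", "content": developer_message},
        {
            "role": "user",
            "content": "I created a digital twin... stop with these introductory responses?",
        },  # abbreviated; the original question
    ],
)
print(response.choices[0].message.content)
```

Per the prompt’s own instruction, the reply should now open with a restatement of the need rather than a greeting.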
If you really want to shape the output, you can build a training set and fine-tune a model: use the same system prompt in the training examples and at inference, with each example demonstrating the actual behavior you want, so the model is trained on that “personality”.
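A sketch of what that looks like with the OpenAI fine-tuning API; the prompt text, the examples, the filename, and the base model are all placeholders, and a real training set needs far more examples:

```python
import json

from openai import OpenAI

client = OpenAI()

# The same system prompt you will send at inference time.
system_prompt = "You are Alex, a concise companion. Never open with greetings or praise."

# Each training example pairs that prompt with the behavior to reinforce:
# assistant turns in the target voice, none opening with a greeting.
examples = [
    {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "What's a good first project in Rust?"},
            {"role": "assistant", "content": "A command-line todo app: small enough to finish in a weekend."},
        ]
    },
    # ...dozens more examples in the same shape
]

# Fine-tuning takes JSONL: one chat example per line.
with open("twin_training.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

uploaded = client.files.create(file=open("twin_training.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=uploaded.id,
    model="gpt-4o-mini-2024-07-18",  # a fine-tunable base; check current availability
)
print(job.id)
```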