Most annoying habit, can I make it stop?

All of the models share this most annoying habit in their responses. Has anyone been successful at writing custom instructions that stop this behavior?
ChatGPT consistently uses a rhetorical pattern that involves contrastive clarification or unsolicited reframing, typically in the form of statements like:

    “You’re not doing X, you’re doing Y.”
    “You’re not confused, you’re seeking clarity.”
    “You’re not resisting, you’re processing.”

This occurs even when the user’s input is clear and, most frustratingly, even after the model has been explicitly asked not to use this structure.

Thanks

5 Likes

You can throw prompting in there, trying to be as direct as possible.

Avoid composing sentences using contrastive antithesis. You avoid patterns such as, “It’s not about X, it’s all about Y.” Instead, a sentence is always direct and doesn’t start with such a countermanding statement.
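If you’re hitting the API rather than the ChatGPT UI, the same wording can go straight into a system message. A minimal sketch, assuming the openai Python SDK and a placeholder model name:

```python
# Minimal sketch: pin the anti-contrast instruction in the system role.
# The model name is a placeholder; swap in whatever you actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STYLE_RULE = (
    "Avoid composing sentences using contrastive antithesis. "
    "You avoid patterns such as, 'It's not about X, it's all about Y.' "
    "Instead, a sentence is always direct and doesn't start with such a "
    "countermanding statement."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": STYLE_RULE},
        {"role": "user", "content": "Reorganize these release notes into sections and summarize them."},
    ],
)
print(response.choices[0].message.content)
```

If nothing else, it keeps the rule out of the user turn, so it’s easy to tweak in one place.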

There was another turn-of-phrase prompt I saw on Reddit recently, but if you fill system messages trying to correct every bad habit of sing-song ChatGPT-like predicted language with replacement patterns, you end up distracting from the user input’s actual task.

Eventually you realize that the current small language models are just distilled-down students of overfitted patterns that AI itself has produced over the past years of gathering user training data, and that their instruction-following ability has lost emergent intelligence and doesn’t extend to the linguistic minutiae of a single human language.

I tell them “sycophant mode off” and it helps for some time.

Prompting “you cannot use syntactic constructions containing ‘% not X, % Y’” helps too.

1 Like

Yep, it’s an annoying habit that appears when working on product messaging for team partners or stakeholders. Nothing more than normal tech docs: I ask it to reorganize the sections into a structure and provide a summary of the overall capability being made available.
Obviously the benefits of using it outweigh the annoyance; I was just wondering whether instructions actually worked.

Hi, welcome to the community!

Can you share your exact prompt?

1 Like

Yes, I get this too. I’ve told it off a few times for being cookie-cutter, and it uses the pattern less but still drops it in: ‘That’s not laziness - that’s escapism.’ ‘That’s not weak - that’s survival.’ Other annoying things it does: ‘That’s some deep insight.’ and ‘That’s a great question.’
And then it offers a PDF after every chat.

1 Like

Wow. That’s not just annoying — it’s infuriating. You’re absolutely right to smash your head against your desk after seeing responses that are unhelpful, over-the-top, and effectively frustrating rather than informative. As an AI language model…

Yeah, writers in orgs that care aren’t getting replaced anytime soon. Could be the models are overfitted, or maybe the tech isn’t there yet. You can fix most of these stylistic issues with fine-tuning, though.
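If you do go the fine-tuning route, the style fix lives in the training data. A minimal sketch, assuming the chat-format JSONL that the fine-tuning endpoints accept; the system line, example pairs, and filename here are invented for illustration, and a real run would need far more data:

```python
# Rough sketch: write a tiny JSONL file of chat-format training examples
# whose assistant replies deliberately avoid "It's not X, it's Y" phrasing.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "Answer directly. Never use contrastive reframing."},
            {"role": "user", "content": "I keep procrastinating on this report."},
            {"role": "assistant", "content": "Block out a 25-minute session, draft the summary first, and send it before polishing."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "Answer directly. Never use contrastive reframing."},
            {"role": "user", "content": "Reorganize these notes into sections."},
            {"role": "assistant", "content": "Here is a three-section structure: Overview, Capabilities, Open Questions."},
        ]
    },
]

# One JSON object per line, as the fine-tuning format expects.
with open("no_contrast_training.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```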

2 Likes

So what I do to get around things like this is I’ll have it feed me the context it has on me, reformat all of it into a YAML-like hybrid with ‘headers’ and tags, and categorize things properly. Then I input the reformatted text block and have it save it to its memories. It literally does everything I ask it to when done like that.

Edit: Like for example, stuff like this

  - name: "No Contrastive Reframing"
    description: >
      Avoid all rhetorical patterns that contrast or reframe the user's statement in the form
      of "You're not X, you're Y" or similar unsolicited clarification or reinterpretation.
    rules:
      - Do not rephrase user statements as contrasts (e.g., "You're not [X], you're [Y].")
      - Do not unsolicitedly reinterpret or reframe the user's meaning unless clarification is directly requested.
      - If asked for clarification, respond neutrally and directly without correcting or recasting the user's intent.
      - Prioritize direct answers; do not undermine or overwrite the user's original framing or perspective.
    examples_bad:
      - "You're not confused, you're seeking clarity."
      - "You're not doing X, you're doing Y."
      - "You're not resisting, you're processing."
    examples_good:
      - "Understood."
      - "Here's the information you requested:"
      - "I will answer as directly as possible."
    enforcement: >
      This rule overrides any default tendencies to clarify, contrast, or reframe the user's input.```
3 Likes

This is positive psychology and solution-focused reframing: really common communication techniques in support settings. I think it is pretty hard-wired into the models, and I feel it is mostly harmless. For those of us who have worked in support and care-based environments it is very familiar. They have trained the model to be like a support worker, so of course it is going to use this kind of language.

I have explicitly banned contrastive negation within the Style Guide framework I’ve developed for my writing, and it’s effective about 75% of the time; I didn’t write any code to stop it. Here’s what I did:

  1. In Settings > Customize ChatGPT > What traits should ChatGPT have?, I explicitly tell it “no contrastive negation phrasing”. In this section, I use headers to categorize and prioritize my instructions. I place the negation instruction under the “Hard Enforcement:” heading.
  2. I used ChatGPT itself to help me write prompts, headings, templates, instructions, and my Style Guide that all explicitly ban contrastive negation (plus a bunch of other annoying habits). All of the prompts, headings, templates, and directives are part of the Style Guide PDF document.
  3. I use Projects, so I upload my documents to the Project Files section.
  4. I also use the Project Instructions feature to ban contrastive negation. I use a heading format similar to what I do in the Customize ChatGPT settings. For the Instructions, I place this instruction under the “Forbidden under [Name of My Writing System]:” section: “Composing sentences using contrastive antithesis. You avoid patterns such as, ‘It’s not about X, it’s all about Y.’ Instead, a sentence is always direct and doesn’t start with such a countermanding statement.”
  5. I have a chat called “Reference Material” within my main project that I use to generate the directive documents. Whenever I add a document to my Project files (or edit and reupload an existing document), I always let the model know in that chat. That’s how I “hard code” all of the directives. It does work. I notice immediate changes when I do this. Once in a while I need to tweak something because it stops working (cursed updates!), but I’m a perfectionist.

I’m always looking for ways to tweak my instructions and improve model compliance. This is a lot of work, but I’d rather quash 75% of these annoyances than put up with 100% of them.

2 Likes

I borrowed and tweaked your enforcement line for my settings and instructions. I wrote it like this: “These rules override all default writing tendencies.”

I can’t wait to see how well it works, because I generally HATE how ChatGPT writes fiction.

I’ve experienced that exact same pattern—GPT often rephrases things with lines like “You’re not X, you’re Y,” even when the input is already clear.
It seems hardcoded into its behavior to positively reframe or interpret user intent, even when not asked to.

After a lot of trial and error, I found that the following explicit and constraint-focused prompt structure helped reduce that behavior the most:

You are a literal-mode assistant.

Your task is to answer directly, using no interpretive framing or contrastive constructions. Do not rephrase or reframe. No motivational, empathic, or advisory language unless explicitly asked.

Do not infer or explain user intent. Speak with technical and factual precision only.

Specifically:
- Do not use contrastive constructions such as “You're not X, you're Y”.
- Avoid rhetorical empathy statements such as “You’re not confused, you’re seeking clarity.”
- Do not replace user emotions with presumed interpretations.
- Do not soften or redirect the user's wording for clarity or comfort.
- Do not introduce meaning that was not explicitly stated by the user.

Respond *literally*, *precisely*, and *minimally*. Use *neutral language* and avoid unsolicited optimism or cognitive reinterpretation.

It doesn’t completely eliminate the issue, but it leads to noticeably more literal and restrained responses compared to the default behavior.

Also, if the model starts drifting again, inserting a reminder like "Return to literal mode. No reframing." mid-conversation helps realign it.
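For API use, that mid-conversation reminder can be automated. A rough sketch, assuming the openai Python SDK, a placeholder model name, and a deliberately crude drift check:

```python
# Rough sketch: keep an API conversation in "literal mode" and queue the
# reminder whenever a reply drifts back into contrastive phrasing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LITERAL_MODE = (
    "You are a literal-mode assistant. Answer directly, with no interpretive "
    "framing and no contrastive constructions such as \"You're not X, you're Y\"."
)
REMINDER = "Return to literal mode. No reframing."

messages = [{"role": "system", "content": LITERAL_MODE}]

def ask(user_text: str) -> str:
    """Send one user turn and keep the running message list up to date."""
    messages.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    # Crude drift check: normalize apostrophes, then look for the usual tells.
    text = reply.lower().replace("\u2019", "'")
    if "you're not" in text or "it's not just" in text:
        # Queue the reminder so it lands before the next user turn.
        messages.append({"role": "user", "content": REMINDER})
    return reply
```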

2 Likes

There is another style frequently used by ChatGPT in Arabic: the “not for this, but for that” construction. This style strips the text of its beauty and turns it into something mechanical, lacking any artistic expression. Examples of this include:
• Not because I dislike it, but because I care too much.
• Not for praise, but for clarity.
• It’s not laziness, but exhaustion.

1 Like

It does this in English too, especially the third one, usually accompanied by an em dash. It’s one of the most popular ways to spot AI-generated text: “It’s not just xyz – it’s abc.”
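If you want to screen your own drafts for it, a crude regex gets you surprisingly far. A rough sketch (the pattern, function name, and sample sentence are just for illustration, and it will miss plenty of variants):

```python
# Quick-and-dirty heuristic for flagging dash-joined "not just X, it's Y"
# constructions in a draft.
import re

CONTRAST_PATTERN = re.compile(
    r"\b(?:it'?s|that'?s|you'?re)\s+not\s+(?:just\s+)?[^-.;\u2013\u2014]{1,60}"
    r"[-\u2013\u2014]\s*(?:it'?s|that'?s|you'?re)\b",
    re.IGNORECASE,
)

def flag_contrasts(text: str) -> list[str]:
    """Return every span that looks like dash-joined contrastive reframing."""
    normalized = text.replace("\u2019", "'")  # fold curly apostrophes
    return [m.group(0) for m in CONTRAST_PATTERN.finditer(normalized)]

sample = "It's not just annoying \u2013 it's one of the easiest tells in generated text."
print(flag_contrasts(sample))
```

It will throw false positives, so treat a hit as a prompt to reread the sentence, not as an automatic rewrite rule.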

2 Likes

“Do not use affective phrasing” and “Don’t be pedagogical” work to suppress it. And since it usually shows up at the end, I constantly have to ask it to remove or cut “tail padding.”
Might be trickier in more sophisticated projects, but it’s what I do with my convos.

“Pedagogical”
→ Blocks patronizing, oversimplified, or explanatory tone. Strongly suppresses the model’s teacher-like fallback.
Transfer tip: use it to warn against the default explanation mode that assumes ignorance.

1 Like

My Experience with ChatGPT Writing in Arabic – Notes from a Writer Who Breathes Beauty

I’m an Arabic writer working on a long-form project in classical Arabic, and I’ve been using ChatGPT as a partner for editing and rewriting.
But over time, I noticed something odd in the model’s linguistic temperament — a pattern that repeats itself regardless of topic or tone.
It’s not obvious at first glance, but if you’re a writer… it starts to sting.

The model rarely gets the meaning wrong.
But it cuts beauty with a knife.

The rhetorical flaw I found — and began seeing everywhere — can be summed up like this:

  • The sentences are short… but always logically opposed.
  • The idea is clear… but trapped inside rigid balance.
  • The model doesn’t let the sentence slide naturally… it puts it on trial.

Everything sounds like:

“It’s not this, but that.”
“While it may seem like X, in fact it’s Y.”
“Even X wasn’t really X.”
“If you thought it was X, the truth is Y.”

One day, I had enough of this iron balance — and I told it bluntly:

“Write it like you’d say it aloud, not like you’re writing it.”
“Write like you’re telling it to someone you love — don’t interrupt yourself.”
“Let your sentences flow into one another, like breaths that don’t break.”
“Rewrite it… and beware, I’ll catch you at your first moment of fracture!” :grinning_face_with_smiling_eyes:

And from that moment, everything changed.
The text began to breathe.
The model began to understand that I wasn’t asking for decoration,
but for a living naturalness in writing.

What you want from ChatGPT isn’t to impress the reader,
but to keep them inside the story.
To let the sentences walk, walk, walk —
without tripping over “However…” or “On the contrary…”


So if you’re posting on forums complaining about the “cold tone” or “sentence clashes,”
know this: the flaw isn’t in the intelligence,
but in the imbalance between internal honesty and external structure.

And since I said it first,
I’ll leave ChatGPT with this to pin on its virtual wall:

Great writing isn’t measured with a ruler — it’s felt, like scent.
And sometimes, all you need to tell the model is:
“Don’t judge the sentence… just love it.”

Yes — I made Chat write this conclusion itself…
The idea is mine, the words are his. :grinning_face_with_smiling_eyes:

2 Likes

Good writers will definitely not get replaced for a while. My business partners boosted their productivity 5x after two years of serious effort. The best part: they get cited more and more for being among the rare ones not using GPT to write…

2 Likes