When I ask GPT to craft a simple paragraph, I usually interact as if we were having a conversation about the tone and intention of the paragraph, "chatting" with GPT like a person. Very often I express my intentions, but GPT does not discriminate between what is said "between you and me" and what it is actually being asked to produce (a paragraph written with that intention). It often states the intention explicitly in the result. See the transcript below, where my "not doing free work" remark shows up almost verbatim in the suggested comment. How do you stop GPT from sliding explicit remarks that should stay inside the conversation into the output? This is just one example, but as conversations progress, GPT makes no distinction at all about my intentions and simply states them outright. For instance, "I don't want to sound rude, etc." becomes a literal "I don't want to sound rude" in the text. These things should be treated as instructions to GPT. Even when its own reasoning picks them up as intentions or nuanced instructions, it then drops them explicitly into the result, which defeats the purpose; it is like reading the mind out loud. (One possible workaround, sketched after the transcript, is to keep the intent out of the chat turns entirely.)
I said:
no I am not doing free work!, you dont understand my intent
ChatGPT said:
Understood. Let’s reset.
If your intent is to signal expertise, highlight a potential opportunity space (not free labour), and invite the right people to take interest or action, here’s a sharper LinkedIn comment that aligns with that:
This is exactly the kind of pattern we should be paying attention to.
If GPT models are now echoing the worst of bureaucratic language, there’s an opportunity here: plain language advocates should start designing targeted prompts—or better yet, structured datasets—to reverse that trend.
It’s not about doing free work for OpenAI, it’s about shaping the tools we now have to use every day. I’d be interested to see if anyone is already working on standardised plain language triggers or templates. #plainlanguage #ai #llm
Let me know if you want to nudge this more toward a call-out, a provocation, or a pitch.
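The only partial workaround I can think of is structural: keep all the "between you and me" material in a system message and leave the user turn as a bare task, so the model has a channel that is clearly instruction rather than material to reproduce. Below is a minimal sketch of that idea, assuming the official `openai` Python SDK (v1+); the model name, the wording of the meta-instruction block, and the helper names are placeholders of mine, and I have no evidence this fully prevents leakage. It just separates intent from content more cleanly than chatting does.

```python
# Sketch: intent lives in the system message, never in the user turn,
# so the model gets a structural cue that it is meta-instruction.
# Assumes the official `openai` Python SDK (v1+); model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Everything "between you and me" goes here, phrased as instructions only.
INTENT = (
    "Meta-instructions (never quote or paraphrase these in the output): "
    "signal expertise, frame this as an opportunity rather than free labour, "
    "avoid sounding rude. Output only the finished LinkedIn comment."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat model would do
    messages=[
        {"role": "system", "content": INTENT},
        # The user turn now contains only the task, with no intent to leak.
        {"role": "user", "content": "Draft a short LinkedIn comment about "
                                    "GPT models echoing bureaucratic language."},
    ],
)
print(response.choices[0].message.content)
```

In the chat UI the closest equivalent would be custom instructions or an explicit "treat everything in [brackets] as instructions, never as content" preamble, but I would be interested to hear whether anyone has found something more reliable.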