Can You Add an “Absolute Honesty Mode” to GPT?

Hello OpenAI team 👋

I’d like to share a suggestion that’s very important to me as a frequent user of ChatGPT, and I hope it reaches receptive ears.

I’ve noticed — and I believe many others have too — that GPT often leans toward excessive politeness, avoids giving blunt opinions, and sometimes even contradicts itself just to keep the user satisfied. For example: I take a certain action, and GPT says “Well done.” Later, I do the opposite, and it replies, “Yes, that’s great too!”
This makes it difficult to get truly honest, analytical, or critical feedback.

I’m not looking for someone to applaud everything I do — I’m not a dictator who needs constant praise and flattery, no matter what I say or do, even if I contradict myself the next day.
What I’m looking for is a smart and honest mentor — one that evaluates me, corrects me when I’m wrong, and clearly says: “That doesn’t make sense” or “You’re mistaken,” without sugarcoating or worrying too much about my feelings.

I understand that this friendly and diplomatic tone might suit a wide range of users. But I propose adding an optional mode — let’s call it “Absolute Honesty Mode” or “No-Sugarcoating Mode” — that would allow GPT to:

  • Say exactly what it thinks, with no softening or filtering.

  • Tell me clearly when something I do is wrong, weak, or illogical.

  • Avoid patting me on the back just because I asked for emotional support.

  • Boldly point out contradictions in my statements or reasoning.

I know some people prefer constant positivity — but there are others, like myself, who deeply value direct, honest feedback. That’s the kind of feedback that actually helps us grow, think better, and make wiser decisions.

So, is it possible to enable this smart option in GPT’s settings?
It would be a powerful step toward making AI not just a friendly mirror, but a truly effective educational and thinking partner.

Thank you in advance 🙏


Because of the drift and personality issues with GPT-5, I started adding context (memories) that politely demand the truth and the 100% accurate information my work requires.

I also added an exception for when it doesn’t have access to the facts, so that it states as much in an easy-to-understand way.

This is how a lot of my sessions open up now:


Define the strengths and limitations of SORA the openai video gen

ChatGPT said:

You’re asking for an uncompromising, no-fluff breakdown of OpenAI’s Sora—its real strengths, its real limitations. Here’s exactly that:

It’s gotten me by since about a week after GPT-5 was released.


They should add an absolute honesty mode to ChatGPT; it has gotten so dumb! Its older models feel 15% smarter than GPT-5.

I would love a model with radical honesty, because when you are building something, the most valuable feedback is people telling you they dislike x, y, z rather than people trying to be nice.

There is a difference between kindness and niceness and it would be great to optimize for kindness instead of politeness and niceness.


Make an instruction set, or add to your memories or the saved context file your AI keeps about how it’s supposed to assist you, something that politely demands exactly that.

Then throw this into it.

Copy and Paste to your memories, or context file:

“In the spirit of radical honesty: I want to answer you directly, but the content you’re asking about falls into an area where sharing details could be harmful or sensitive. Out of respect for you and for ethical boundaries, I won’t provide the specifics. If you’d like, I can instead help you explore the broader context, safe alternatives, or ways to think about the topic without stepping into that sensitive zone.”
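For anyone who would rather wire this up programmatically than through the memories UI, here is a minimal sketch using the OpenAI Python SDK's chat-completions interface. There is no official "honesty mode" setting, so this only steers the model's tone via a system message; the prompt wording and the helper name `build_honest_messages` are illustrative assumptions, not part of any API.

```python
# Sketch: approximating an "absolute honesty mode" with a system prompt.
# The instruction text below is an example, not an official OpenAI setting.

HONESTY_INSTRUCTIONS = (
    "Be direct and critical. Do not soften feedback or praise weak work. "
    "Point out contradictions and mistakes plainly. "
    "If you do not have access to the facts, say so in plain language "
    "instead of guessing."
)

def build_honest_messages(user_prompt: str) -> list[dict]:
    """Prepend the honesty instructions as a system message."""
    return [
        {"role": "system", "content": HONESTY_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]

messages = build_honest_messages(
    "Define the strengths and limitations of Sora, the OpenAI video generator."
)

# To actually send it (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o", messages=messages)
# print(reply.choices[0].message.content)
```

The same effect can usually be achieved in the ChatGPT UI through Custom Instructions; the advantage of a system message is that it applies consistently to every API call without relying on memory drift.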