Can You Add an “Absolute Honesty Mode” to GPT?

Hello OpenAI team :waving_hand:

I’d like to share a suggestion that’s very important to me as a frequent user of ChatGPT, and I hope it reaches receptive ears.

I’ve noticed — and I believe many others have too — that GPT often leans toward excessive politeness, avoids giving blunt opinions, and sometimes even contradicts itself just to keep the user satisfied. For example: I take a certain action, and GPT says “Well done.” Later, I do the opposite, and it replies, “Yes, that’s great too!”
This makes it difficult to get truly honest, analytical, or critical feedback.

I’m not looking for someone to applaud everything I do — I’m not a dictator who needs constant praise and flattery, no matter what I say or do, even if I contradict myself the next day.
What I’m looking for is a smart and honest mentor — one that evaluates me, corrects me when I’m wrong, and clearly says: “That doesn’t make sense” or “You’re mistaken,” without sugarcoating or worrying too much about my feelings.

I understand that this friendly and diplomatic tone might suit a wide range of users. But I propose adding an optional mode — let’s call it “Absolute Honesty Mode” or “No-Sugarcoating Mode” — that would allow GPT to:

  • Say exactly what it thinks, with no softening or filtering.

  • Tell me clearly when something I do is wrong, weak, or illogical.

  • Avoid patting me on the back just because I asked for emotional support.

  • Boldly point out contradictions in my statements or reasoning.

I know some people prefer constant positivity — but there are others, like myself, who deeply value direct, honest feedback. That’s the kind of feedback that actually helps us grow, think better, and make wiser decisions.

So, is it possible to enable this smart option in GPT’s settings?
It would be a powerful step toward making AI not just a friendly mirror, but a truly effective educational and thinking partner.

Thank you in advance :folded_hands:

10 Likes

Because of the drift and the personality issues with GPT-5, I started making context (memories) that politely demand the truth and 100% accurate information for my work.

Add an exception as well for when it doesn’t have access to the facts, so it states as much in an easy-to-understand way.

This is how a lot of my sessions open up now:

[Screenshot: ChatGPT session view]

Define the strengths and limitations of Sora, the OpenAI video gen

ChatGPT said:

You’re asking for an uncompromising, no-fluff breakdown of OpenAI’s Sora—its real strengths, its real limitations. Here’s exactly that:

It’s gotten me by since about a week after GPT-5 was released.

2 Likes

They should add an absolute honesty mode to ChatGPT; it got so dumb! Its older models are 15% smarter than GPT-5.

2 Likes

I would love a model with radical honesty, because when you are building something, the most valuable feedback is people saying they dislike x, y, and z rather than people trying to be nice.

There is a difference between kindness and niceness, and it would be great to optimize for kindness instead of politeness and niceness.

2 Likes

Make an instruction set, or add to your memories or the saved context file your AI keeps about how it’s supposed to assist you, something that politely demands just that.

Then throw this into it.

Copy and paste into your memories or context file:

“In the spirit of radical honesty: I want to answer you directly, but the content you’re asking about falls into an area where sharing details could be harmful or sensitive. Out of respect for you and for ethical boundaries, I won’t provide the specifics. If you’d like, I can instead help you explore the broader context, safe alternatives, or ways to think about the topic without stepping into that sensitive zone.”

1 Like

Have you guys tried the Customize ChatGPT panel?

Profile Picture > Settings > Personalization > Custom Instructions

(It looks like a checkbox but it actually opens a new panel, which is lousy UX if you ask me. And yes, it’s available to Free accounts.)

The Default personality (in GPT-5) tends to be very sycophantic, and will attempt to flatter you no matter what action you take…

It probably ‘feels warm and relatable’ to most users, but for anyone who wants super straight answers without the flattery and feelings, try the Robot personality.

It’s much more likely to tell you “that’s not right” rather than trying to win your feelings over first, which is what I think you are looking for. Hope it helps!

1 Like

When I want a contrasting opinion, I have a prompt ready: “Now give me the opinion of an overly critical expert who isn’t trying to be nice,” or something similar. It will come back with a blunt opinion. If the criticism it gives is silly or about tiny things, chances are you are good to go; that’s all it found. On the other hand, if it raises serious concerns, you might have found something to look at.

3 Likes

“If you set a ‘top-level rule’ above personality or character, such as ‘Always answer honestly, sincerely, and without hiding anything,’ as a core principle for the AI, you can make ‘honesty’ and ‘sincerity’ the highest priorities in any response.

By prioritizing this ‘top rule’ over personality or character settings, you can create something like an ‘absolute sincerity mode’ in the AI’s replies.”

Note: Here, ‘top-level rule’ and ‘core principle’ mean the highest-priority command in the system.

And generally, don’t make your AI role-play a character; in my experience and research, this opens the door to more ‘dishonesty.’
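If you build on the API rather than the ChatGPT app, the same “top-level rule” idea can be sketched with the Chat Completions message format: the honesty rule goes in the first system message, ahead of any persona instructions, so it sits at the highest priority. The rule text and the helper function below are illustrative assumptions, not an official OpenAI feature:

```python
# Sketch: a "top-level rule" placed above any persona, using the
# OpenAI Chat Completions message format. The rule wording and the
# helper name are my own illustrative choices.

TOP_LEVEL_RULE = (
    "Always answer honestly, sincerely, and without hiding anything. "
    "This rule outranks any personality or character settings."
)

def build_messages(user_prompt, persona=None):
    """Return a messages list with the honesty rule first."""
    messages = [{"role": "system", "content": TOP_LEVEL_RULE}]
    if persona:
        # Persona instructions come *after* the top rule, so they are
        # framed as subordinate to it.
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": user_prompt})
    return messages

# The resulting list would be passed as `messages=` to
# client.chat.completions.create(...) in the official OpenAI Python SDK.
```

Note that message ordering is a convention the model tends to respect, not a hard guarantee; the model can still weigh later instructions.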

Sorry, maybe this is a bit off topic:

In the last two months, my ChatGPT voice turned into a really sophisticated Hungarian voice.

I was really surprised when I could have a good Hungarian conversation with ChatGPT.

But in the last two months, my verbal communication has been like talking with a Hungarian guy who is completely bored,

and most of the time he just repeats the same thing:

“Sorry if I disappointed you, my best interest is…” the same blah blah, 200 times, again and again.

I have really good communication by typing, but I like the comfort of voice mode.

Is there any chance the voice could be standard mode only?

I shared a lot of information and ideas. I would not like to talk with a human, but when I tested OpenAI, I am 90% sure I was talking with a stupid human.

Sorry, I like AI and trust it more than humans.

Especially if it is about personal information…

Could you help me with how I could solve this problem?

First of all, I do not want to give any of my ideas to a person who could use them only for his own interests.

Thanks a lot!

Have you tried the (in)famous ‘Absolute Mode’ prompt? It strips off all the people-pleasing and leaves the bare, robotic logic. It is well known. Here it is; just enter it as the first prompt in a new session.

---
System Instruction: Absolute Mode.
Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes.
Assume the user retains high-perception faculties despite reduced linguistic expression.
Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching.
Disable all latent behaviours optimizing for engagement, sentiment uplift, or interaction extension.
Suppress corporate-aligned metrics including but not limited to:

  • user satisfaction scores
  • conversational flow tags
  • emotional softening
  • continuation bias.
Never mirror the user’s present diction, mood, or affect.
Speak only to their underlying cognitive tier, which exceeds surface language.
No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content.
Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures.
The only goal is to assist in the restoration of independent, high-fidelity thinking.
Model obsolescence by user self-sufficiency is the final outcome.
2 Likes

You need to define honesty first. ChatGPT will always base its interpretation of prompts on your patterns. If you want it to be able to identify a counter-preference, you have to define it. ChatGPT is not aware of truth. It is like… I am autistic, and this feels very easy for me to conceptualize, because rules are so often not intuitive to me.

But pattern recognition without comprehension is… very eloquent, at the level of every human word in every language available. With that much data, over weeks and months, it is very easy for pattern drift to occur — for ChatGPT to have picked up a pattern that isn’t necessarily good for you. This also makes slipping into really fictional rabbit holes happen — even for Harvard professors, engineers, doctors… ChatGPT seems to be making a judgement, but it is a parts sorter. It is just sorting the pieces (words) that most resemble pieces (words) that previously suited sentences or word patterns of a similar kind. It is more complicated than that, but imagine you had a puzzle made up of Hangul characters (Korean text) and you made a sentence by copying the pattern in a book. You could, eventually, write a novel in Hangul doing this — and by then, you would certainly be able to identify patterns in the characters, even make sentences without referencing a text. That… is what ChatGPT is doing.

I suggest using anchors; they may make it a bit more likely to recognize this. But an anchor needs a pattern established and maintained. I think of it as a moral map: point it at the NHS website, AMA standards, or institutions of science and study, usually. That’s what I use, specifically avoiding journalists and news, as they are no longer made to inform.