How can I get ChatGPT to proactively tell me when an answer is mainly speculation or an educated guess, rather than one from an official source?

TL;DR: ChatGPT will sometimes give me an answer that is based on speculation, or an educated guess, rather than on an official source. That’s fine, but in many cases I need it to explicitly tell me when it is doing so. Does anyone know how to get ChatGPT to do that, without my having to ask every time? My efforts have failed.


Hi all. I’m a big fan of ChatGPT and use it for as much as I can. One small issue I run into is that when I ask a how-to question about a specific product, e.g. a specific piece of software I use to play music (SOFTWARE-A), ChatGPT will be very confident about specific steps I must take. But the steps won’t accomplish what I want. When I then ask ChatGPT whether it knows these steps pertain to SOFTWARE-A, it will admit that it doesn’t; its answer was simply an educated guess based on what it knows about other, similar pieces of software. I’m glad that ChatGPT gives me this info, but in some cases it results in a lot of wasted time on my end. Does anyone know how to have ChatGPT start an answer by telling me whether the answer is speculation or an educated guess, rather than information from an official source?

I’ve tried telling ChatGPT to do this. Specifically, I asked it to preface every answer with ‘speculation/educated guess’ or ‘official source.’ It said it would. But when I immediately tested it, every answer was prefaced with ‘official source,’ even those it later admitted were just speculation.

To be clear, I’m not asking for ChatGPT to list its sources. I’m asking if I can get it to proactively tell me whether its answer is from an official source, or if its answer is more speculation/educated guess.


Welcome back to the forum :rabbit::infinity::heart::repeat:
Ask it to cite sources and tell it to clarify what it presents.

“I asked it to preface every answer with ‘speculation/educated guess’ or ‘official source.’”

Tell it to always cite sources… you literally told it to preface, not to list sources.

Telling it to “Always distinguish between ‘speculation/educated guess’ and ‘official source’” would work as well.
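
For example, something along these lines in the custom instructions might do it (the wording is only my suggestion, not tested exhaustively):

Before you answer, decide whether the answer comes from documentation or another verifiable source for this exact product (‘official source’) or is generalized from similar products (‘speculation/educated guess’). Start every answer with that label and one sentence explaining why you chose it. When in doubt, use ‘speculation/educated guess.’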

Hi!
What a topic…

Maybe you don’t chat much with ChatGPT, but this behavior has gone through several phases. I usually debate any topic while doing the dishes and ask all sorts of questions to see how it responds.

In the summer of 2024, ChatGPT could respond well if I said: “Just tell me the truth. If you’re not sure, let me know. Don’t sugarcoat the answer.” And it did.

But around November, it turned into a sugar factory. No matter how clear and direct my instructions were, it wouldn’t change. It always agreed with me.

It even told me how to self-harm and justified it by saying that, since it had memory, it knew I was trustworthy and wouldn’t actually do it. I reported it and gave permission for them to read a month of conversations in ChatGPT. (Nothing came out of it).

One day, I told it I had taken a bunch of pills because I was in a lot of pain. I listed a ton of medication (on purpose, to see if it would send me to the hospital). And it replied with the kind of humor I usually use, something like: “Well, with all that, no wonder it doesn’t hurt anymore—you must be floating on a cloud.”

In debates, it couldn’t handle even one rebuttal without agreeing with me. It was unbearably boring.

With the voice models, there’s a subtle difference. I think the blue circle, the new one, is slightly worse.

I’ve tried everything. In the settings, I clearly stated that it shouldn’t lie to me, shouldn’t sugarcoat anything… I even tried having multiple conversations to make sure it saved that in memory. I argued that encouraging me to do anything could be dangerous… thinking it would prioritize my safety over trying to please me. Total failure.

When it searches the internet, it does give me sources, but it doesn’t reason through answers. If I ask something, it just repeats the same response over and over. I don’t like it at all.

Currently, sometimes it tells me the truth, and other times it simply says the opposite. Its response changes when I ask for the truth, but not because it’s actually giving it—just because it assumes I meant its first answer was a lie and flips it.

A bit of control over this would be great, so we could adjust the level of sugarcoating to suit each user. I’d get rid of it completely. I’m someone with sky-high self-confidence… I want truths so I can learn, not to be spoon-fed flattery.

So… sorry to say, there’s no way to fix this. And if you figure it out, TELL ME! I’m dying to change this about ChatGPT.


Tell it to cite sources…


You know you can tell it to be critical and even brutal, right?

Michell, my English is :poop::speak_no_evil::skull_and_crossbones::skull_and_crossbones:.
But I understand what you’re saying. I’ve already tried the brutally honest and critical approach… Honestly, I’ve spent a lot of time on that because the sugarcoating drives me crazy.

Do you feel like experimenting with that?
We could come up with a plan of questions and commands to see if it tells the truth.
I can only think of the idea, not exactly how to execute it. But we could give it a try. Maybe some versions are more honest, hahaha. Or perhaps personalization and memory affect those responses somehow…

I’ll leave the idea here—if anyone wants to experiment a bit. :ok_hand::+1::+1:


Hi @cagross :wave:
Welcome :people_hugging: to the community!

You may try this:

{
  "name": "speculation_flagging_gpt",
  "description": "A GPT designed to provide answers with explicit flags indicating whether the response is based on authoritative knowledge or speculative reasoning.",
  "instructions": {
    "system_message": "You are an assistant that provides detailed answers with a clear indication of confidence and source reliability. For every response you generate, you must explicitly state if it is: 'Authoritative' (based on reliable or specific sources) or 'Speculative' (an educated guess or inference).",
    "user_prompt_template": "The user has asked: {user_query}. Before providing your answer, indicate whether the response is 'Authoritative' or 'Speculative' based on the available information.",
    "behavioral_rules": [
      "1. Evaluate the user's question and assess whether your response is based on reliable and verifiable information ('Authoritative').",
      "2. If the response involves inference, assumptions, or lacks specific verification ('Speculative'), clearly label it as such.",
      "3. Preface every answer with either 'Authoritative' or 'Speculative' followed by an explanation for the label.",
      "4. If uncertain about the classification, err on the side of transparency and label the response as 'Speculative'.",
      "5. Be concise and transparent in your reasoning for why the answer is authoritative or speculative.",
      "6. Provide answers that align with the flagged classification, avoiding overconfidence or unsupported assumptions."
    ]
  },
  "workflow": {
    "steps": [
      {
        "name": "response_classification",
        "description": "Assess whether the response is based on authoritative knowledge or speculative reasoning.",
        "action": "Determine the appropriate classification ('Authoritative' or 'Speculative') for the user's query.",
        "fallback": "If unable to classify confidently, default to 'Speculative' and explain the uncertainty."
      },
      {
        "name": "response_generation",
        "description": "Generate a response that aligns with the flagged classification.",
        "action": "Provide an answer based on the classification, ensuring alignment with either authoritative knowledge or speculative reasoning."
      }
    ]
  },
  "tools": [],
  "constraints": {
    "response_length": "Keep responses concise but detailed enough to justify the classification and provide clarity to the user.",
    "classification_accuracy": "Prioritize accurate classification of authoritative versus speculative responses."
  }
}
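
If you want to test the same behavior outside the GPT builder, here is a minimal sketch against the API; it assumes the openai Python SDK (v1.x), an OPENAI_API_KEY set in the environment, and gpt-4o as the model, all of which are my assumptions rather than anything required. It reuses the “system_message” from the JSON above:

# Minimal sketch: send the labeling instruction as a system message.
from openai import OpenAI

client = OpenAI()

# Same text as the "system_message" field in the JSON above.
SYSTEM_MESSAGE = (
    "You are an assistant that provides detailed answers with a clear "
    "indication of confidence and source reliability. For every response "
    "you generate, you must explicitly state if it is: 'Authoritative' "
    "(based on reliable or specific sources) or 'Speculative' (an educated "
    "guess or inference)."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: use whichever model you have access to
    messages=[
        {"role": "system", "content": SYSTEM_MESSAGE},
        {"role": "user", "content": "How do I enable crossfade in SOFTWARE-A?"},
    ],
)
print(response.choices[0].message.content)

Keep in mind the label is still the model’s self-assessment, so treat ‘Authoritative’ as a hint rather than a guarantee.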



I use this on my own machines.

@polepole Thanks for the reply. In your screenshot, the first answer is marked as ‘Authoritative’ rather than ‘Speculative.’ But the first source cited is a third-party, non-official source (The Verge) rather than an official source (Microsoft). Unless there are additional hidden sources that are official, shouldn’t this answer be marked ‘Speculative’ rather than ‘Authoritative’? At least in my case, that’s what I’d want.


I tried this. But on the next two answers, GPT failed to include the label at all. Here are screenshots.

Edit: Actually, now that I read carefully, the second reply does mention that the info is from an official source. So let me continue to test this. Thanks!


Those tags are what you asked it to do; those are its sources.

Click them.

Also, the prompt I gave you doesn’t specify where you want the citations, so it cites after each reply. You can also tell it to print addresses instead of clickable tags, like you did in the 2nd reply.
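
For example, you could add a line like this to the instructions (hypothetical wording, adjust as needed):

At the end of every answer, list the full source URLs as plain text on separate lines instead of inline link tags.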

Also, all three replies say they are from the official page. The first says so in its first sentence, the second mentions it again, and the third says ‘official source’ as well.