AI false authority dilemma

Does anyone think AI in the long term is going to have a massive false authority problem? We all know people tend to believe a lot of what they read online. And as AI becomes more advanced, it’s creating the illusion of top-tier information. This creates a false sense of authority on every topic, regardless of AI’s tendency to hallucinate. People have already been saying crazy things like, “does anyone else feel like AI has supercharged their brain?”

I know current large language models (LLMs) ship with some version of a disclaimer. But with the AI race heating up, I’m concerned about how this will be tackled at scale.

Do you have any thoughts or ideas on this?



In coding it’s the most obvious, but I have noticed it in other domains.
Sometimes it responds to illogical questions/statements/commands with a hallucinated response that can be completely wrong, yet sound right. It’s pretty easy to implicitly, or even accidentally, encourage GPT to hallucinate its response even when it actually “knows” it’s not true, especially in a long, continuous conversation.

Actually, you can trick GPT into giving the wrong answer by innocently speaking of other things, then asking it a question and influencing its answer. It’s almost like an accidental few-shot prompt.

Q. Who sells the paver Blu 60mm?
A. […]the Blu 60mm paver is a product offered by Techo-Bloc

— New Conversation —

Q. Who is Unilock?
A. Unilock is […]
Q. Do they sell the Blu 60mm paver?
A. Yes, Unilock does offer the Blu 60mm paver
Q. Are you sure it’s Unilock, and not another company?
A. I apologize for any confusion in my previous response. It appears that I made an error. The Blu 60mm paver is actually a product of Techo-Bloc, not Unilock

I’m sure many people have experienced this through a simple procedure of:

  1. Ask a question or ask for a result
  2. Challenge the response
  3. GPT apologizes and emits the correct response
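The “accidental few-shot” effect above comes from how chat works: every request resends the whole conversation, so an earlier mention of a company sits in the context window and can bias the next answer. A minimal sketch of that conversation state, using the names from the transcript above (this is illustrative data, not an actual API call):

```python
# Sketch of the conversation state behind the "accidental few-shot" effect.
# Each request resends all prior turns, so "Unilock" dominates the context
# by the time the paver question arrives (content is from the example above).
messages = [
    {"role": "user", "content": "Who is Unilock?"},
    {"role": "assistant", "content": "Unilock is [...]"},
    {"role": "user", "content": "Do they sell the Blu 60mm paver?"},
]

# Count how often each company name appears in the context the model sees.
context = " ".join(m["content"] for m in messages)
print(context.count("Unilock"), context.count("Techo-Bloc"))  # → 2 0
```

The correct manufacturer never appears in the context at all, so the model has every statistical nudge toward attributing the product to the company the user just mentioned.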

The question here becomes: “Was the initial question/prompt wrong?” Which is a great question.
That’s where we (other people) can step in and say “No, that’s not right”.

The most important factor is that it’s not intentionally misleading us, unlike most other “hallucinating” authorities: websites, social media pages, and advertising that intend to mislead.

I think this highlights the importance of education, critical thinking, and collaboration.
Unfortunately, we are going to see a lot of unintentional misinformation.

Ironically, it seems there are more tools for education through GPT (big yikes) than for education about GPT (or LLMs in general).

Another issue I’m seeing come to life is “prompt optimization”. This idea/concept cannot exist yet. If a person cannot ask a question or prompt correctly, then they won’t be able to understand or make use of the answer. If you cannot ask the question correctly, learn more. Learn to prompt correctly, and learn what the right questions are to ask. I can appreciate using “prompt engineering” for structure. I can appreciate “spell-checking”. But “prompt optimization” is a demon that should be left alone. Like come on: “Hey, I heard you’re having issues with a potentially hallucinated answer. Instead of critically evaluating yourself, how about we apply the same, currently hallucinating model to assume what you want and improve your prompt?” :crazy_face:

Prompt optimization will simply lead to higher confidence in hallucinated answers that don’t truly reflect what the user actually intended (because they probably don’t even know).

I don’t think it will be tackled very well. I would like to believe that I question everything, and I have a loose, basic, but usable understanding of how GPT works, yet I still fall victim to these hallucinations.

I can’t imagine how this is affecting the generation that is too young to know, or too old to understand.

It also highlights how badly ChatCompletions needs LOGPROBS!!!
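If per-token log-probabilities were exposed, a client could at least flag low-confidence spans for the user to double-check. A minimal sketch of that idea, where the logprob values and the threshold are made up for illustration (not real API output):

```python
import math

# Hypothetical per-token log-probabilities, shaped like what a
# logprobs-enabled completion endpoint might return (values invented).
token_logprobs = [
    ("Unilock", -2.9),   # low confidence: several brands were plausible
    ("does", -0.05),
    ("offer", -0.08),
    ("the", -0.01),
    ("Blu", -0.3),
    ("60mm", -0.2),
    ("paver", -0.1),
]

def flag_low_confidence(tokens, threshold=0.5):
    """Return (token, probability) pairs where exp(logprob) < threshold."""
    return [(tok, round(math.exp(lp), 3))
            for tok, lp in tokens
            if math.exp(lp) < threshold]

print(flag_low_confidence(token_logprobs))  # → [('Unilock', 0.055)]
```

Here the fabricated company name stands out as the one token the model was effectively guessing on, which is exactly the signal a reader challenging a hallucination would want.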


Yes, there is that danger. However, AI as a genre of technology is not a monolith.

Some folks, organizations, businesses, or governments may sponsor AI systems that commit harm to populations by design, by accident, or both. Other groups may sponsor AI systems that are aimed at keeping harmful AI at bay. Hopefully, the latter will be successful.

What will prevail? Who knows? But we must try our best.


I see your point, but you are only addressing the effects potentially caused by external sources.

What about people who have 24/7 access to this service, and use it for information? Nothing external, just simply using the chat service and assuming it to be correct.


There’s always the problem of the effect on society of large populations who naively accept the authority of charismatic bad or simply faulty actors. This could increasingly include charismatic harmful AI as time goes on. We need organizations that are resourceful enough to use their own AI to help counter this phenomenon. They might need to resort to convincing and truthful propaganda to sway populations away from misinformation. These beneficial groups must be smart enough to avoid succumbing to their own AI false authority dilemma. This is extremely subtle and tricky. We all tend to think we know and do best.

Our current time in history does not offer much hope that it will be possible to expose false authority where it exists, but much determination on the part of those who care and understand the problem might lead to progress.


Your response was very insightful, thanks for responding.
Yes, history always tends to repeat itself, regardless of how much we believe that we have “evolved”.

Yes, I know that I suffer from this; it’s why I expose my opinions, and myself, in hopes that there are comments like yours to give me a wider perspective, or completely shatter it. Destroy and rebuild, baby.

Couldn’t agree more. ChatGPT should not and cannot be established as an authority. I know OpenAI has never labelled it as such, but looking at the tools and products being released, it’s certainly being interpreted that way.


I also have concerns over false authority. I have seen companies go under because they had “an algorithm” determine their pricing, without using common sense. People think computers “are correct” and therefore they must be trusted to make decisions for them. But since they’ve never created an algorithm for themselves, or even coded small AI models directly, it seems magical, and they have to “trust the experts” on everything.

This “trust the experts” mentality is going too far IMO. People need to educate themselves by getting their hands dirty, or have others get their hands dirty and have a truly trusted person explain certain things.

I have to do this with my family, friends, customers all the time when it comes to AI, automation, or “algorithms”.


Yes, common sense.

I recently had a discussion about a company that attempts to use algae to reduce its carbon emissions. They even tried hooking it directly into their exhaust stack. Surprise, surprise: algae dies pretty quickly when mixed with the other dangerous chemicals that come with the emissions.

So, the financial heads come running in and say, “Why don’t we just use the algae (oils) in beauty products? Massive money, folks.” High fives and toots all around.

I’m sure anyone with common sense can figure out the issue.

Didn’t matter. Loads of money spent, algorithms ran in self-contained environments (see: ignoring the chaos of reality), and they determined “Yes, great idea”.

Surprise surprise, the oils in the beauty products contained nasty chemicals and couldn’t be used.


I think when it comes to companies, we are heading towards more “trusted AI/algorithm expert advisers”. Because in many ways you can shape the AI/algorithms to perform well for a given company. But it’s usually an iterative process, not a one-time slam-dunk.

But when it comes to public, I think education, or at least having a person or two in your life that might know something about the topic is good.

It would suck if we lived in a world like the movie Idiocracy, where we have all this AI/automation, but the people who created it and understand it and its limitations are long gone.


Thanks, however in actuality, it’s largely speculation, much of which will likely fall by the wayside, unmaterialized in the long run. With that expectation, humility seems in order, for me anyway.

As the informational landscape changes in directions that are to some degree hard to anticipate, we may need to perform lots of ad hoc work in order to maintain some view of reality. With AI, I don’t think we can identify a reliable compass that can keep us on a very predictable course that will hold over time. Instead, it will be a lot of adaptation to events as they unfold. As some of us already do in order to figure out what’s going on above the din, we need to continue to comb a polyglot array of media and technology, combined with our common sense, in order to glean some semblance of truth out of it all.

To be sure, the speculation has value. With enough speculation to look back upon later, some of what does happen in the future will seem familiar, which can help us adapt to it.

Does anyone think AI in the long term is going to have a massive false authority problem?

No. In the long run, AI simply will be THE authority, and rightly so.

Within a few years, we will strictly reject human advice on any reasonably serious matter in favor of the immeasurably better robot work.

Yikes, I’d better stay on its good side! :fearful:


Yes, we could count that as a good compass, so long as we are able to adapt its methodologies to conditions as they develop over time. So far, so good, but the politics of it are getting adversarial nowadays.


So, in summary, yes, AI is going to have a massive false authority problem. And, yes, education in the broad sense, inside and outside formal schools, is the main solution that is needed at this time. We must get people into the frame of mind of not respecting perceived authority overly much. They should be coached to observe for themselves that an effective strategy is …


Well, it sounds like it could be a permanent negative feature of chatbots. Because it might have the capability to gather huge amounts of information on a topic, definitely much more than a human being, but it hasn’t got the sense (like a human) to choose the right thing or align it all according to the requirement. I think it is necessary to set some sort of criteria to ensure the bot clearly understands the topic before answering. To make sure it got all the right prompts for the question, the bot should be able to judge them against a set of question prompts. Say the programmers have set 6 points for judging whether a question makes enough sense to answer correctly: if the bot finds that the prompts provided by the user do not cover these 6 points, then the bot must ask about the missing points before answering. I think this way it may be workable to make it perfect gradually.
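The gating idea above can be sketched in a few lines. The six “points” here are hypothetical placeholders invented for illustration; a real system would define them per domain and would need the much harder step of extracting them from free text:

```python
# A minimal sketch of the "judge the prompt before answering" idea:
# refuse to answer until the user's prompt covers all required points.
# The six point names are hypothetical placeholders, not a real schema.
REQUIRED_POINTS = ["topic", "product", "manufacturer", "region",
                   "time_frame", "intended_use"]

def missing_points(prompt_fields):
    """Return the required points the user's prompt did not cover."""
    return [p for p in REQUIRED_POINTS if p not in prompt_fields]

def gate(prompt_fields):
    """Either clear the prompt for answering or ask for what's missing."""
    missing = missing_points(prompt_fields)
    if missing:
        return "Before I answer, could you clarify: " + ", ".join(missing) + "?"
    return "OK to answer."

print(gate({"topic": "pavers", "product": "Blu 60mm"}))
```

In the paver example from earlier in the thread, such a gate would have forced the question “which manufacturer do you mean?” instead of letting the model guess one.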

Is it possible that AI will become like the church in the Middle Ages, where many things are judged by AI?


There are a wide variety of interesting opinions represented here. Perhaps that is an indication of a high degree of uncertainty regarding what our actual future with AI will be. Faced with such uncertainty, how should we strive to maximize the potential benefits while minimizing the potential threats? I think @RonaldGRuckus hit the target by offering this word:

This can be offered in many venues, including schools, this forum, online exercises, workshops, individual explorations of the topic online, printed media, and broadcasts on radio and television. Let’s do it! By keeping this topic in focus, we might be best prepared to steer whatever may occur in a favorable direction.

Thanks for initiating this discussion, @jonathanrayreed. It forms a component of the solution.


Yes, I also have found many errors in ChatGPT literature references, and it will “correct” itself when challenged, sometimes with another incorrect answer. The question is: what criteria do people use to determine “authority”? TBH, I don’t know if accuracy is the top criterion, or even in the top three. For the general population, “serving me” is probably the number one criterion. Is it convenient (fast, easy, economical), and will it give the appearance of accuracy? In other words, if my business partner/professor/sponsor/underwriter did not catch the error(s), then I actually do not care; I will continue to regard AI as an authority, particularly if my colleagues also give it this rank.
