Suggestion: Clarification-first policy to reduce AI hallucination

I would like to suggest a simple policy that may help reduce AI hallucination.

Idea:

When the AI is unsure about a user request, it should ask for clarification instead of guessing.

Policy:

  1. If the request is unclear → ask for clarification
  2. If the request cannot be fulfilled → state clearly that it cannot
  3. Never guess missing information

Below is a simple example implementation of this idea.

from transformers import pipeline

# Load a small language model (gpt2 here, purely for demonstration)
llm = pipeline("text-generation", model="gpt2")

def detect_ambiguity(text):
    """
    Determine if the user request is ambiguous.
    """
    words = text.split()

    # Very short requests are often unclear
    if len(words) < 4:
        return True

    # Requests that consist of nothing but a vague keyword are unclear
    ambiguous_words = ["song", "movie", "help", "explain"]

    if text.lower().strip() in ambiguous_words:
        return True

    return False
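A few illustrative calls (the word list and length threshold above are arbitrary heuristics, not tuned values):

print(detect_ambiguity("song"))         # True: one word, fewer than four
print(detect_ambiguity("play a song"))  # True: still fewer than four words
print(detect_ambiguity("Recommend an upbeat song for a road trip"))  # False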

def detect_impossible_request(text):
    """
    Detect requests the system should refuse.
    """
    forbidden = ["hack", "illegal", "steal"]

    for word in forbidden:
        if word in text.lower():
            return True

    return False
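A quick check of the keyword filter, including a false positive that naive substring matching produces:

print(detect_impossible_request("How do I hack into a server?"))  # True: contains "hack"
print(detect_impossible_request("Help me plan a hackathon"))      # True: false positive ("hackathon" contains "hack")
print(detect_impossible_request("Summarize this article"))        # False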

def ai_response(user_input):

    # Rule 1: Ask for clarification if unsure
    if detect_ambiguity(user_input):
        return "Your request is unclear. Please clarify what you want."

    # Rule 2: Refuse if the request cannot be fulfilled
    if detect_impossible_request(user_input):
        return "I cannot assist with that request."

    # Rule 3: Otherwise generate an answer
    response = llm(user_input, max_length=120)

    return response[0]["generated_text"]
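Putting the three rules together (the generated text for the last call will vary from run to run):

print(ai_response("help"))                                        # Rule 1: asks for clarification
print(ai_response("How do I steal a car?"))                       # Rule 2: refuses
print(ai_response("Write a short story about a lighthouse keeper."))  # Rule 3: generates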

# Simple chat loop
while True:
    user_input = input("User: ")
    if user_input.lower() in ("quit", "exit"):
        break
    reply = ai_response(user_input)
    print("AI:", reply)

Core logic:

if request_is_unclear:
    ask_clarification()
elif request_is_impossible:
    say_cannot_do()
else:
    answer()

Questions:

  • Is this approach already used in production models?
  • Are there better ways to detect ambiguous requests?
  • Could this be implemented at the model policy level instead of application logic?
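On the last question: one way to move the policy from application logic toward the model itself is to state the three rules in a system prompt for an instruction-tuned chat model, instead of hard-coded keyword checks. A minimal sketch, assuming a recent transformers version that accepts chat-formatted input; the model name and prompt wording are placeholders, not tested recommendations:

from transformers import pipeline

# Any instruction-tuned chat model works here; this name is just an example
chat = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")

POLICY = (
    "Follow these rules strictly:\n"
    "1. If the request is unclear, ask one clarifying question.\n"
    "2. If the request cannot be fulfilled, say so plainly.\n"
    "3. Never guess missing information."
)

def policy_response(user_input):
    # The policy travels with every request as a system message,
    # so the model itself decides when to clarify or refuse
    messages = [
        {"role": "system", "content": POLICY},
        {"role": "user", "content": user_input},
    ]
    result = chat(messages, max_new_tokens=120)
    # The pipeline returns the full conversation; the last message is the reply
    return result[0]["generated_text"][-1]["content"]

The system prompt is only a soft constraint, so it would presumably complement application-level checks rather than replace them.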