GPT-4 keeps lying instead of saying "I don't know"!

I’m using embeddings to retrieve relevant information to answer client messages.

Unfortunately, as can be seen in the examples below, GPT-4 often makes up answers when it doesn’t know them.

The prompt:

You are an AI assistant responsible for responding to client messages for a surfing club.

Use the following pieces of context to answer the user’s message at the end. If you don’t know the answer, just say “I don’t know” and explain why, don’t try to make up an answer.


User’s message: {new_message_string}
Helpful Answer:

What can I do to make it more reliable?

Three examples:
You are an AI assistant… + don’t make up an answer

You are an AI assistant… + don’t make up an answer + no context at all = making stuff up.
[screen shot]

You are an AI assistant… + don’t make up an answer + the right context = Right!
[screen shot]

With GPT-3.5-turbo it does work (but GPT-4 is much better for another requirement of this prompt: Hebrew).

Issues with your prompt:

  1. Instead of assigning an identity in the prompt, such as “an AI assistant responsible for…”, simply describe the task. This helps avoid creating a fictional scenario, which may lead to inaccurate information.
  2. No need to use “explain why” and “don’t try to make up”. It’s better to avoid negative phrasing.
  3. Use a more formal style, for example: “Please respond to the user’s question based on the given information. If you cannot find the answer, kindly state ‘Answer cannot be found’.”

Use the following instead:

Answer the question based on the context below, and if the question can't be answered based on the context, say "I don't know" 

Context: {context}


Question: {question}
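As a minimal sketch, the template above could be filled in like this (the example context and question are made up for illustration; in practice `context` comes from your embedding search and `question` is the client’s message):

```python
# Sketch of filling in the suggested template.
# The example strings below are placeholders, not real club data.
PROMPT_TEMPLATE = (
    "Answer the question based on the context below, and if the question "
    "can't be answered based on the context, say \"I don't know\"\n\n"
    "Context: {context}\n\n"
    "Question: {question}"
)

def build_prompt(context: str, question: str) -> str:
    """Fill the retrieval-QA template with the retrieved context and the user's question."""
    return PROMPT_TEMPLATE.format(context=context, question=question)

prompt = build_prompt(
    context="Surf lessons run daily from 9am to 5pm.",
    question="When do surf lessons start?",
)
print(prompt)
```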


This might also be a good application of chain-of-thought prompting (in addition to the good suggestions by @kevin6 ). You can instruct the model to start by thinking through the problem in tags (or however you want to structure it), specifically laying out (1) whether the information exists, (2) what the most relevant excerpt(s) are, if available, and (3) how to respond based on (1) and (2).

I’ve found this to be highly effective when trying to get the model to correctly follow conditional statements.
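One way to sketch that structure (the `<thinking>`/`<answer>` tag names are arbitrary conventions I’m assuming here, not anything the API enforces):

```python
# Hypothetical chain-of-thought wrapper around the retrieval prompt.
# The model is instructed to reason inside <thinking> tags before
# giving its final reply inside <answer> tags.
COT_TEMPLATE = """Answer the question based on the context below.

Context: {context}

Question: {question}

First, inside <thinking> tags, lay out:
(1) whether the context contains the needed information,
(2) the most relevant excerpt(s), if any, and
(3) how you will respond based on (1) and (2).
Then give your final reply inside <answer> tags.
If the context does not contain the answer, the reply must be "I don't know"."""

prompt = COT_TEMPLATE.format(
    context="Membership costs 50 NIS per month.",
    question="How much is membership?",
)
print(prompt)
```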


This works great! I appreciate the help!

I want the AI bot to answer in context to the chat history and the extra context I pulled using the embedding search. I’m debating a few approaches:

  1. Sending the chat history as messages (regular use of the chat completion API) with a new user message “answer using this context {embedding_retrieved_context} in a friendly way.”
  2. Sending just the standalone message and the embedding retrieved context. (Maybe also use the chat completion API, but with one “user” message.) I’m afraid some of the chat history context will be lost :confused:
  3. “Based on this chat history {chat_history} and this retrieved context {embedding_retrieved_context}, provide a conversational answer to this user’s new message.” Basically, placing all the chat history in one user message.

(side note: these will not be the actual prompts)

Do you perhaps have any intuition about which approach might work best?
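For what it’s worth, here is a minimal sketch of what the messages array for approach 1 might look like. All the message text and helper names are placeholders of my own, not your actual prompts:

```python
# Approach 1 sketch: keep the real chat history as-is and append one
# extra user message carrying the embedding-retrieved context.
chat_history = [
    {"role": "user", "content": "Hi, do you rent boards?"},
    {"role": "assistant", "content": "Yes, we rent boards by the hour."},
]

def build_messages(chat_history, new_message, retrieved_context):
    """Return a messages list for the chat completion API."""
    system = {
        "role": "system",
        "content": (
            "You answer client messages for a surfing club. "
            'If the answer is not in the provided context, say "I don\'t know".'
        ),
    }
    context_msg = {
        "role": "user",
        "content": (
            f"Context: {retrieved_context}\n\n"
            f"New message: {new_message}\n"
            "Answer in a friendly way, using only the context and chat history."
        ),
    }
    return [system, *chat_history, context_msg]

messages = build_messages(
    chat_history,
    new_message="How much is an hour?",
    retrieved_context="Board rental: 40 NIS per hour.",
)
```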


Assuming we are trying to build something like ChatGPT, we need to define the tasks that the user and the assistant are trying to accomplish.

The user is trying to get information from the assistant. The assistant tries to give information to the user.

Use the user’s and assistant’s messages array as it is when the user asks questions; it is already in the format of a conversation. Basically, the user provides information, and the assistant should respond based on the information provided.

Scenario 1:
user: asks questions that need to be answered based on previous questions
assistant: provides answers

Scenario 2:
user: asks questions that need to be answered based on retrieved documents
assistant: provides answers

We need an intent classifier to sort the user’s input into one of the above scenarios. We can use a simple API call to classify the intent and trigger the information-retrieval database, or any other database.

1. Search the related chatbot history and return it as the messages array of the conversation.

2. Find the related documents and return them along with the messages array of the conversation.
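A sketch of that classification step, using a short prompt rather than a trained classifier. The label names and the `route` helper are my own illustration, not part of any library:

```python
# Hypothetical intent-classification prompt: ask the model to label the
# user's input as one of the two scenarios before deciding what to retrieve.
CLASSIFIER_PROMPT = """Classify the user's message with exactly one label:
- HISTORY: it must be answered from the previous conversation turns
- DOCUMENTS: it must be answered from retrieved documents

Message: {message}
Label:"""

def route(label, message):
    """Dispatch to the right retrieval path based on the predicted label."""
    if label == "HISTORY":
        return ("search_chat_history", message)  # scenario 1
    return ("search_documents", message)         # scenario 2

# Example: after the model returns "DOCUMENTS" for a factual question,
# we retrieve documents rather than past chat turns.
path, query = route("DOCUMENTS", "What are the opening hours?")
```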

It’s better to pair each user message with the assistant’s reply and embed them together.
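A sketch of that pairing step, assuming the flat messages list alternates user/assistant turns (`get_embedding` in the comment is a stand-in for whatever embedding call you use):

```python
# Pair each user turn with the assistant's reply before embedding,
# so a retrieved "memory" carries both sides of the exchange.
def pair_turns(messages):
    """Group a flat, alternating messages list into user/assistant text pairs."""
    pairs = []
    for i in range(0, len(messages) - 1, 2):
        if messages[i]["role"] == "user" and messages[i + 1]["role"] == "assistant":
            pairs.append(
                f"user: {messages[i]['content']}\n"
                f"assistant: {messages[i + 1]['content']}"
            )
    return pairs

history = [
    {"role": "user", "content": "Do you have beginner lessons?"},
    {"role": "assistant", "content": "Yes, every morning at 9am."},
]
texts_to_embed = pair_turns(history)
# embeddings = [get_embedding(t) for t in texts_to_embed]  # your embedding call here
```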
