How can I make the bot a little bit smarter?

For example… if I say “hello” to the bot, it responds with “sorry, I don’t have that information”.

Ok so… do I need to feed Pinecone every little detail about what the bot should respond with?

Here is the code sample:

const template = `You are a funny assistant bot. Answer the question as truthfully and accurately as possible using the provided text. If the answer is not contained within the text below, say "Sorry, I don't have that information". Text: ${metadataString}`;

const result = await openai.chat.completions.create({
  messages: [
    { role: "user", content: message },
    { role: "system", content: template },
  ],
  model: "gpt-3.5-turbo",
});

Now, if I tell it “tell me a joke”, then it tells me a joke.

You need to explain and justify the instructions you give the AI. Why would a “funny assistant” be required to refuse whenever the answer to a question is not already given to it? Should it be funny, or should it deny that part of its alignment too?

Why are you supplying the roles in the opposite order (again)? The system message goes first, the user question after it.

The form of an AI that knows its job and receives new knowledge:
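A minimal sketch of that form in Python (pre-1.0 openai-python style, to match the example further down this thread); metadata_string stands in for the Pinecone retrieval and message for the user input from the code above:

import openai  # pre-1.0 openai-python SDK

metadata_string = "Jack's services: AI programming; custom AI applications"
message = "hello"

# System role first, so the AI knows its job before anything else arrives;
# the retrieved text is offered as reference material, not as a gag order.
response = openai.ChatCompletion.create(
    messages=[
        {"role": "system", "content":
         "You are a funny assistant bot. Chat normally, and use the "
         "reference text below when it is relevant.\n\n"
         "Reference text:\n" + metadata_string
         },
        {"role": "user", "content": message},
        ],
    model="gpt-3.5-turbo",
    )
print(response["choices"][0]["message"]["content"])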


This topic is covered in the short courses section with Andrew Ng and OpenAI staff here

Oh boy… I think I might just follow your advice and put the Pinecone feedback under the assistant section.

Anyway… for now I do have the knowledge lookup (aka the Pinecone feedback) in the variable metadataString… is it wrong to pass the variable the way I did it? Is that an “old way” to do it? … I ask because apparently I have seen code samples doing it that way, and well, of course on the internet everybody is always right :stuck_out_tongue: … I see that using the Assistant the knowledge base is strictly separated from any prompts, and thinking about it, it might be better…

More system prompt = more bot confusion about how it is supposed to operate. Instruction-following from the system role is already degraded in the current gpt-3.5-turbo.

You can do whatever works, as there is no ultimate “here’s how to do it” guide.

What is in the OpenAI GPT guide, putting some text along with the user question, is odd, perhaps adapted from GPT-3: it looks as if the user supplied the very text they then ask about, or the text itself might be taken as a user question, and you’d also have to yoink that injection back out of the chat history afterwards, leaving behind only the bare user question.
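For illustration, that pattern looks something like this (a sketch with invented variable names, not the guide’s exact text):

retrieved_text = "Jack's services: AI programming; custom AI applications"
user_question = "Can Jack make an AI that answers about my PDF?"

# The guide's pattern: retrieved text merged into the same user turn
# as the question the user actually typed.
messages = [
    {"role": "user", "content":
     "Answer using the text below.\n\n"
     + "Text: " + retrieved_text + "\n\n"
     + "Question: " + user_question
     },
    ]

# Before saving the chat history, the injection has to be stripped back
# out, leaving only the bare user question behind.
history_entry = {"role": "user", "content": user_question}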


One of the more intriguing roles to use is “function”, as if a function had been called. OpenAI should have included a “documentation” role (for RAG) from the start, but the return from a function kind of serves the same purpose. Let’s try it out without actually using a function:

import openai  # pre-1.0 openai-python SDK; the "function" role works with -0613 models

response = openai.ChatCompletion.create(
    messages=[
        {"role": "system", "content":
         "You are OpenChat, a large language model AI assistant. "
         "OpenChat is the product information system for Jack's consulting service. "
         "AI pretrained knowledge cutoff 2021-09-01."
         },
        # Retrieved knowledge, injected as if a function had already returned it:
        {"role": "function", "name": "knowledge_base_retrieval", "content":
         "Information to answer the next user question:\n"
         "Jack's information technology services: "
         "AI programming; AI prompting; data augmentation; custom AI applications "
         },
        {"role": "user", "content":
         "can Jack make an AI that answers about my PDF?"
         }
        ],
    model="gpt-3.5-turbo-0613",
    max_tokens=300,
    temperature=0.2,
    # functions=function_list  # no function definitions are actually supplied
    )

One thing about the function role is that it cannot be directly asked about or continued as if it were a previous user or AI answer. It serves the purpose of providing information, and the answer is even a bit more satisfactory than if I provide the function that an AI might have called:

“Yes, Jack can create an AI application that can analyze and extract information from PDF documents. This AI can answer questions related to the content of the PDF, such as summarizing the document, extracting specific data, or providing insights based on the information in the PDF. Jack’s expertise in AI programming and custom AI applications can be leveraged to develop such a solution for you.”

When supplying external information to the model, you need to think about the role the information plays in the conversation.

  1. For context in the User Role: If the database content represents information that the user is providing or sharing with the model, it should go in the user role. For example, if the user is sharing an excerpt from a document and asking the model a question based on it, the content should be formatted as coming from the user. Most of the time, vector-retrieved information should go here, unless there is a specific effect you are trying to obtain (both placements are sketched after this list).

  2. For context in the Assistant Role: If the database content represents information the model should already “know” or “remember” from previous turns in the conversation, it should go in the assistant role. This would be like reminding the model of what it said or knew earlier, e.g., if you needed the model to think it had provided a list of recommendations, and you are retrieving those recommendations from the database to provide context for a follow-up question.
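A side-by-side sketch of those two placements, with made-up content (same pre-1.0 openai-python style as above):

import openai

retrieved = "Jack's services: AI programming; custom AI applications"

# 1. Retrieved text framed as material the user is sharing
#    (the usual choice for vector-retrieved context):
as_user_context = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content":
     "Here is some reference text:\n" + retrieved + "\n\n"
     "Can Jack make an AI that answers about my PDF?"
     },
    ]

# 2. Retrieved text framed as something the assistant already said or knew:
as_assistant_memory = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "assistant", "content": "As I mentioned earlier: " + retrieved},
    {"role": "user", "content": "Which of those services fits a PDF bot?"},
    ]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo", messages=as_user_context)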