Questions about roles and chat history

Ok so I can include an array of questions and responses inside the messages array of the chat.completions.create({}) call… cool…

Now, what would be the role of the bot itself when I add the array of conversation history?

^^^ I ask this because I am also working with documentation coming back from Pinecone embedding responses… and wouldn’t the feedback from Pinecone be included under the “assistant” role?

So far the code looks like this:

const result = await openai.chat.completions.create({
  messages: [
    { role: "user", content: message },
    { role: "system", content: instructions },
    { role: "assistant", content: metadataString },
  ],
  model: "gpt-3.5-turbo",
  max_tokens: 150,
  temperature: 0.3,
});

where:
message is the input from the user… “how are you doing today Mr. bot?”
instructions is the system prompt given to the bot… “you are an assistant bot”
metadataString is the feedback from Pinecone… “here is what I got back from the embedding”

Now… under which role would this output below go?

console.log(result.choices[0].message.content);

Let’s put the system message first:

const result = await openai.chat.completions.create({
  messages: [
    { role: "system", content: instructions },
    { role: "user", content: message },
    { role: "assistant", content: metadataString },
  ],
  model: "gpt-3.5-turbo",
  max_tokens: 150,
  temperature: 0.3,
});

In which role would the answer from the bot go?

Sorry about the response above being unrelated to what you asked, although it is correct that for a basic chatbot you should have (system: programming, user/assistant chat history, and the most recent user input).

For knowledge injection, none of the obvious spots is ideal: automated retrieval placed in an assistant role, in a previous user role, or as part of the current user role all read as something that was actually said in the conversation. Of these, the assistant role is probably best, and it can carry a prefix like “here’s knowledge I retrieved relevant to this conversation”.
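For example, a quick sketch in the JavaScript style used earlier in the thread (retrievedChunks, chatHistory, and latestUserInput are placeholders for whatever your retrieval step and stored history provide):

const messages = [
  { role: "system", content: instructions },
  ...chatHistory, // prior user/assistant turns
  {
    role: "assistant",
    // prefix so the model treats this as retrieved knowledge, not its own claim
    content:
      "Here's knowledge I retrieved relevant to this conversation:\n" +
      retrievedChunks,
  },
  { role: "user", content: latestUserInput },
];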

OpenAI should have included a “documentation” role (for RAG) from the start.

One of the more intriguing roles to use is “function” - as if a function was called, but without actually including a real function. Function messages are injected after the user input and are understood by the training of the latest chat models.

response = openai.ChatCompletion.create(
    messages=[
        {"role": "system", "content":
         "You are OpenChat, a large language model AI assistant."
         "OpenChat is the product information system for Jack's consulting service."
         "AI pretrained knowledge cutoff 2021-09-01."
         },
        {"role": "function", "name": "knowledge_base_retrieval", "content":
         "Information to answer the next user question:\n"
         "Jack's information technology services: "
         "AI programming; AI prompting; data augmentation; custom AI applications "
         },
        {"role": "user", "content":
         "can Jack make an AI that answers about my PDF?"
         }
        ],
    model="gpt-3.5-turbo-0613",
    max_tokens=300,
    temperature=0.2,
    # functions=function_list
    )

Same question and answer as last time: How can I make the bot a little bit smarter? - #6 by _j

I should add: if trying the function role, the AI will want to answer right from that information. It is only practical if you inject a return only when there are very high-quality matches.
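For instance, you might gate the injection on the similarity score. A sketch (back in JavaScript), assuming a Pinecone query response with a matches array, source text stored under metadata.text, and an arbitrary 0.85 cutoff:

const SCORE_THRESHOLD = 0.85; // arbitrary; tune for your embeddings

// keep only the high-quality matches from the Pinecone query response
const goodMatches = queryResponse.matches.filter(
  (m) => m.score >= SCORE_THRESHOLD
);

const messages = [{ role: "system", content: instructions }];

if (goodMatches.length > 0) {
  // inject the function-style message only when retrieval looks trustworthy
  messages.push({
    role: "function",
    name: "knowledge_base_retrieval",
    content: goodMatches.map((m) => m.metadata.text).join("\n"),
  });
}

messages.push({ role: "user", content: message });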


@_j summed it up nicely.

You’ll need to either inject the context directly into the next assistant response, or offload it into a function call that will “prettify” the context for chat format.

You could hack the context into a previous assistant message and then hide that from the user, I suppose, but I haven’t tried that.
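Untested sketch of that idea: keep one list for the UI and a separate one for the API (retrievedContext, instructions, and latestUserInput are made-up names):

// what the user actually sees in the chat window
const displayHistory = [
  { role: "user", content: "Hey, bot. My name is Mike" },
  { role: "assistant", content: "Hello, Mike" },
];

// what the API sees: the same history plus a hidden context message
const apiMessages = [
  { role: "system", content: instructions },
  ...displayHistory,
  // hidden from the UI; only the model reads this
  { role: "assistant", content: "Context I looked up:\n" + retrievedContext },
  { role: "user", content: latestUserInput },
];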

Check out https://www.stack-ai.com/ for some pretty powerful visual tools to get inspiration and a playground for orchestration.

A bit of clarification: you’d want to use a previous assistant role, before the most recent user question. Sending a conversation with “assistant” as the last role will get the assistant to continue writing more of what the assistant just “said”.

Ah, yeah that makes sense. Append some chunk of context before the question is asked.

I admittedly haven’t built a chat with RAG, only narrow flows that don’t have back and forth conversation.

Ah… OpenAI doesn’t have a place to stick documentation… so putting documentation in the “assistant” role is what I am doing now.

Maybe something like this then:

const previousConversation = [
  { role: "user", content: "Hey, bot. My name is Mike" },
  { role: "assistant", content: "Hello, Mike" }, // this would be the actual previous response from the bot
];

const result = await openai.chat.completions.create({
  messages: [
    { role: "system", content: instructions },
    ...previousConversation,
    { role: "assistant", content: metadataString }, // the feedback from Pinecone, which in this case would not include any information about the current conversation
    { role: "user", content: "Can you tell me my name?" },
  ],
  model: "gpt-3.5-turbo",
  max_tokens: 150,
  temperature: 0.3,
});
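And after each call, I assume I would push the latest exchange back into previousConversation so the next request sees it:

// save the exchange for the next turn; the bot's answer goes under "assistant"
previousConversation.push(
  { role: "user", content: "Can you tell me my name?" },
  { role: "assistant", content: result.choices[0].message.content }
);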