A smarter chatbot with memory (idea)

The lack of any form of persistent memory is a problem that may be hamstringing chatbots built on GPT-3 (this post does not apply to ChatGPT).

It seems to me that the following may help solve this issue.
I’m posting it here in case anyone wants to give it a try. (I would love to hear how it goes)

When a user is chatting with the bot, the bot's creator could accumulate the user's question and answer pairs in a local database (one record per question/answer pair).

As each record is saved, create an embedding vector to go with the QA pair. You should also store the userID (important).
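
Here is a minimal sketch of that storage step, assuming the pre-1.0 openai Python library and the text-embedding-ada-002 model (both assumptions; any embedding model would do). A real implementation would write to a persistent database rather than an in-memory list.

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

history = []  # one record per question/answer pair; use a real database in practice


def embed(text):
    # Create an embedding vector for the text (model choice is an assumption)
    resp = openai.Embedding.create(input=[text], model="text-embedding-ada-002")
    return resp["data"][0]["embedding"]


def save_qa(user_id, question, answer):
    history.append({
        "user_id": user_id,  # important: lets us filter to the current user later
        "question": question,
        "answer": answer,
        "embedding": embed(question + "\n" + answer),
    })
```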

The next time the user asks a question, create an embedding vector for that question and run a semantic search over the saved question/answer pairs to find the most relevant QAs from the history (it may surface pairs from several prompts ago). Make sure you only search records for the current userid.
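
A sketch of that retrieval step, reusing the embed helper and history list from the sketch above and ranking the current user's records by cosine similarity:

```python
import numpy as np


def most_relevant_qa(user_id, question, top_k=4):
    # Embed the new question, then rank only the current user's stored pairs
    q_vec = np.array(embed(question))
    candidates = [r for r in history if r["user_id"] == user_id]

    def cosine(record):
        v = np.array(record["embedding"])
        return float(np.dot(q_vec, v) / (np.linalg.norm(q_vec) * np.linalg.norm(v)))

    return sorted(candidates, key=cosine, reverse=True)[:top_k]
```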

Now build the prompt in the traditional few-shot format (fill in the blanks):


You are a chatbot that is friendly, helpful {etc. etc. etc}

Q: {the most relevant qa prompt from the semantic search of the history}
A: {the matching answer from the semantic search}

Q: {the next most relevant qa prompt from the semantic search of the history}
A: {the matching answer from the semantic search}

Q: {the next most relevant qa prompt from the semantic search of the history}
A: {the matching answer from the semantic search}

Q: {the next most relevant qa prompt from the semantic search of the history}
A: {the matching answer from the semantic search}

Q: {the user's current prompt}
A:
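
Tying it together, here is a rough sketch of filling in those blanks and calling a GPT-3 completion model (text-davinci-003 is an assumption), reusing the helpers from the sketches above:

```python
SYSTEM = "You are a chatbot that is friendly, helpful and polite.\n\n"


def answer_question(user_id, question):
    examples = most_relevant_qa(user_id, question)
    prompt = SYSTEM
    for r in examples:  # see the notes below about reversing this order
        prompt += f"Q: {r['question']}\nA: {r['answer']}\n\n"
    prompt += f"Q: {question}\nA:"

    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=256,
        temperature=0.7,
        stop=["\nQ:"],
    )
    answer = resp["choices"][0]["text"].strip()
    save_qa(user_id, question, answer)  # the "memory" grows as the chat continues
    return answer
```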


In case you didn’t notice, the example questions and answers are drawn from the user's own history rather than a static set of examples. By using semantic search, we have found historical chat records that correlate strongly with the current question/prompt.

In an extreme case, the old records could be several hours old.

This provides the bot with access to chat “memories”, and it should also reduce drift.

I suspect this would also help keep the user within the domain of discussion (especially when combined with a knowledge base, as described in my notes below).

Notes:

  • When you first start out, you may need a static set of examples to seed the conversation.

  • You may get better results if you reverse the order of the pairs so the most relevant examples are at the bottom of the list.

  • For storage reasons, you could clean out the user's history on a periodic basis. This could be at the end of the session or after a period of time, depending on the use case.

  • You may also want to set a minimum similarity score for the semantic search so you don't use QA pairs that are irrelevant (especially early in the conversation).

  • You could supplement this with a global set of QA pairs or knowledge that is used for all users. When you do the search, look at records for the userid AND the global set at the same time. The extra QA pairs would act as a knowledge base of some sort. You could even create it by having a real user (a staff member) chat with the bot, but make their inputs globally searchable (and allow the staff member to edit/correct/delete the responses until the bot gets the hang of the topics). A sketch combining this with the minimum-score idea above follows this list.
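
Here is a sketch of that combined search, reusing the earlier helpers. GLOBAL_USER is a hypothetical sentinel id for the shared knowledge-base records, and the 0.8 threshold is just an assumed starting point to tune:

```python
GLOBAL_USER = "__global__"   # hypothetical id for shared knowledge-base records
MIN_SIMILARITY = 0.8         # assumed starting point; tune for your use case


def most_relevant_qa_with_global(user_id, question, top_k=4):
    q_vec = np.array(embed(question))
    # Search the current user's records AND the global knowledge-base records
    candidates = [r for r in history if r["user_id"] in (user_id, GLOBAL_USER)]

    scored = []
    for r in candidates:
        v = np.array(r["embedding"])
        score = float(np.dot(q_vec, v) / (np.linalg.norm(q_vec) * np.linalg.norm(v)))
        if score >= MIN_SIMILARITY:  # drop irrelevant pairs, especially early on
            scored.append((score, r))

    scored.sort(key=lambda item: item[0], reverse=True)
    return [r for _, r in scored[:top_k]]
```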

Thanks for reading.

I welcome your thoughts and any ideas to fine-tune this concept further. I would especially love to hear from anyone who gives it a go.

