Specialized Chatbot with GPT-3

It’s understandable to feel a bit confused with all the different approaches. Let’s break it down.

If you want your chatbot to have memory and be able to provide contextually relevant responses, you can use a combination of system prompts and embeddings-based retrieval. The system prompt sets the behavior and context for the conversation, while embeddings-based retrieval lets you inject specific, relevant knowledge into each exchange so the chatbot can respond appropriately in different situations.

For example, you can start with a system prompt like this:

You are “Motorhead”, the AI assistant of the “pistons and sprockets” website, which is for car enthusiasts.
You will only answer questions about automobiles or automotive technology; all other knowledge domains or AI uses are to be politely declined.
Examine the user input closely: if the question is unclear or could be phrased better, ask for clarification about the details instead of answering.
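As a concrete sketch, here is one way to wrap that system prompt in the Chat Completions message format before sending it to the API (the `build_messages` helper and the exact prompt wording are illustrative, not a fixed recipe):

```python
# The system prompt from above, which pins down the chatbot's persona
# and the rules it must follow.
system_prompt = (
    'You are "Motorhead", the AI assistant of the "pistons and sprockets" '
    "website, which is for car enthusiasts.\n"
    "You will only answer questions about automobiles or automotive "
    "technology; politely decline all other knowledge domains.\n"
    "Examine the user input closely: if the question is unclear or could be "
    "phrased better, ask for clarification instead of answering."
)

def build_messages(user_input: str) -> list[dict]:
    """Return a messages list in the Chat Completions format:
    the system prompt first, then the user's turn."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("What kind of fuels do automobile engines take?")
```

This list is what you would pass as the `messages` argument of a chat completion call; the system role keeps the behavioral instructions separate from the user's actual question.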

Then, you can insert the results of embeddings-based retrieval as if they were part of the chat history:

User: What kind of fuels do automobile engines take?

A background process retrieves semantically similar content by embedding the user query and comparing it against relevant existing content that was previously embedded.
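That comparison step is typically a cosine-similarity ranking over the stored vectors. Here is a minimal sketch with a toy, hand-made "knowledge base" — real embeddings would come from an embeddings model and have far more dimensions, and `knowledge_base` / `retrieve` are hypothetical names for illustration:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Previously embedded content: (text, embedding) pairs.
# These 3-dimensional vectors are toy values for illustration only.
knowledge_base = [
    ("Gasoline and diesel are the most common automotive fuels.", [0.9, 0.1, 0.0]),
    ("Our forum rules prohibit off-topic posts.",                 [0.0, 0.2, 0.9]),
]

def retrieve(query_embedding: list[float], top_k: int = 1) -> list[str]:
    """Return the top_k stored texts most similar to the query embedding."""
    ranked = sorted(
        knowledge_base,
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:top_k]]

# A toy embedding of the user's fuel question: it points in roughly the
# same direction as the fuels passage, so that passage is retrieved.
results = retrieve([0.8, 0.2, 0.1])
```

In production you would replace the toy vectors with calls to an embeddings endpoint and, at scale, swap the linear scan for a vector index.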

The results from the embeddings-based retrieval can be appended to the system prompt like this:

Please utilize the following information in your response to the user:
>>>INSERT CONTENT from embeddings-based retrieval<<<
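Mechanically, that appending step is just string assembly. A small sketch (the `augment_system_prompt` helper is a hypothetical name, and the bullet formatting of the retrieved passages is one choice among many):

```python
def augment_system_prompt(system_prompt: str, retrieved: list[str]) -> str:
    """Append retrieved passages to the system prompt so the model
    can ground its answer in them."""
    context = "\n".join(f"- {passage}" for passage in retrieved)
    return (
        f"{system_prompt}\n\n"
        "Please utilize the following information in your response to the user:\n"
        f"{context}"
    )

prompt = augment_system_prompt(
    "You are Motorhead, an assistant for car enthusiasts.",
    ["Gasoline and diesel are the most common automotive fuels."],
)
```

The augmented string then takes the place of the plain system prompt in the messages list for that turn, so the grounding content is rebuilt fresh from retrieval on every user query.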

This will help ground your chatbot's responses in the knowledge that's part of your embeddings-based retrieval system.

Good luck with your chatbot,
Brian :palm_tree: