Goal: I’m trying to build a chatbot that helps small businesses answer clients the way the owners used to.
The basic method of embedding the business info (pricing, about us, hours…), adding a “You are a virtual assistant” system message, and putting “Answer the user message based on the context below…” in a user message **doesn’t do the trick** for me.
Input: I get all their old conversations with clients + some rules (e.g. prices).
Desired output: the bot greets people the same way the business owner does, asks them the same follow-up questions, responds the way the owner did in comparable situations, and sometimes prints “leaving for the owner” in specific scenarios (like a client who wants a phone call).
Are there any best practices or ideas for how to approach this?
Can you share your exact prompt and an example of the output you’re not happy with?
I’m assuming the core issue is that the model doesn’t know anything about the business, so it’s using its world knowledge to answer questions in the context of what it thinks are similar businesses. You have to ground the model with facts to prevent this. That means using semantic search to pull in relevant facts, either scraped from their website or collected via a very detailed questionnaire.
Chunk and embed all your clients’ support data and store the embeddings in a vector database such as Pinecone.
For each message from the client, use similarity search to find relevant data in the vector database, and insert it into the prompt that generates the response.
Include the conversation history and the instructions in each prompt.
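Those three steps can be sketched roughly like this. This is a minimal in-memory stand-in: the bag-of-words “embedding” and cosine ranking take the place of a real embedding model and Pinecone, and all snippets, names, and prompt wording are made up for illustration.

```python
# Minimal sketch of the retrieval flow: embed snippets, retrieve by
# similarity, and build a grounded prompt. The word-count "embedding"
# is a stand-in for a real embedding model; the list below stands in
# for a vector database such as Pinecone.
from collections import Counter
import math
import re

def embed(text: str) -> Counter:
    # Stand-in embedding: lowercase word-count vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Vector database": past support snippets with precomputed vectors.
snippets = [
    "Our haircut price is 30 dollars.",
    "We are open Monday to Friday, 9am to 6pm.",
    "For phone calls, the owner will contact you directly.",
]
index = [(s, embed(s)) for s in snippets]

def retrieve(query: str, k: int = 1) -> list:
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [s for s, _ in ranked[:k]]

def build_prompt(history: list, user_message: str) -> str:
    # Combine instructions, retrieved context, and conversation history.
    context = "\n".join(retrieve(user_message))
    return (
        "Answer the user message based on the context below.\n"
        f"Context:\n{context}\n"
        "Conversation so far:\n" + "\n".join(history) + "\n"
        f"User: {user_message}"
    )

print(build_prompt(["User: hi", "Bot: hello!"], "how much is a haircut?"))
```

The prompt that comes out contains only facts retrieved for this specific message, which is what keeps the model from improvising answers for a business it doesn’t know.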
The second option is fine-tuning. Basically, you upload your documents/history of support requests and responses to OpenAI’s servers and they fine-tune a model on your data. This approach has some limitations as well; for details, see OpenAI’s fine-tuning documentation.
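For fine-tuning, the past conversations need to be converted into OpenAI’s chat-format JSONL, one conversation per line. Here is a small sketch of that preparation step; the file name, salon name, and example exchange are all made up.

```python
# Sketch: turn past owner/client conversations into chat-format JSONL
# for fine-tuning (one {"messages": [...]} object per line).
import json

conversations = [
    [
        {"role": "system", "content": "You are the assistant for Jane's Salon."},
        {"role": "user", "content": "Hi, do you have openings tomorrow?"},
        {"role": "assistant", "content": "Hi there! Yes, we have 10am and 2pm free. Which works for you?"},
    ],
]

with open("training_data.jsonl", "w") as f:
    for messages in conversations:
        f.write(json.dumps({"messages": messages}) + "\n")
```

Each assistant turn should be the owner’s real reply, so the fine-tuned model picks up their greeting style and follow-up questions; scenarios that should escalate can simply have “leaving for the owner” as the assistant message.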