Hello everybody,
What I am working on
I am working on a question-answering system. Our data is in the format of Question-Answer pairs.
I am using a vector database to search for matching questions.
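To illustrate the shape of the data (the field names and values here are just made-up examples, not our real content):

```python
# Illustrative Q-A pairs; the actual questions and answers are domain-specific.
qa_pairs = [
    {
        "question": "How do I reschedule my appointment?",
        "answer": "You can reschedule through the booking page or by calling us.",
    },
    {
        "question": "What do I need to bring to my appointment?",
        "answer": "Please bring a photo ID and your confirmation number.",
    },
]
```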
Initial and current state
Initially, I embedded only the questions, but I soon realized that, depending on how the user phrases their question, the vector similarity search sometimes fails to match the “question” field in the vector database even though the answer the user is looking for is present in the “answer” field. I then combined the Question and Answer when computing the embedding. This improved the results, but I still found a case where it does not work well:
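For context, here is roughly what the combined embedding looks like now. This is only a sketch; the model name and the "Question:/Answer:" formatting are placeholders, not necessarily what you would want to use:

```python
from openai import OpenAI

client = OpenAI()

def embed_qa_pair(question: str, answer: str) -> list[float]:
    # Embed one string per Q-A pair, so the answer's wording can also
    # match how the user phrases their query.
    combined = f"Question: {question}\nAnswer: {answer}"
    response = client.embeddings.create(
        model="text-embedding-3-small",  # placeholder; any embedding model works
        input=combined,
    )
    return response.data[0].embedding
```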
Current problem
I am using only the last user query to search the vector database, but people usually expect the bot to use details they specified earlier in the conversation. Of course, passing multiple “user” / “assistant” messages to ChatGPT is possible, but when searching the vector database I don’t use the whole conversation.
One example: the user asks a question, gets an answer from the bot, but wants a clarification and says: “I actually have X type of appointment”. When querying the database, I would like to query with a phrase that contains both the question the user previously asked and the details they provided in the last reply, to get more relevant results.
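To make that concrete, here is a rough sketch of the querying behaviour I am after. The function and the earlier question are hypothetical; the point is simply that the search text combines the previous question with the latest clarification instead of using the last message alone:

```python
def build_search_query(previous_question: str, latest_message: str) -> str:
    # Combine the earlier user question with the clarification they just gave,
    # so the vector search has the full context.
    return f"{previous_question}\n{latest_message}"

search_text = build_search_query(
    "How do I reschedule my appointment?",    # earlier user question (hypothetical)
    "I actually have X type of appointment",  # latest clarification from the user
)
# search_text would then be embedded and used for the vector-database lookup.
```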
Question
Is there a method of improving the flow of these conversations, such as using ChatGPT functions or a scratchpad? With these types of systems, ordinary users expect the bot to behave like a human, and they ask their questions as if they were talking to one.
Thank you