How can I deal with follow-up questions in a conversation?

Every time a user asks the bot a question, the bot searches its documents (embeddings), computes a similarity score, and answers in the context of the document. If the similarity score is below a certain threshold, the bot responds without the document context. But this breaks down for follow-up questions. For example:
USER: How can I register for starter training?
BOT: If you meet all the conditions, you can register from the link …
USER: What does it cost?
BOT: I don't know.
If the user had phrased the last question as “What does starter training cost?”, there would have been no problem.
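
To make the setup concrete, the retrieval step looks roughly like this (a minimal sketch: the embedding model, threshold value, and chunk list are stand-ins for my real setup):

```python
import numpy as np
from openai import OpenAI  # assumes the v1 OpenAI Python client

client = OpenAI()
SIMILARITY_THRESHOLD = 0.8  # illustrative value, not my real threshold

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)

def retrieve(question: str, chunks: list[str]) -> tuple[str | None, float]:
    """Return the best-matching chunk, or None if it falls below the threshold."""
    q = embed(question)
    vectors = [embed(c) for c in chunks]
    # Cosine similarity between the question and every document chunk.
    scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in vectors]
    i = int(np.argmax(scores))
    # Below the threshold, the bot answers without document context.
    return (chunks[i], scores[i]) if scores[i] >= SIMILARITY_THRESHOLD else (None, scores[i])
```

The trouble is that a bare follow-up like “What does it cost?” embeds to something quite different from the pricing section, so the score falls under the threshold and the bot answers without context.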

I tried LangChain's ConversationalRetrievalChain as a solution, but it didn't work consistently.
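
Roughly what I tried, for reference (a minimal sketch with the legacy LangChain imports; the document list here is just a stand-in for my real index):

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import FAISS

docs = ["...registration conditions...", "...pricing details..."]  # stand-ins
vectorstore = FAISS.from_texts(docs, OpenAIEmbeddings())
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# The chain is supposed to condense the follow-up plus the chat history into a
# standalone question, then retrieve with that rewritten question.
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
    memory=memory,
)

chain({"question": "How can I register for starter training?"})
chain({"question": "What does it cost?"})  # should become "What does starter training cost?"
```

In theory the condense-question step handles the follow-up, but in practice the rewrite was hit or miss for me.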


Great question, @klcogluberk. I'm having the same challenge. It would be great to see a Google Colab example solution.


Thank you, but that approach hasn't worked reliably for me. I'm still looking for a solution that behaves consistently.


It may not be the best solution, but I am using a smaller model to classify product queries, mainly because all I care about is the product name and the specifics of the query.

It creates a filter for me to use with my vector database (SQL would be fine too; I just didn't want to maintain two databases). It also notices when the query is a reference (“how much is it”, “where can I find it”, “what's the size?”), which simply tells my business logic to reuse the last retrieved item, and then the filter is used to extract what they want.

Again, not sure if it's the best way, but it's been working great for me. With documents, why not just keep a “memory” of the last item retrieved? A binary classifier could determine whether it's a new query or a reference to the previous one.
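
Something along these lines (a rough sketch, not my exact code: the prompt, the gpt-3.5-turbo model choice, and the route helper are illustrative, and real code would need error handling around the JSON parsing):

```python
import json
from openai import OpenAI  # assumes the v1 OpenAI Python client

client = OpenAI()

CLASSIFY_PROMPT = """Classify the user's message for a product Q&A bot.
Return JSON with keys:
  "is_reference": true if the message refers to a previously mentioned product
                  ("how much is it", "what's the size?"), otherwise false
  "product": the product name mentioned, or null
  "attribute": what the user wants to know (price, size, availability, ...)"""

last_product = None  # the "memory" of the last retrieved item

def route(user_message: str) -> dict:
    """Return a filter dict for the vector database (or SQL) lookup."""
    global last_product
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the cheaper classification model
        temperature=0,
        messages=[
            {"role": "system", "content": CLASSIFY_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    parsed = json.loads(resp.choices[0].message.content)

    # Reference query ("how much is it") -> fall back to the last item.
    product = parsed["product"] or (last_product if parsed["is_reference"] else None)
    if product:
        last_product = product
    return {"product": product, "attribute": parsed["attribute"]}
```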

This is a great thought, honestly. There aren't many solutions out there that actually tackle this subject. Something else to consider: negations. What if a client says, “Ugh! Stop showing me product x!”?


This is something that has bothered me for a pretty long time. I still can’t find a proper solution.

It seems like ChatGPT can barely read previous messages when a long context is attached to the current prompt.

For example:
User: <LONG_CONTEXT> What is the book about?
Bot: (Answers perfectly…)
User: <LONG_CONTEXT> What did I ask?
Bot: I’m sorry, but there is no question mentioned in your previous message. Could you please provide me with a question to answer?

I am sure a history of messages is passed to the API, but most of the time ChatGPT just says something like “I don't know.” That's weird.

The issue you're encountering may be due to the limit on the number of tokens ChatGPT can process at once. GPT models have a maximum context length (for example, 4,096 tokens for gpt-3.5-turbo). If the conversation context becomes too long, information from earlier messages may be truncated or dropped when a new message is added. This can result in the model losing crucial information and providing inadequate responses.
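
You can check how close a conversation is to that limit with tiktoken (a rough sketch; the exact per-message overhead varies by model, so treat the +4 as an approximation):

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def count_tokens(messages: list[dict]) -> int:
    # Rough count: content tokens plus a few tokens of per-message overhead.
    return sum(len(enc.encode(m["content"])) + 4 for m in messages)
```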

To help mitigate this issue, you can try the following strategies:

  1. Shorten the context: If the conversation has very long messages or unnecessary information, reduce the context to only the most relevant parts. This helps keep the important information within the model’s token limit (see the trimming sketch after this list).
  2. Summarize: If the conversation is long but essential, consider summarizing key points or questions before sending them to the API. This can help preserve the context without exceeding the token limit.
  3. Retain important information: When composing your prompts, include relevant information from previous messages to ensure the model has the necessary context to provide a coherent response.
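
For the first strategy, one simple approach is to keep the system prompt and the most recent turns, dropping the oldest ones until the conversation fits a token budget (a sketch building on the count_tokens helper above; the 3,000-token budget is just an example):

```python
def trim_history(messages: list[dict], budget: int = 3000) -> list[dict]:
    # Keep the system prompt; drop the oldest user/assistant turns until the
    # remaining conversation fits within the token budget.
    system, rest = messages[:1], messages[1:]
    while rest and count_tokens(system + rest) > budget:
        rest = rest[1:]
    return system + rest
```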

Remember, it is crucial to strike a balance between providing enough context for the model to understand the conversation and staying within the token limit to prevent information loss.