Advice needed on a specific use case

Hello, we would like to answer customer questions with a model trained on our past conversations with that customer and their website content, so that the conversation stays in context and is accurate.

Our plan is to train a model for each customer as a batch job once a day and use that model to run the conversation.

I am sure this is not a new use case, but I cannot find a clear answer on how to do this. Could you please provide guidance?

Thank you,

Mehmet Dilek

Welcome to the Forum!

You can have a look at these posts to get some ideas on how to approach this using embeddings (a minimal sketch of the retrieval step follows the list):

  1. Use embeddings to retrieve relevant context for AI assistant: This tutorial focuses on using embeddings to retrieve relevant context for an AI assistant, building upon a simple chat assistant tutorial with the Chat Completions API.

  2. Embedding past conversation data for context memory & retrieval: This post discusses different approaches for embedding past conversation data into a vector database for semantic query purposes, including fine-tuning or training embeddings models.

  3. Specialized Chatbot with GPT-3: This post breaks down the use of system prompts and embeddings-based retrieval to give a chatbot memory and the ability to provide contextually relevant responses.

  4. Infinity Memory implementation: This post discusses the use of embeddings and/or a vector database to retrieve relevant conversations and manage the indices of messages to remove from the message list to conserve tokens.
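To make the retrieval step those posts describe concrete, here is a minimal sketch, assuming the openai Python SDK (v1+), an OPENAI_API_KEY in the environment, and an illustrative embedding model; the tiny in-memory index stands in for a real vector database:

```python
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> list[list[float]]:
    """Embed a batch of texts with the embeddings endpoint."""
    response = client.embeddings.create(
        model="text-embedding-3-small",  # illustrative choice
        input=texts,
    )
    return [item.embedding for item in response.data]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity, dependency-free."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

# Index snippets once (e.g. in a nightly batch), then rank them against each
# new question at request time and feed the top matches into the prompt.
snippets = [
    "Customer asked about late check-out last month...",
    "Website: the pool is open 9:00-21:00...",
]
index = list(zip(snippets, embed(snippets)))

question_vec = embed(["What are the pool hours?"])[0]
top_matches = sorted(index, key=lambda p: cosine(question_vec, p[1]), reverse=True)[:3]
```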

Feel free to follow up here if a question is not addressed by these posts.

3 Likes

Probably what you want to do is generate the context information yourself based on past conversations and other info, and then put that in a System Prompt.

The other option is to simply maintain the most recent 50 or 100 chat messages from their prior conversations and use those as well, but you’ll have to submit that history (and/or the System Prompt) with each request.
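As a rough illustration of both options together (the model name, the summary loader, and the 100-message window are assumptions for the sketch, not a fixed recipe):

```python
from openai import OpenAI

client = OpenAI()
MAX_HISTORY = 100  # roughly the last 50-100 messages, as suggested above

def load_customer_summary(customer_id: str) -> str:
    """Hypothetical stand-in for a nightly-generated summary stored in your DB."""
    return "Customer prefers email. Website FAQ: free cancellation within 24h."

def ask(customer_id: str, history: list[dict], question: str) -> str:
    system_prompt = (
        "You are a support assistant for this customer. "
        "Answer using only this context:\n" + load_customer_summary(customer_id)
    )
    messages = (
        [{"role": "system", "content": system_prompt}]
        + history[-MAX_HISTORY:]  # sliding window over prior conversation
        + [{"role": "user", "content": question}]
    )
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content
```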

1 Like

Thank you for the answer. Hmm, so when the client logs in, I will not be able to have a trained, ready-to-go model with the history; I need to submit it with each request. This will be time- and resource-consuming.

Let’s say I am trying to answer a guest review, and I want to set the context with past reviews for that entity and check whether the negative comments in this new review are in line with the negative comments in past reviews. Can’t I just train the model on past reviews, save it to a DB, and use that model to answer the new review, instead of sending the history with each request?

Thank you.

The key point is that the AI won’t be able to remember things from past conversations at all. That’s why you have to ‘resend’ the entire context (all information required to answer something) in every request.

The replies above describe retrieval-augmented generation (RAG), which is all about narrowing things down to a small enough amount of relevant info to fit into a cheap enough (small enough) context window. So you could use RAG to find existing product reviews that are similar to a customer’s question, and then embed them into the context when generating an answer.
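For the review use case, a sketch of that flow might look like this, assuming the similar past reviews have already been retrieved (e.g. with the embeddings approach sketched earlier); the model name and prompt wording are illustrative:

```python
from openai import OpenAI

client = OpenAI()

def answer_review(new_review: str, similar_past_reviews: list[str]) -> str:
    """Draft a reply, checking the new review against similar past ones."""
    context = "\n".join(f"- {r}" for r in similar_past_reviews)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You reply to guest reviews for this entity. "
                    "Similar past reviews:\n" + context + "\n"
                    "Note whether the new review's complaints match past ones."
                ),
            },
            {"role": "user", "content": new_review},
        ],
    )
    return response.choices[0].message.content
```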

You can of course also just submit a bunch of reviews and ask the AI to summarize them all, then present that “prebuilt” summary back to the user without having to call OpenAI again.
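A sketch of that “prebuilt summary” idea, again with illustrative names: run it once in the nightly batch, persist the result, and serve it from your DB at request time.

```python
from openai import OpenAI

client = OpenAI()

def summarize_reviews(reviews: list[str]) -> str:
    """Summarize recurring positives/negatives across a review corpus."""
    joined = "\n".join(f"- {r}" for r in reviews)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[{
            "role": "user",
            "content": (
                "Summarize the recurring positive and negative points "
                "in these reviews:\n" + joined
            ),
        }],
    )
    return response.choices[0].message.content

# Run once per day in the batch job, store the summary in your DB, and reuse
# it at request time instead of calling the API again.
```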

2 Likes

Actually, that makes sense: get the corpus, narrow it down to negatives and positives, and send that summary with each request. Thanks for the help!

2 Likes