Handling conversations based on dynamic data lookups

I need some guidance on how to approach this problem. I know how to index static documents and generate prompts to drive the conversation, but I want to do something like this:

Customer: I filed a case and have not heard anything, I want to find out what happened with it
AI: Sure, I can help you with that, can I have the case number?
Customer: yes, it’s 78967890

The AI should then call a REST endpoint, for example GET /case/78967890.
Say it returns a JSON response with a status field and an ETA: { "status": "IN_PROGRESS", "eta": "5/5/2023" }

I would like that response converted into a conversational reply, like so:

AI: I checked on the case and it looks like it’s currently in progress and the ETA is May 5th. I suggest you check back after that date if there is no progress.

How can I achieve something like this? Do I need to use agents? I am trying to get some clarity on which direction to take, since the decision to call the REST endpoint should also be driven by the AI.

My recommendation would be to step away from GPT for most of that. You can parse the customer input for the case number, call the REST endpoint, parse the response, and then either respond to the customer directly in a standardized way using an f-string, or prompt GPT to come up with something like what you have there (keeping in mind that the latter will be somewhat difficult to standardize).
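A minimal sketch of that flow, assuming a hypothetical https://api.example.com/case/{id} endpoint and 8-digit case numbers (both placeholders for whatever your system actually uses):

```python
import re
from typing import Optional

import requests

CASE_PATTERN = re.compile(r"\b(\d{8})\b")  # assumption: case numbers are 8 digits


def handle_case_query(customer_message: str) -> Optional[str]:
    """Return a standardized status reply if the message contains a case number."""
    match = CASE_PATTERN.search(customer_message)
    if not match:
        return None  # no case number found; let the normal chat flow continue

    case_number = match.group(1)
    # Hypothetical endpoint; swap in your real case-status API.
    resp = requests.get(f"https://api.example.com/case/{case_number}", timeout=10)
    resp.raise_for_status()
    case = resp.json()  # e.g. {"status": "IN_PROGRESS", "eta": "5/5/2023"}

    # Standardized reply built with an f-string; no second GPT call needed.
    return (
        f"I checked on case {case_number}: it is currently "
        f"{case['status'].replace('_', ' ').lower()} and the ETA is {case['eta']}. "
        f"I suggest you check back after that date if there is no progress."
    )
```

If you want the phrasing to vary, pass the JSON to GPT with a short prompt instead of using the f-string, accepting that the wording will be harder to standardize.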


From your description, it seems that you want a Chat Plugin.

I’ve done exactly this with a simple Google Sheet. If you want to drop me a note (bill.french@gmail.com), I’m happy to create an example that uses your data.

This article shows how I transformed a Google Sheet of FAQs into a chatbot. In your case the data is dynamic, but my approach works for fluid data as well.

Thank you for the responses!!

@shaneayers I like this idea; I tried it out and it works. What I did was add a prompt instructing the model to echo the case number in a particular pattern, like so:

If the customer asks about the status of the case you should always ask the customer to clarify the case number.  After the customer responds with the case number you should always respond with "Got it, let me look up the case number" followed by the case number followed by ".." and then end the response.

Then, when I stream the GPT response back in LangChain, I trigger my handler once the aggregated response matches the pattern and fire my own response based on the lookup.
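A rough sketch of that trigger, assuming a LangChain version that exposes BaseCallbackHandler.on_llm_new_token for streaming; the sentinel wording and lookup_fn are placeholders for your own prompt and REST lookup:

```python
import re

from langchain.callbacks.base import BaseCallbackHandler

# Matches the sentinel the prompt asks GPT to emit, e.g.
# 'Got it, let me look up the case number 78967890..'
LOOKUP_PATTERN = re.compile(r"Got it, let me look up the case number\s*(\d+)\.\.")


class CaseLookupHandler(BaseCallbackHandler):
    """Aggregates streamed tokens and fires a REST lookup when the sentinel appears."""

    def __init__(self, lookup_fn):
        self.buffer = ""
        self.lookup_fn = lookup_fn  # e.g. a function that calls GET /case/{number}

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.buffer += token
        match = LOOKUP_PATTERN.search(self.buffer)
        if match:
            case_number = match.group(1)
            # Fire your own response based on the lookup and add it to the
            # conversation history so it stays in context for later queries.
            reply = self.lookup_fn(case_number)
            print(reply)
            self.buffer = ""  # reset so the same sentinel is not handled twice
```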

I like the solution because it’s simple. My client adds every response to the conversation history, so the lookup result should become part of the context for further queries. I’m still testing this.

But I also want to investigate the use of agents in LangChain and see if this can be triggered automatically without adding all the logic above (i.e. triggering based on patterns).
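For reference, a rough sketch of that agent direction, written against a 2023-era LangChain API (initialize_agent with a Tool); the endpoint URL and tool description are placeholders, and newer LangChain versions have since moved to different agent constructors:

```python
import requests
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI


def lookup_case(case_number: str) -> str:
    """Return the raw JSON for a case so the model can phrase the answer itself."""
    resp = requests.get(f"https://api.example.com/case/{case_number.strip()}", timeout=10)
    resp.raise_for_status()
    return resp.text  # e.g. '{"status": "IN_PROGRESS", "eta": "5/5/2023"}'


tools = [
    Tool(
        name="case_status_lookup",
        func=lookup_case,
        description="Look up the current status and ETA of a support case by case number.",
    )
]

llm = ChatOpenAI(temperature=0)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

agent.run("I filed case 78967890 and have not heard anything. What happened with it?")
```

Here the model decides when to call the tool, so no sentinel pattern is needed, at the cost of extra model calls per turn.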

@bill.french Thank you for that article, I will read it shortly 🙂

@sps I believe the GPT plugin functionality is fairly new and similar to agents in LangChain. I will explore it more; thanks for the suggestion.


(i.e. triggering based on patterns).

I’d be interested to see what you find.

I do this by wrapping my agent’s logic with embeddings designed to isolate the aggregated learner data.

For example, assume you need to aggregate analytics from 20 customer survey fields to answer users’ questions about the data. If the question only concerns one or two survey fields, you don’t want to overload the prompt with unnecessary aggregations.

This is avoided by using embeddings to determine which of the survey fields are actually required, and then using that scope to guide the creation of the learner shot that generates the final data narrative. This way only the most relevant data reaches the learner prompt (more accurate outcomes) without breaking the token ceiling, and it keeps cost down (fewer tokens required).
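To make that concrete, here is a minimal sketch of the field-selection step; embed() stands in for whichever embedding model you call, and the field names and descriptions are made up:

```python
import numpy as np


def embed(text: str) -> np.ndarray:
    """Placeholder: call your embedding model of choice (OpenAI, sentence-transformers, ...)."""
    raise NotImplementedError


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Hypothetical survey fields with short natural-language descriptions.
FIELDS = {
    "nps_score": "Net promoter score given by the customer",
    "support_wait_minutes": "Minutes the customer waited for a support response",
    "checkout_rating": "Rating of the checkout experience",
    # ... the other 17 fields ...
}


def select_fields(question: str, top_k: int = 2) -> list:
    """Pick the survey fields most relevant to the question via embedding similarity."""
    q_vec = embed(question)
    scored = [(name, cosine(q_vec, embed(desc))) for name, desc in FIELDS.items()]
    scored.sort(key=lambda item: item[1], reverse=True)
    return [name for name, _ in scored[:top_k]]


# Example (once embed() is wired up):
#   select_fields("How long are customers waiting for support?")
#   -> e.g. ["support_wait_minutes", "nps_score"]
# Only those fields' aggregations go into the final prompt, keeping it small.
```

In practice you would precompute and cache the field-description embeddings rather than re-embedding them on every question.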