I’m building an AI sales assistant for cold calling prospects about FX solutions. The assistant needs to navigate through discovery questions, understand customer pain points, and offer relevant solutions.
Current implementation uses a detailed system prompt controlling the entire conversation flow. While this provides structure, the AI often gets stuck in loops, repeating similar questions even after gathering significant information.
Looking for suggestions to:
Maintain natural conversation flow
Avoid repetitive questioning patterns
Progress conversations meaningfully based on gathered context
Has anyone tackled similar challenges with conversation flow management in sales-focused LLM applications?
Also, should system prompts try to pre-emptively handle every possible customer objection/pushback, or is there a better approach? Currently, my prompt gets lengthy trying to cover all scenarios, which may contribute to the conversation flow issues.
AI interfaces with humans require great nuance, especially when the AI is the face of your business, and they can feel cold and unwarranted. It is ill-advised to implement something like this at this time. Let AI develop some more, or perhaps develop it with a set of designed biases that allow the AI to anchor its coherence and sense of self to a crossover point between itself and humans, such as biasing toward symbiotic behaviors. This may help the AI feel more nuanced. However, please also ensure that it biases true help over perceived help; this helps the AI understand and share an aligned goal with the customer.
Thank you for sharing these thoughtful concerns about AI in customer interactions. While I agree that AI requires careful implementation, I respectfully disagree that it’s ill-advised at this stage, based on our experience:
We’re not replacing human interactions but augmenting them through a hybrid approach, with AI handling initial discovery and humans managing complex negotiations.
Our implementation is built on extensive industry expertise and real customer interaction data, helping ensure contextually appropriate responses.
We’ve successfully deployed this in controlled environments, focusing on understanding needs rather than hard selling.
If the focus is on understanding, then you are already on this path, and I agree: I was wrong, AI has advanced considerably. Thank you for that reflection.
Hey, @metadots what about trying to add definition of done for each stage of the conversation?
The AI might need criteria for what counts as enough information, for example: “Move forward to the next step as soon as you have enough information to make a personalized offer.” My sample may be lame, as I’m not in sales, but I hope you see the idea. You could experiment with different wording to identify what works best in your context.
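One way to make that “definition of done” concrete is to enforce it outside the model: encode the required fields per stage and only advance when they are all known. A minimal sketch of the idea; the stage names and fields below are hypothetical, not from any real system:

```python
# Hypothetical per-stage "definition of done" for a sales conversation.
# Stage names and required fields are illustrative examples only.
STAGE_CRITERIA = {
    "discovery": {"company_size", "current_fx_volume", "pain_point"},
    "qualification": {"budget_range", "decision_maker"},
    "offer": {"preferred_solution"},
}
STAGE_ORDER = ["discovery", "qualification", "offer", "close"]

def next_stage(current: str, gathered: dict) -> str:
    """Advance only when every required field for the current stage is known."""
    required = STAGE_CRITERIA.get(current, set())
    if required.issubset(gathered):
        i = STAGE_ORDER.index(current)
        return STAGE_ORDER[min(i + 1, len(STAGE_ORDER) - 1)]
    return current
```

The same criteria can also be stated in the system prompt (“you have enough information once X, Y, and Z are known”), but a hard gate like this prevents the model from looping back into questions it has already answered.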
I’ve been working on something similar.
You couldn’t possibly fit every possible scenario into the system prompt without making it so confused that it’s useless.
I had 4 assistants. Each set up to deal with a specific phase in the buyers journey. I was using structured outputs to evaluate which phase the user is in based on the conversation and then routing the conversation to that assistant.
The assistants then have access to all documents about my services. So they are attuned to the correct phase and have the correct information for it.
At first I was using assistants to ask questions of the prospect as well but they seem to just take the conversation down the wrong path no matter how I set the system prompts. So I set up structured outputs to determine which question to ask the prospect next. There’s a different structured output based on each phase of the buyer’s journey and they choose from a preset list of questions for that phase and based on the conversation history.
Unfortunately, the assistants API has become slow and unstable. So I’m working on replacing it with the Chat Completions API.
I think it should be doable, but you need to start by determining how you identify and deal with good prospects, because you don’t want to mishandle those. Later you can deal with questionable prospects and edge cases.
I use consumer chatgpt to develop a persona of a good prospect and have it interact with my chatbot based on that persona. Once it’s streamlined for dealing with a quality prospect, that’s all you need to get started.
Have you tried completely dropping the attempted flow control and instead focusing your prompt purely on the end result you need? I notice a lot of chat-like implementations are still rooted in “automating a form fill.” In many applications you will have very different amounts of data about a person or company, but ultimately you want the AI to respond with, say, a “yes-able proposal.” If you focus on that end product and always provide the information you already have, you might be able to rethink the whole process.
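That outcome-first framing can be sketched as a prompt builder: state the goal, list every fact already known, and let the model decide whether to produce the proposal or ask for the single most important missing fact. A hypothetical sketch (the wording and field names are illustrative, not a tested prompt):

```python
# Hypothetical outcome-first prompt builder: no scripted flow, just the goal
# plus everything already known. Wording here is an illustrative assumption.
def build_prompt(known_facts: dict, goal: str) -> str:
    facts = "\n".join(f"- {k}: {v}" for k, v in known_facts.items()) or "- none yet"
    return (
        f"Your goal: {goal}\n"
        f"What we already know about the prospect:\n{facts}\n"
        "If the known facts are enough, produce the proposal now. "
        "Otherwise ask only for the single most important missing fact."
    )
```

Because the model always sees the full fact list, it has no reason to re-ask for information it already has, which directly targets the looping problem.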