Taking users through a customer journey (dynamic prompting, different steps)

Hello,

I’m building sales chatbots for companies. To simplify things, imagine a user going to an ice cream shop:

  1. Initially, the user has questions:

-Different sizes available
-Prices of each size
-Flavors available

  2. The user can then reply with different things:

-“Awesome, I want one scoop of chocolate please”
-“That’s too expensive, I’ll pass”
-“Great to know, I’ll visit you tomorrow”

This is an oversimplification, but depending on the user's replies, the conversation can branch down different paths.

Previously, we were dynamically changing the prompts to adjust to these different scenarios. Since the Chat API works by passing the conversation so far plus a system prompt, we'd pass conversations that looked increasingly like this:


First Call:
User: Hello
Assistant: Greet the user hello.

And the API call would then return “Hello and welcome!”

Second call:
User: Hello
Assistant: Hello and welcome!
User: Are you open today?
Assistant: Ask the customer what store he’s inquiring about.

And the API call would return “Which store are you asking about?”

Third call:
User: Hello
Assistant: Hello and welcome!
User: Are you open today?
Assistant: Which store are you asking about?
User: The one on 5th Avenue
Assistant: Answer the user. The store on 5th Avenue is open from 10am to 10pm Monday to Saturday. The store on 10th Avenue is temporarily closed. Right now it's 3pm on a Monday.

And the API call would return “The store is open today till 10pm”.
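Roughly, the pattern looks like this in code. This is only a sketch, assuming the official `openai` Python package (v1.x); the helper name and step text are illustrative, not our actual code, and the network call is commented out:

```python
# Sketch of the dynamic-prompting pattern: the current step's instruction
# is appended as the final assistant message, and the model continues from it.
# The helper name and step text are illustrative.

def build_messages(history, step_instruction):
    """Return the conversation with the step instruction appended
    as the last assistant message."""
    return history + [{"role": "assistant", "content": step_instruction}]

history = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hello and welcome!"},
    {"role": "user", "content": "Are you open today?"},
]
messages = build_messages(
    history, "Ask the customer what store he's inquiring about."
)

# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(model="gpt-4", messages=messages)
# print(resp.choices[0].message.content)
```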


Why this worked for us:
This method of dynamically changing the last assistant message let us control the flow of conversations smoothly and keep token billing in check. I'm not sure if there was an easier way to do this.
We didn't change the final assistant message for EVERY API call, but we essentially had "steps", and we could steer the conversation by guiding the AI with better instructions as it touched on different topics or went in different directions.

Our big challenge:
Because we controlled the prompting, we were never giving ChatGPT all the information it needed to answer questions. In the example above, we first had to run checks like "the user is asking something", then "the user is asking about a store", and only then pass all the store-schedule information for the AI to use in its reply.
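Those preliminary checks amount to a small router: classify the user's message first, then decide which instruction and context to inject. A minimal sketch (the intents, keyword matching, and store info here are purely illustrative; a real classifier might itself be an LLM call):

```python
# Illustrative intent router: decide which instruction + context to inject
# before calling the model. Keyword matching is just for the sketch.

STORE_INFO = (
    "The store on 5th Avenue is open 10am-10pm Monday to Saturday. "
    "The store on 10th Avenue is temporarily closed."
)

def route(user_message):
    """Pick the step instruction to inject based on the user's message."""
    text = user_message.lower()
    if "open" in text or "hours" in text:
        return "Answer using this info: " + STORE_INFO
    if "price" in text or "cost" in text:
        return "Quote prices from the menu only; do not invent numbers."
    return "Greet the user and ask how you can help."

print(route("Are you open today?"))
```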

Hence, our biggest problem was preventing hallucinations. We never got around to implementing RAG or vector search on Pinecone or anything like that.

The new Assistants:
We just tried the Assistants API and GPTs, and they work amazingly well. The problem is that Assistants work with Threads, and we're not sure where dynamic prompting would fit. We've considered a few solutions:

  1. Having a thread look like this:
    User: Hello
    Assistant: Hello and welcome!
    User: Are you open today?
    USER: Ask the customer what store he’s inquiring about.

Results so far are mixed.
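For reference, option 1 looks roughly like this in code. A sketch only, assuming the `openai` Python package's beta Assistants endpoints; the actual API calls are commented out since they hit the network:

```python
# Sketch of option 1: injecting the step instruction into a Thread as a
# user-role message before creating the Run. Names are illustrative.

def step_message(instruction):
    """Wrap a step instruction as a user-role thread message."""
    return {"role": "user", "content": instruction}

injected = step_message("Ask the customer what store he's inquiring about.")

# from openai import OpenAI
# client = OpenAI()
# thread = client.beta.threads.create()
# client.beta.threads.messages.create(
#     thread_id=thread.id, role="user", content="Are you open today?")
# client.beta.threads.messages.create(
#     thread_id=thread.id, role=injected["role"], content=injected["content"])
# run = client.beta.threads.runs.create(
#     thread_id=thread.id, assistant_id=ASSISTANT_ID)
```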

  2. Changing the instructions of an assistant dynamically

I believe that instructions on the Assistants API = system on the Chat API. System-level guidelines never worked as well for us as prompt-level guidelines. We're also unsure whether changing an assistant's instructions affects a single thread or every thread that the assistant is powering.
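One thing worth noting on the scoping question: to my understanding, Run creation accepts its own `instructions` parameter that overrides the assistant-level instructions for that run only, so changes stay scoped to one run on one thread instead of touching every thread. A sketch (API calls commented out; the step texts are illustrative):

```python
# Sketch: per-run instruction override (scoped to one run on one thread),
# instead of updating the assistant itself, which would affect every
# thread it powers. Step names and texts are illustrative.

STEP_INSTRUCTIONS = {
    "greet": "Greet the user warmly.",
    "store_lookup": "Ask the customer what store he's inquiring about.",
    "answer_hours": "Answer using the store schedule provided.",
}

def instructions_for(step):
    """Look up the instruction for the current conversation step."""
    return STEP_INSTRUCTIONS.get(step, STEP_INSTRUCTIONS["greet"])

# from openai import OpenAI
# client = OpenAI()
# run = client.beta.threads.runs.create(
#     thread_id=thread_id,
#     assistant_id=assistant_id,
#     instructions=instructions_for("store_lookup"),  # this run only
# )
```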

  3. Making every “step” of a conversation a different assistant with different instructions and building our own system of switching assistants on the back-end.

  4. Implementing our own RAG and ignoring the Assistants API for now.
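For what it's worth, the last option doesn't need heavy infrastructure to prototype: a bare-bones retrieval step is just embeddings plus cosine similarity. In the sketch below the toy vectors stand in for a real embeddings endpoint, and the documents are illustrative:

```python
# Minimal retrieval sketch: rank documents by cosine similarity to the
# query embedding, then inject the top hits into the prompt. The toy
# 2-dimensional vectors stand in for real embedding-model output.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

docs = {
    "hours_5th": "The 5th Avenue store is open 10am-10pm Monday to Saturday.",
    "hours_10th": "The 10th Avenue store is temporarily closed.",
}
# Toy embeddings; in practice these come from an embeddings endpoint.
doc_vecs = {"hours_5th": [1.0, 0.1], "hours_10th": [0.1, 1.0]}

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query vector."""
    ranked = sorted(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]),
                    reverse=True)
    return [docs[d] for d in ranked[:k]]

print(retrieve([0.9, 0.2]))  # top hit: the 5th Avenue hours
```

The retrieved text then gets injected as context, much like the dynamic prompts above, but sourced from data instead of hand-written steps.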

Thanks for reading. Hopefully that was clear. Interested to know what others are doing. Happy to take it private if needed.
