Switching from Assistants API to Human Agents

Hi, as the subject states: any suggestions or workflows for switching a bot session from the bot (Assistants API) to human agents? Has anyone here implemented such a solution?

This sounds like you may be seeking a human-in-the-loop setup.

I can’t provide more information because I don’t use it currently, but at least you now have a keyword to assist you in future searches.

Please also note that RLHF may be considered related.

2 Likes

Thank you for the input. I will look into this, and the keywords help me research further…

I actually just implemented this (if I understand correctly): you have an Assistant that acts as a public-facing chatbot, with the option for staff to “take over”.

Yeah, it doesn’t play well.

I basically had to make a shadow copy of the conversation, completely defeating the purpose of having stateful threads.

I also can’t return the user to an Assistant (in my case the Assistant can interact with the page, but the user may want to contact us for human support), so I treat it as two separate conversations, while still keeping the Assistant conversation around for context.
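
For anyone trying the same thing, here is a minimal sketch of the “shadow copy” idea, assuming the openai Python SDK’s beta Assistants endpoints; the storage step is a placeholder, not part of any real deployment:

```python
# Minimal sketch of the "shadow copy": mirror an Assistants thread into plain
# data so a human agent can read the full context at hand-off time.
# Assumes the openai Python SDK's beta Assistants endpoints.
from openai import OpenAI

client = OpenAI()

def shadow_copy_thread(thread_id: str) -> list[dict]:
    """Pull every message from a thread and flatten it into role/text pairs."""
    transcript = []
    messages = client.beta.threads.messages.list(thread_id=thread_id, order="asc")
    for msg in messages:
        # A message can contain several content parts; keep only the text ones.
        text_parts = [p.text.value for p in msg.content if p.type == "text"]
        transcript.append({"role": msg.role, "text": "\n".join(text_parts)})
    return transcript

# Usage at hand-off time (save_transcript is a placeholder for your own storage):
# save_transcript(thread_id, shadow_copy_thread(thread_id))
```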

TL;DR: it sucks, it’s messy, and the Assistants framework was clearly not built with this in mind.

2 Likes

We are planning to do it the same way: store a copy locally and pass it to human agents for context. As of now, the only option is to let the end user choose to switch to human agents if they are available. But my thought was whether there is a way to switch to a human, without end-user input, when the assistant cannot answer from the knowledge base.

That’s a tough one that really depends on your domain. Can the information be validated by GPT? If a user asks “How can I cancel my flight?” it’s pretty easy for GPT to look at the returned content from your Knowledge Graph and then trigger a “ContactSupport(message)”-like function if it determines that the returned information isn’t sufficient.
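
As an illustration of that idea (not code from any real deployment), a “ContactSupport(message)”-style escalation could be declared as an Assistants function tool roughly like this; the tool name, model, and instructions are assumptions for the sketch:

```python
# Illustrative escalation tool in the Assistants function-calling format.
# The tool name, model, and instructions are assumptions for the sketch.
from openai import OpenAI

client = OpenAI()

contact_support_tool = {
    "type": "function",
    "function": {
        "name": "contact_support",
        "description": (
            "Escalate to a human agent when the retrieved knowledge-base "
            "content does not answer the user's question."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "message": {
                    "type": "string",
                    "description": "Short summary of the user's unresolved question.",
                }
            },
            "required": ["message"],
        },
    },
}

# The instructions nudge the model to call the tool instead of guessing.
assistant = client.beta.assistants.create(
    model="gpt-4o",
    instructions=(
        "Answer only from the retrieved knowledge-base content. If the content "
        "does not cover the question, call contact_support instead of answering."
    ),
    tools=[{"type": "file_search"}, contact_support_tool],
)
```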

But if it’s something domain-specific like “How much does a skid of product X weigh?” or, even better, “How much do 12 layers of product Y weigh?”, well, it’s tough.

I think this is a tough spot in building RAG: how can GPT know if GPT doesn’t know? Personally, I think it makes sense to have GPT answer, and then run the answer against the Knowledge Graph again as some sort of feedback loop… maybe. Maybe some caching for future similar questions?
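
One rough way to sketch that feedback loop is an embedding-similarity check between the draft answer and the retrieved chunks; the 0.75 threshold and the retrieval helper below are assumptions for illustration, not a recommendation:

```python
# Rough sketch of the feedback loop: embed the draft answer and the retrieved
# chunks, and escalate when the best similarity is low. The 0.75 threshold and
# the retrieval step are assumptions for illustration only.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def answer_is_grounded(draft_answer: str, chunks: list[str], threshold: float = 0.75) -> bool:
    """True if the draft answer is close to at least one retrieved chunk."""
    vectors = embed([draft_answer] + chunks)
    answer_vec, chunk_vecs = vectors[0], vectors[1:]
    sims = chunk_vecs @ answer_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(answer_vec)
    )
    return float(sims.max()) >= threshold

# Usage: if the check fails, hand off instead of sending the draft answer.
# chunks = retrieve_chunks(question)          # placeholder retrieval step
# if not answer_is_grounded(draft, chunks):
#     escalate_to_human(thread_id)            # placeholder hand-off hook
```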

I actually had a really nice conversation about this. So far, I’d say that RAG and LLMs are just not there yet to be sufficient as public-facing chatbots… Yet…

4 Likes

How can GPT know if GPT doesn’t know?

You can prompt it not to lie or make things up.

But I guess I can use function calling and write logic for ending the bot session and handing over to a human; we are working on this approach.
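
For reference, the runtime side of that hand-off might look roughly like the sketch below, assuming the escalation tool from the earlier example; notify_agents() stands in for whatever queue or PM system you use:

```python
# Sketch of the runtime side of the hand-off: run the thread and, when the
# model calls the escalation tool, flag the session as taken over by a human.
# Assumes the contact_support tool from the earlier sketch.
import json
from openai import OpenAI

client = OpenAI()

def notify_agents(thread_id: str, summary: str) -> None:
    """Placeholder: push the escalation into your own ticketing / PM system."""
    print(f"Escalating thread {thread_id}: {summary}")

def run_turn(thread_id: str, assistant_id: str) -> bool:
    """Returns True if the bot handed the conversation to a human this turn."""
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread_id, assistant_id=assistant_id
    )
    if run.status != "requires_action":
        return False  # normal assistant reply, no hand-off

    outputs, handed_off = [], False
    for call in run.required_action.submit_tool_outputs.tool_calls:
        result = "ok"
        if call.function.name == "contact_support":
            args = json.loads(call.function.arguments)
            notify_agents(thread_id, args["message"])
            handed_off = True
            result = "A human agent has been notified."
        outputs.append({"tool_call_id": call.id, "output": result})

    client.beta.threads.runs.submit_tool_outputs_and_poll(
        thread_id=thread_id, run_id=run.id, tool_outputs=outputs
    )
    return handed_off
```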

I added this to my Chatbot a while back.

The system automatically responds to a request to escalate and sets up a PM with Customer Services.

It will also do this if the User appears to be getting angry.

(It’s been refined since that PR)
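
A very simplified sketch of that behaviour (not the poster’s actual implementation) could be a small classification call that flags messages asking for a human or sounding angry:

```python
# Simplified sketch (not the actual implementation): a small classification
# call that flags messages asking for a human or sounding angry, so the bot
# can open a PM with Customer Services.
from openai import OpenAI

client = OpenAI()

def should_escalate(user_message: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "Reply with exactly 'yes' if the message asks for a human "
                    "agent or sounds angry or frustrated; otherwise reply 'no'."
                ),
            },
            {"role": "user", "content": user_message},
        ],
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")

# if should_escalate(latest_message):
#     open_pm_with_customer_services(user_id)   # placeholder hand-off hook
```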

1 Like