How to determine when a chat with my customer is finished?

Hey there,
I’m wondering how I could determine when a chat with my customer is finished, assuming there are countless ways a chat can end.

While I’m thinking about a few ways to tackle it, I’m wondering if someone has tackled this issue already :slight_smile:

This is something I built a classifier to detect, using chats that were manually labelled as completed.

@wfhbrian Thanks for your fast response. So that means you would run a prediction with the classifier after every message in the chat to check whether the chat has finished?

If you are providing a long sample of a chat, you could use a specific identifier or phrase, written from the LLM’s perspective, that functions as a stop sequence so the model understands that the chat has finished.
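As a sketch of that idea: instruct the model in the system prompt to append a sentinel phrase when the conversation is over, then either pass it as a stop sequence or scan the reply for it. The sentinel string here is an assumption, not anything special to the API:

```python
# Hypothetical sentinel the system prompt asks the model to emit when done.
SENTINEL = "<|chat_done|>"

def parse_reply(reply: str) -> tuple[str, bool]:
    """Strip the sentinel from the model's reply and report whether the
    model considers the chat finished."""
    finished = SENTINEL in reply
    clean = reply.replace(SENTINEL, "").strip()
    return clean, finished
```

If you pass the sentinel as an API stop sequence instead, it never appears in the text, so you would check the response's finish reason rather than scanning the reply.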

Otherwise, @wfhbrian’s advice is the best possible one.


Here’s an idea one might try, to see whether your chat calls this function naturally or needs some prompting:

    functions=[
        {
            "name": "disconnection",
            "description": "When the conversation is finished or the AI wants to hang up",
            "parameters": {
                "type": "object",
                "properties": {
                    "reason": {
                        "type": "string",
                        "description": "disengagement reason"
                    },
                },
                "required": ["auto"],
            },
        }
    ],

In my customer service app, I added an instruction in the system prompt to append a marker to the response when the model thinks the conversation has ended (e.g. CHAT-ENDED). This is triggered automatically, either by the user saying bye or by the AI itself judging that the user’s aim has been achieved.

When this is received in the front-end, I do not close the conversation yet, but show an END button inside the chat UI itself to signal that the user can end it at any time. This is separate from the CLOSE button. The user can still continue the conversation, but the END button persists.
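That front-end logic can be sketched like this (Python standing in for the actual front-end code; `CHAT-ENDED` is the marker from the system prompt):

```python
MARKER = "CHAT-ENDED"

class ChatState:
    """Tracks whether the END button should be shown. The chat is never
    closed automatically; the marker only reveals the button, which then
    persists even if the user keeps talking."""

    def __init__(self) -> None:
        self.show_end_button = False

    def on_assistant_reply(self, reply: str) -> str:
        if MARKER in reply:
            self.show_end_button = True  # reveal the button, don't close
        # Strip the marker before displaying the reply to the user.
        return reply.replace(MARKER, "").strip()
```

Keeping "may end" (END button shown) separate from "is closed" (CLOSE pressed) is what lets the user keep talking after the model thinks it is done.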

So far, it works as expected.