Assistants API - Continuous Training and Feedback

I am starting to use the Assistants API as the backend for my customers' chatbot.
I would like to know if there’s any structured method to:

  1. Continuously train the assistant based on previous successful conversations/threads
  2. Automatically generate a list of items/topics raised in a thread that the assistant could not answer well from its reference files/prompts, even though they were specific to my business

One more suggestion - it would be great if I could define “flags” on a thread that are automatically populated based on the conversation.
E.g. -

  • Customer happiness/frustration level
  • Need to pass to a human agent

The only workaround I can think of right now is to use function calling, but it is really ugly.
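
Roughly what I mean, as a minimal sketch with the Python SDK (`set_thread_flags` and the flag names are placeholders I made up, and it assumes that is the only tool the run calls):

```python
import json

from openai import OpenAI

client = OpenAI()

# Hypothetical tool definition -- pass it in `tools` when creating/updating the assistant.
SET_THREAD_FLAGS_TOOL = {
    "type": "function",
    "function": {
        "name": "set_thread_flags",
        "description": "Record conversation-level flags such as customer frustration "
                       "or the need to hand the conversation to a human agent.",
        "parameters": {
            "type": "object",
            "properties": {
                "frustration_level": {"type": "integer", "minimum": 1, "maximum": 5},
                "needs_human_agent": {"type": "boolean"},
            },
            "required": ["frustration_level", "needs_human_agent"],
        },
    },
}

def handle_flag_calls(thread_id: str, run):
    """If a run stopped because the assistant called set_thread_flags,
    persist the flags as thread metadata and resume the run."""
    if run.status != "requires_action":
        return run
    outputs = []
    for call in run.required_action.submit_tool_outputs.tool_calls:
        if call.function.name == "set_thread_flags":
            flags = json.loads(call.function.arguments)
            # Thread metadata values must be strings.
            client.beta.threads.update(
                thread_id,
                metadata={k: str(v) for k, v in flags.items()},
            )
            outputs.append({"tool_call_id": call.id, "output": "ok"})
    return client.beta.threads.runs.submit_tool_outputs(
        run_id=run.id, thread_id=thread_id, tool_outputs=outputs
    )
```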

Any ideas?

Regarding #2: perhaps add thumbs-up and thumbs-down icons (like every chat LLM has) and, when one is clicked, save that chat. When you have time, review the user feedback and figure out how to improve future responses.
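
A rough sketch of the plumbing for that, assuming the Python SDK; the JSONL file is just a stand-in for whatever store you use:

```python
import json
import time

from openai import OpenAI

client = OpenAI()
FEEDBACK_LOG = "feedback.jsonl"  # placeholder local store; swap for a real database

def record_feedback(thread_id: str, message_id: str, thumbs_up: bool) -> None:
    """Append a thumbs-up/down rating for one assistant message."""
    with open(FEEDBACK_LOG, "a") as f:
        f.write(json.dumps({
            "ts": time.time(),
            "thread_id": thread_id,
            "message_id": message_id,
            "thumbs_up": thumbs_up,
        }) + "\n")

def review_downvoted_threads() -> None:
    """Later, pull the full transcript of every thread that got a thumbs-down."""
    bad_threads = set()
    with open(FEEDBACK_LOG) as f:
        for line in f:
            entry = json.loads(line)
            if not entry["thumbs_up"]:
                bad_threads.add(entry["thread_id"])
    for thread_id in bad_threads:
        messages = client.beta.threads.messages.list(thread_id=thread_id, order="asc")
        print(f"--- {thread_id} ---")
        for m in messages.data:
            text = "".join(c.text.value for c in m.content if c.type == "text")
            print(f"{m.role}: {text}")
```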

I would actually expect the LLM to summarize the topics it couldn't answer.
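
You could get that today with a post-processing pass over each finished thread; a sketch, assuming the Python SDK (the prompt wording and the gpt-4o-mini choice are arbitrary):

```python
from openai import OpenAI

client = OpenAI()

def unanswered_topics(thread_id: str) -> str:
    """Ask a model to list the topics in a finished thread that the assistant
    could not answer well, so they can be added to the reference files."""
    messages = client.beta.threads.messages.list(thread_id=thread_id, order="asc")
    transcript = "\n".join(
        f"{m.role}: " + "".join(c.text.value for c in m.content if c.type == "text")
        for m in messages.data
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable model works for this review pass
        messages=[
            {
                "role": "system",
                "content": (
                    "You review customer-support conversations. List, as bullet points, "
                    "every question the assistant failed to answer or answered vaguely. "
                    "Return 'none' if there are none."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content
```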

Even better: collect responses as Stored Completions, run weekly Evals, and have the model identify ways to improve. 🚀
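
Concretely, the Stored Completions half can be as simple as passing `store=True` plus some metadata on each call (a sketch; the metadata keys are my own examples, and as far as I know only Chat Completions calls land in Stored Completions, not Assistants runs). The weekly Evals can then be built over those stored logs in the dashboard:

```python
from openai import OpenAI

client = OpenAI()

def answer(question: str, conversation_id: str) -> str:
    """Answer with Chat Completions and keep the exchange as a Stored Completion,
    tagged so it can be filtered when building weekly Evals."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are our customer-support assistant."},
            {"role": "user", "content": question},
        ],
        store=True,                 # keeps the request/response as a Stored Completion
        metadata={                  # example tags for filtering stored logs
            "app": "support-bot",
            "conversation_id": conversation_id,
        },
    )
    return resp.choices[0].message.content
```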

Judging by the silence here, I assume no solutions are available?