Mavenoid blog: Reliable generation of troubleshooting bots with GPT-3

Sharing a blog post about our work at Mavenoid applying GPT-3 to product support: how GPT-3 is changing product support and, in particular, how we use it to speed up the creation of bots that are also good at troubleshooting.

Most interesting for developers: through various tricks, notably Enhanced Search, we managed to reach 97% accuracy on a key dataset.

Blog: Why GPT-3 is a big deal for customer support

All questions welcome!

p.s. We’re hiring data scientists and others in sales and engineering.


Great post, @cwenner! Are you using the Answers endpoint at all for this?

@cwenner This is awesome, thanks for sharing! I just signed up for the free trial :)


@stevet - Glad you liked it!

At one point we used the Answers endpoint, but now essentially all OpenAI queries are reduced to enhanced search + completions. I think /answers is good for initial prototyping, but to get high accuracy you want to customize the prompt.

Ofc that’s mostly for generating proposed answers to customer questions - for troubleshooting and instructions, completions seem more suitable. The whole system consists of around 20 subtasks now.
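
To make that pipeline concrete, here is a minimal sketch of search-then-complete with a custom prompt, using the current openai Python client. The documents, prompt wording, and engine choices are made-up placeholders for illustration, not our production setup:

```python
import openai

openai.api_key = "sk-..."  # your API key

# Made-up product-support snippets standing in for real documentation.
documents = [
    "To reset the router, hold the power button for 10 seconds.",
    "A blinking red light means the firmware update failed.",
    "The warranty covers manufacturing defects for two years.",
]
question = "What does a blinking red light mean?"

# 1. Semantic search picks the most relevant context document.
search = openai.Search.create(engine="davinci", documents=documents, query=question)
best = max(search["data"], key=lambda d: d["score"])
context = documents[best["document"]]

# 2. A fully custom prompt (instead of the fixed /answers template) goes to /completions.
prompt = (
    "Answer the customer's question using only the context below.\n\n"
    f"Context: {context}\n"
    f"Question: {question}\n"
    "Answer:"
)
completion = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=64,
    temperature=0,
    stop=["\n"],
)
print(completion["choices"][0]["text"].strip())
```

Owning the whole prompt like this is what lets you tune the wording per subtask instead of relying on the fixed /answers template.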


@raf - Neat. I’m rather curious to see how that turns out as well. We’ve been most focused on physical products.

Thanks for sharing, @cwenner!


A prompt parameter for the Answers endpoint would be a nice addition to the API.


Is this the “prompt” parameter I was asking for: examples_context?

Could you please expand on this and why examples_context did not fit your application?

You can get explanations in the same way that you could from a human :). Ask it to motivate an answer.

Question: Does it make sense to call birds reptiles?
Answer: No, because birds are not reptiles.
Question: Why?
Answer: Because birds are warm-blooded, lay eggs, and have feathers.

In practice, you can chain the prompts - asking for an answer, then asking for a justification of the answer.
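
For example, a rough sketch of that chaining with the openai Python client; the engine name and prompt wording are placeholders:

```python
import openai

openai.api_key = "sk-..."  # your API key

def complete(prompt):
    # Temperature 0 keeps the output deterministic, as recommended below.
    resp = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=64,
        temperature=0,
        stop=["\n"],
    )
    return resp["choices"][0]["text"].strip()

# First call: ask the question.
prompt = "Question: Does it make sense to call birds reptiles?\nAnswer:"
answer = complete(prompt)

# Second call: feed the generated answer back and ask for a justification.
follow_up = prompt + " " + answer + "\nQuestion: Why?\nAnswer:"
justification = complete(follow_up)

print(answer)
print(justification)
```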

Temperature 0 is recommended, and of course the formulations of the questions influence the answer. Digging into why you are not getting the right answers involves some detective work. It is a bit debatable whether this approach captures “the reason for the generation”, but I can see there are valid concerns in either direction on that.

(Sorry about not responding sooner)

A prompt parameter for the Answers endpoint would be a nice addition to the API.

I would agree, and I have seen another user request the same :). Customizing the prompt via Completions mostly works, though I believe the Answers endpoint has the advantage of circumventing the token limit.

Could you please expand on this and why examples_context did not fit your application?

As far as I can tell, /answers works like this (minus details; see the sketch after the list):

  1. Run /search against the provided documents/file, looking for the question. This finds a number of context documents.
  2. Fill in a pre-defined prompt template with the found context documents, the question, and also the examples + examples_context.
  3. Run /completions on the filled prompt to generate answers.
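
A simplified, unofficial reconstruction of those steps might look like the sketch below. The documents and the template wording are invented for illustration; the real internal template is not public:

```python
import openai

openai.api_key = "sk-..."  # your API key

# Placeholders standing in for what you would pass to /answers.
documents = [
    "Hold the reset pin for 5 seconds to factory-reset the device.",
    "The battery lasts roughly 8 hours on a full charge.",
]
question = "How do I reset the device?"
examples_context = "The device has a reset pin under the battery cover."
examples = [["How do I factory-reset it?", "Hold the reset pin for 5 seconds."]]

# 1. /search ranks the documents against the question.
search = openai.Search.create(engine="ada", documents=documents, query=question)
best = max(search["data"], key=lambda d: d["score"])
context = documents[best["document"]]

# 2. A pre-defined template is filled with examples_context, the examples, the found
#    context, and the question. (This wording is made up for illustration.)
prompt = f"Context: {examples_context}\n"
for q, a in examples:
    prompt += f"Q: {q}\nA: {a}\n"
prompt += f"Context: {context}\nQ: {question}\nA:"

# 3. /completions generates the final answer from the filled prompt.
resp = openai.Completion.create(
    engine="davinci", prompt=prompt, max_tokens=64, temperature=0, stop=["\n"]
)
print(resp["choices"][0]["text"].strip())
```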

So there are three reasons:

  1. examples_context is just one of the parts that gets filled into the prompt template. If you want high accuracy, you may want to replace the prompt as a whole, and changing just one part of it doesn’t enable that.
  2. It is not clear that providing examples, or examples that require their own context, is advantageous for the question-answering task. Quite often you may want a common context for both the examples and the question. Unnecessary text can also cause the completion to build on the unrelated text rather than the intended task, so the less text needed to get the right style of answer, the better.
  3. The examples_context also takes up tokens that you may prefer to use for the context relevant to the question that you want answered.

Hope that makes sense!