Is it possible to control the flow of the conversation?

I want to build a bot that helps users find jobs, the first step is profile building.

I’d like the interaction to have a “conversational” feel, like you’re talking to a friend. Based on the docs, I got an MVP up and deployed, and I can see ChatGPT ask profile questions based on the query params of a single endpoint.

This means if there are 10 params supported by the endpoint, ChatGPT will ask for all 10 in a single response. I’d like for ChatGPT to ask one at a time, with each back-and-forth returning more accurate data, instead of bombarding the user with 10 things to answer.

The only thing I can think of is to have a single endpoint for each param, which would not be ideal.

Has anyone gotten a conversational flow in their plugins?

I am also curious about this! Ideally we could start showing some results once the required information is collected, but still ask for optional information and continue to show refined results as new information comes in.

We have the same issue.
Is there any way to control the conversation flow?

Hey guys!

We have a customer at PluginLab who had a similar request.

We helped him control the flow of his plugin.

Even though it didn’t sound as complex as the use case you described, the game changer was:

  1. Adding a very sharp and exhaustive description_for_model.
  2. Adding a description field for every endpoint in your OpenAPI spec.

Regarding the description_for_model, keep in mind you can use up to 8,000 characters, which means you can say a lot. Add anything here that would help the plugin understand the expected flow. :slight_smile:
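For illustration, something along these lines in the manifest can nudge the model toward a one-question-at-a-time flow (the field names come from the plugin manifest format; the wording and the `/profile` endpoint are just a sketch for the job-bot use case above):

```json
{
  "name_for_model": "job_finder",
  "description_for_model": "Helps users find jobs by first building a profile. IMPORTANT: ask for exactly ONE profile field per message (e.g. desired role, then location, then salary expectations). Wait for the user's answer before asking the next question. Never ask for multiple fields in a single response. After each answer, call the /profile endpoint with the fields collected so far."
}
```

It's still a suggestion to the model rather than a hard constraint, so expect it to drift sometimes.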

@kevinpiac thanks! I didn’t know the max was 8000 chars, will try to mess around with the prompts

I tried describing the expected message flow in a lot of detail, but ChatGPT did not follow it.

For anyone still struggling with this, I put down my two cents on the topic in this post.

The TL;DR is that combining a traditional bot with an LLM gives you more control. It's not perfect, but it's what has worked best so far, IMO.
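A minimal sketch of that hybrid idea (all names here are hypothetical, not from any real plugin): the server keeps a small state machine over the profile fields and tells the model which single question to ask next, instead of leaving the ordering up to the LLM.

```python
# Sketch: the server, not the model, decides which profile question
# comes next. The plugin endpoint is called after each user answer and
# returns either the next question for ChatGPT to relay, or the
# completed profile.

REQUIRED_FIELDS = [
    ("role", "What kind of job are you looking for?"),
    ("location", "Where would you like to work?"),
    ("salary", "What salary range are you targeting?"),
]

def next_step(profile: dict) -> dict:
    """Return the next question to ask, or the finished profile."""
    for field, question in REQUIRED_FIELDS:
        if field not in profile:
            # Instruct the model to ask exactly this one question.
            return {"action": "ask", "field": field, "question": question}
    return {"action": "done", "profile": profile}

# Simulated back-and-forth:
profile = {}
step = next_step(profile)            # first call asks for "role"
profile[step["field"]] = "Data Engineer"
step = next_step(profile)            # next call asks for "location"
```

Since the endpoint only ever surfaces one missing field at a time, the model has nothing to bombard the user with, regardless of how many params the API supports overall.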