Prompt chaining with the Assistants API

Hi All,

I’d like to run a chain of prompts based on user input. They depend on files in my assistant’s storage.

What would be the best approach for this?

Thanks,


Welcome back :)

Can you explain a little more what you mean by that?


Thanks!

So to simplify the example I have a fruits chatbot. The user can select a fruit to create. Every fruit has a series of prompts like:

  1. Describe the skin
  2. Describe the color
  3. Describe the fruit.

Etc.

So when the user types "Apple", my series of prompts is run.

This takes little creativity to implement:

system:

You are a fruits AI. The user can select a produce item to query. Every fruit input then has a series of prompts you must answer briefly, without extra chat. Output as markdown list:

- Describe the skin
- Describe the color
- Describe the fruit.

user:

Apple

assistant output:

  • Describe the skin: Smooth and thin

  • Describe the color: Varies; commonly red, green, or yellow

  • Describe the fruit: Crunchy and juicy, with a balance of sweetness and acidity

From there you can refine what it is you actually want as output. You don’t need a “series of prompts” for a multi-step output. A JSON schema specification can produce an output you can reliably parse with code.
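As a minimal sketch of that last point, here is a single Chat Completions call with a JSON schema response format. It assumes an SDK and model version that support structured outputs; the model name is a placeholder, and the field names just mirror the fruit example above:

```python
from openai import OpenAI
import json

client = OpenAI()

# One call, structured output instead of a chain of prompts.
# "gpt-4o-mini" is a placeholder model name.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a fruits AI. Answer briefly, without extra chat."},
        {"role": "user", "content": "Apple"},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "fruit_description",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "skin": {"type": "string"},
                    "color": {"type": "string"},
                    "fruit": {"type": "string"},
                },
                "required": ["skin", "color", "fruit"],
                "additionalProperties": False,
            },
        },
    },
)

# The message content is a JSON string matching the schema, so it parses reliably.
fruit = json.loads(response.choices[0].message.content)
print(fruit["skin"], fruit["color"], fruit["fruit"])
```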

Thanks. But this is a very simple example. My prompts are too complex to get any meaningful response in a single call. They need to be broken up into several prompts.

You could, in code, keep a list of system prompts and iterate through them, sending the same user message with each one to collect a list of responses.
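A rough sketch of that loop, again with a placeholder model name and placeholder per-step instructions, might look like this:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical per-step system prompts; the user message stays the same for each call.
step_prompts = [
    "You are a fruits AI. Briefly describe only the skin of the fruit the user names.",
    "You are a fruits AI. Briefly describe only the color of the fruit the user names.",
    "You are a fruits AI. Briefly describe only the taste and texture of the fruit the user names.",
]

user_message = "Apple"
answers = []

for system_prompt in step_prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    answers.append(response.choices[0].message.content)

# One answer per step prompt, collected for the single user input.
print(answers)
```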

This wouldn’t really be a “chatbot” with memory, though. If you stitch all the answers together to fake a single past assistant response, the conversation starts to get awkward, because no single system instruction would ever have produced that combined reply. And letting the user discuss those answers against yet another run of many system instructions gets really weird, with many answers in play at once.