How to split complex tasks into simpler subtasks? Confused

Hello everyone,

I recently started experimenting with the OpenAI API for a meal planner web app project. While exploring prompt engineering best practices, I came across the recommendation to break down complex tasks into smaller subtasks. The documentation used a technical support scenario as an example, which I understood well. However, I’m unsure how to implement this approach via the API.

Does this mean I should divide the meal planning process into multiple, smaller API calls, chaining them based on the user’s responses? For example, the first API call could ask the user for their dietary preferences, and subsequent calls would construct follow-up questions based on their input. My understanding is that this approach might optimize token usage.

Hey there. Breaking down complex tasks into simpler ones is one of the “first commandments” of prompting. For example, to get ChatGPT to write a really big article, you need to break down the task into writing each section. Each section is a separate prompt. In doing so, all prompts use the same user input. This is how our generators are built, like the fairy tale generator at https://promptsideas.com/ai-tool/short-childrens-stories-generator. The user enters their data once, which is then used to customize the plot, and then a chain of requests is triggered through the API. Each link in the chain has its own prompt.
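The pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not a real implementation: `call_llm` is a hypothetical stand-in for an actual chat-completion API call, and the prompt templates are made up for the meal-planner example. The point is that the user enters their data once, and each link in the chain has its own prompt built from that same input.

```python
# Sketch of a prompt chain: one piece of user input feeds every
# step's prompt, and each step would be a separate API request.

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: in a real app this would call the
    # chat completions API and return the model's text.
    return f"[model output for: {prompt}]"

# One prompt per "section" of the overall task (illustrative templates).
SECTION_PROMPTS = [
    "List dietary restrictions implied by this request: {user_input}",
    "Draft a 7-day meal outline for this request: {user_input}",
    "Write a shopping list for this request: {user_input}",
]

def run_chain(user_input: str) -> list[str]:
    # The same user input is substituted into every prompt in the chain.
    return [call_llm(p.format(user_input=user_input)) for p in SECTION_PROMPTS]

results = run_chain("vegetarian meals for two, nut allergy")
```

Each element of `results` comes from its own request, so each step can be inspected, retried, or prompted differently without touching the others.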

2 Likes

Hey thank you for responding,

My only concern, and confusion I'd say, is when using the Playground. Does it mean I just put all the prompts in a single Playground window and organize them in steps, or do I make a separate API call for each step? Thank you.

Typically, you will start with the first task and get an answer. Then, when it's time to do the next one, you provide the model with the original prompt, the original answer, and then the next prompt. Keep going, appending to the history, until you're done.

And, yes, this means you're paying for "the same" tokens over and over, but that's how the technology works. It's inherent to it, and there's not much you can do about it.
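The append-to-history loop described above looks roughly like this. It's a sketch under the assumption that `call_llm` stands in for a real chat-completion request; the system message and prompts are invented for the meal-planner example. Note how the entire `messages` list is sent on every call, which is why the earlier tokens are billed again each time.

```python
# Sketch of carrying conversation history forward: each new request
# re-sends all prior prompts and answers plus the next prompt.

def call_llm(messages: list[dict]) -> str:
    # Hypothetical placeholder for a real chat-completion call that
    # receives the full message history and returns the model's reply.
    return f"[answer to {len(messages)} messages]"

def run_steps(prompts: list[str]) -> list[dict]:
    messages = [{"role": "system", "content": "You are a meal planner."}]
    for prompt in prompts:
        messages.append({"role": "user", "content": prompt})
        answer = call_llm(messages)  # the whole history goes out each time
        messages.append({"role": "assistant", "content": answer})
    return messages

history = run_steps(["Ask about dietary preferences.", "Suggest breakfasts."])
```

After two steps the history holds the system message plus two user/assistant pairs, and that whole list would be the payload of the next request.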

2 Likes

So if my understanding is correct, in the chat Playground I dump all the prompts, lol. Do you have any article or video demonstrating a working example? I couldn't find any video!

You need to develop a software environment that inserts the output received into the next prompt in the chain and sends it in a new request to ChatGPT via the API.
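That "software environment" can be as small as a loop that feeds each step's output into the next step's prompt. The sketch below assumes a hypothetical `call_llm` function in place of a real API request, and the `{previous}` placeholder in each template is just one possible convention for where the prior output gets inserted.

```python
# Sketch of output-to-input chaining: each step's result is inserted
# into the next prompt template and sent as a new request.

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: a real version would send `prompt`
    # to the API and return the model's text.
    return "OUTPUT(" + prompt + ")"

def pipeline(user_input: str, templates: list[str]) -> str:
    result = user_input
    for template in templates:
        # Insert the previous output into the next prompt in the chain.
        result = call_llm(template.format(previous=result))
    return result

final = pipeline("vegan dinners", ["Outline: {previous}", "Expand: {previous}"])
```

Because each call wraps the previous result, the final value shows the whole chain nested inside itself, which makes it easy to verify the wiring before swapping in real API calls.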