Hi all!
I fear this is going to be a somewhat amateur question.
Using GPT-4, I've built a fairly standardised prompt chain that turns a couple of inputs into a sophisticated final output, and I'd like to automate it with the OpenAI API instead of manually running the same 5 prompts in sequence each time. I went with prompt chaining because combining all of the individual components (which feed into the final output) into a single prompt does not go down well with GPT.
I have not yet used the OpenAI API, as I'm not from a technical background. I'd be really grateful if anyone could help me with the following questions:
- Can I recreate this using the API? Will the system I build have a different 'memory' from GPT-4 in ChatGPT, where the output of each step in the chain stays in the same chat (and could that lead to a different final output)? I've tried to sketch what I mean just after this list.
- What is the most efficient way to build this?
- From what I’ve explained, if you’ve seen this approach before I’d appreciate any tips/advice for considerations I may not have thought of yet.
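
To make the question more concrete, here's a rough sketch of what I imagine the chain might look like in Python, pieced together from the chat completions examples in the docs. The prompts, model name, and function are just placeholders, not my actual chain, and I'd welcome corrections if I've misunderstood how the message history works:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder prompts -- I'd swap in my real 5-step chain here.
CHAIN_PROMPTS = [
    "Step 1: summarise the two inputs below.\n\n{inputs}",
    "Step 2: extract the key themes from that summary.",
    "Step 3: expand each theme into a short paragraph.",
    "Step 4: combine the paragraphs into a single draft.",
    "Step 5: polish the draft into the final output.",
]

def run_chain(inputs: str, model: str = "gpt-4") -> str:
    # As I understand it, each API call is stateless, so the 'memory'
    # is just this list of messages that grows with every step and is
    # re-sent in full on each call.
    messages = [{"role": "system", "content": "You are a helpful assistant."}]

    reply = ""
    for i, prompt in enumerate(CHAIN_PROMPTS):
        content = prompt.format(inputs=inputs) if i == 0 else prompt
        messages.append({"role": "user", "content": content})

        response = client.chat.completions.create(model=model, messages=messages)
        reply = response.choices[0].message.content

        # Append the model's answer so the next step can see it.
        messages.append({"role": "assistant", "content": reply})

    return reply  # output of the final step

if __name__ == "__main__":
    print(run_chain("Input A: ...\nInput B: ..."))
```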
Thanks everyone, Happy New Year!