Hello! I’m hoping to use the OpenAI chat API to solve a very specific problem. For the LLM to get it right, I need to break a complex question about a short text input (one or two sentences) into six or seven simple, sequential questions, some of which build on previous answers. Besides the final answer, I also need to extract the responses to the individual intermediate questions, which I process later for other purposes. To be efficient, and to avoid spending tokens, I was hoping to write the entire interaction as a single completion call; otherwise I’d have to feed the API my original input (context) six or seven times, plus all previous responses, for each separate call, creating huge duplication. My idea was to put the input into a system message along with my instructions, then ask each question in order as a separate “user” message, and then extract the responses. Is that possible?
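For reference, here is a minimal sketch of the single-call structure I have in mind (the input text, questions, and model name are placeholders; I’m using the openai Python SDK’s chat message format):

```python
# Sketch of the single-call structure I'm attempting.
# The system message carries my instructions plus the short input text;
# each question is its own "user" message, asked in order.
input_text = "One or two sentences of input go here."  # placeholder

questions = [
    "Question 1 about the input?",
    "Question 2, building on the answer to question 1?",
    # ... up to six or seven questions
]

messages = [{
    "role": "system",
    "content": f"Answer each question about this text in turn.\nText: {input_text}",
}]
messages.extend({"role": "user", "content": q} for q in questions)

# The intent is then a single call along these lines (model name is a placeholder):
#   response = client.chat.completions.create(model="gpt-4o", messages=messages)
```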
My current code returns a plausible answer to the final “user” message, but the line:
print(response.choices[0].message.content[question_number])
just gives me a single character of the final answer (since `content` is one string, indexing it returns a character), not the answers to the specific intermediate questions as I’d like. I also can’t tell from the final output whether the LLM is taking my intermediate messages into account at all when answering the final one.
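One workaround I’ve considered is instructing the model to label each answer (e.g. “A1:”, “A2:”, …) and then splitting the single returned string myself. A sketch of that parsing, assuming the model cooperates with the labeling (the `raw` string is a made-up stand-in for `response.choices[0].message.content`):

```python
import re

# Hypothetical single response string, assuming the model was instructed
# to prefix each answer with "A<n>:" on its own line.
raw = "A1: The subject is X.\nA2: Yes, because of A1.\nA3: Final conclusion."

# Split into per-question answers keyed by question number.
answers = {int(n): text.strip()
           for n, text in re.findall(r"A(\d+):\s*(.*)", raw)}

print(answers[2])  # answer to the second intermediate question
```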
Is what I’m trying to do possible, and if so, how do I build it?