Yes, all of what you’re describing you can do, but you need to be creative with your prompt engineering. Prompt engineering in this context is closer to traditional engineering, and not just “writing good chat prompts”.
What I mean is, you need to programmatically look for certain conditions thru which you then augment your response JSON you’re returning to the model after a function call. Say for instance you have a function that does some work, and then in the event that function also returns a “some_flag = true” boolean, you need to first check that function outcome yourself in the backend middleware layer you’ve built that is brokering your function calls. Then, in that instance, you need to inject into the response JSON you’re sending back to the OpenAI API a property called “llm_instruction” or something, and have it literally be an instruction string. “This message is for you, the LLM. Please follow-up with another request sending XYZ to the some_additiona_endpoint inside the some_argument field” (and you can even inject what data you want passed).
So in this example, you’re engineering additional follow-up behavior by injecting your own instructions into the interaction flow happening between the end user, and the LLM.
This is the heart of the “prompt injection” debate. Used unethically, this can have profoundly negative and nefarious results. But you, the ethical engineer, will build a reputation and relationship with your users that is founded on ethics and transparency.y Used ethically, this kind of prompt engineering is how highly autonomous and useful AI agent systems are going to transform the world. You will always have well documented in your OpenAPI specification that is also well documented in your GPT instructions the exact instruction parameters you will periodically use, and you will never use prompt injections in subversive or corrosive ways.
And when applying this technique ethically, you can build incredible experiences like WebGPT🤖 – Every time you see multiple actions chained back-to-back like this, it’s because we’re very effectively and ethically using these techniques to direct and instruct multi-step, multi-part complex and advanced behaviors that align with the intent and interest of the end user.