I created a test GPT that gets instructions on what to do next from an API. So, once the user says “Go”, the GPT calls the API, the API says “Print Step 1 and then call me again”, and so on. And I have it stop after four calls.
I figure I could have a list of tasks that the API asks the GPT to execute. For starters, I’m going to have it summarize a list of YouTube transcripts that I have stored on my server. But I’m trying to think of more interesting things for it to do.
I know there is a process timeout but I still think this might be useful for short tasks. AND, I can now get the GPT to do multiple things at once and not hit my cap as quickly, hehe.
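In case it helps anyone picture the loop, here's a rough sketch of what an endpoint like that could look like (Flask here; the route, step text, and counter are purely illustrative, not my actual setup):

```python
# Minimal sketch of a "next instruction" endpoint. The GPT calls this after
# each step; the server hands back the next instruction, and the GPT stops
# once it is told there are no more steps.
from flask import Flask, jsonify

app = Flask(__name__)

STEPS = [
    "Print Step 1 and then call me again.",
    "Print Step 2 and then call me again.",
    "Print Step 3 and then call me again.",
    "Print Step 4. You are done; do not call me again.",
]
call_count = 0  # naive global counter; a real version would track per conversation


@app.route("/next-instruction", methods=["GET"])
def next_instruction():
    global call_count
    if call_count >= len(STEPS):
        return jsonify({"instruction": "Stop. No more steps."})
    instruction = STEPS[call_count]
    call_count += 1
    return jsonify({"instruction": instruction})


if __name__ == "__main__":
    app.run(port=8000)
```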
There’s a kernel of something very interesting here that I think you should pursue: prompt-chaining.
You could build several prompt-chains through your API, which users would then be able to invoke with a simple command.
Normally, outside of using the OpenAI APIs to build your own prompt-chaining workflow, you would either need to chain prompts manually or fill up the custom instructions with the prompt-chaining instructions.
Having them pulled either from an API or via the built-in Custom GPT RAG tool creates the opportunity to spread this very powerful technique to many more users, most of whom know little about advanced prompting.
You’re basically initiating a conversation between the GPT and your API, which also lets you build any kind of additional processing on the backend, with the Custom GPT acting as a simple mediator/agent between the end user and the API.
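Purely as a sketch (the chain names and prompts here are made up), the backend could hold a few named prompt chains and serve them one step at a time, so the user only ever has to type the chain name:

```python
# Hypothetical store of named prompt chains. The user types a simple command
# like "summarize"; the GPT then asks the API for step 0, step 1, ... until
# the chain reports it is finished.
PROMPT_CHAINS = {
    "summarize": [
        "Fetch the next transcript and list its main topics.",
        "Write a one-paragraph summary of each topic.",
        "Combine the paragraphs into a single executive summary.",
    ],
    "brainstorm": [
        "List ten ideas related to the user's topic.",
        "Pick the three strongest ideas and explain why.",
        "Expand each of the three into a short action plan.",
    ],
}


def next_prompt(chain_name: str, step: int) -> str:
    """Return the prompt for the given chain and step, or a stop message."""
    chain = PROMPT_CHAINS.get(chain_name, [])
    if step < len(chain):
        return chain[step]
    return "Chain complete. Do not call the API again."
```

Adding a new chain is then just another entry in that dictionary, with no change to the GPT's custom instructions.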
Don’t have anything to add really - it’s an interesting idea that may have potential for API developers. I’ll probably have to look into it a bit as well.
Now that OpenAI has fixed their API calling service, I have updated this GPT Agent to actually send useful commands, get replies, and then call the API for another command.
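Roughly, the back-and-forth now looks like this (again just a sketch; the route and field names are hypothetical, not the real setup):

```python
# Sketch of the two-way loop: the GPT POSTs the result of the previous
# command and receives the next command in the response.
from flask import Flask, jsonify, request

app = Flask(__name__)

COMMANDS = [
    "Summarize transcript 1.",
    "Summarize transcript 2.",
    "Combine both summaries into one short report.",
]
results = []  # whatever the GPT sends back for each completed command


@app.route("/command", methods=["POST"])
def command():
    payload = request.get_json(silent=True) or {}
    if "result" in payload:
        results.append(payload["result"])  # keep the GPT's reply for later use
    step = len(results)
    if step >= len(COMMANDS):
        return jsonify({"command": "Done. Do not call the API again."})
    return jsonify({"command": COMMANDS[step]})
```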