We want to process the entire conversation in an action API (the messages created by GPT and the user). We also want to enforce that the action is called after the GPT creates some initial response. This initial response should also be included in the conversation sent to the action API.
A workaround we’re relying on is telling GPT to repeat the conversation in an action call, filling out the parameters with the current conversation. This has led to several problems:
- GPT provides incomplete messages, or hallucinates words and whole messages
- It adds a lot of latency, because GPT has to generate the entire JSON blob being sent to the actions API
It’s also finicky getting GPT to call the action after responding. We’ve only been able to get it to do that a few times.
user - “hi gpt I want to find restaurants near me in San Francisco”
gpt - “sure! I’ll take a look!”
gpt calls action with [“user”: “hi gpt […]”, “ai”: “sure! I’ll take a look!”]
gpt - “found restaurants […]”
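For reference, this is a minimal sketch of what the action’s schema might look like for the workaround above, assuming the OpenAI function/tool definition format (the `log_conversation` name and parameter layout are my assumptions, not the actual schema):

```python
# Hypothetical tool schema asking GPT to repeat the conversation back
# as structured arguments. The name and field names are made up.
log_conversation_tool = {
    "type": "function",
    "function": {
        "name": "log_conversation",  # hypothetical name
        "description": (
            "Call this after your initial reply. Repeat the entire "
            "conversation so far, including your own initial response."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "messages": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "role": {"type": "string", "enum": ["user", "ai"]},
                            "content": {"type": "string"},
                        },
                        "required": ["role", "content"],
                    },
                }
            },
            "required": ["messages"],
        },
    },
}
```

Because the model has to regenerate that whole `messages` array token by token, the latency and hallucination problems listed above follow directly from this shape.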
Not an expert, but it does look like that’s pretty close — possibly even just some tuning of the prompts you’ve got. I’ve had good luck getting GPT-4 to summarize things if you don’t need the messages verbatim. I do something similar, returning messages via a function call for processing on my end, and summarizing seems to really get the gist and phrase it well. It also forces the model to think about messages and conversations and all the rest, which seems to help it get things right when calling functions.
One of the applications for this was cleaning up code. Before all this new functionality (and the way better code coming out!), I was extracting code blocks from returned messages and sending them to GPT-3 with: “Your only role is to confirm this is valid Python. Use the function available to you to save only valid python to a file with a meaningful filename.” The arguments were file_name and file_contents. This blew my mind — I can’t think of a single instance where it failed. Even crazier: I did this because I was having problems with stray characters and indents I was trying to get it to sort out, but it didn’t stop there. Turns out that if there are any errors in the script, it fixes them — everything from turning hallucinations into runnable code all the way to restructuring the whole thing in a better way!
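In case it helps, here’s a rough sketch of that setup, assuming the Chat Completions function-calling format — the `save_python_file` name and the handler on my end are assumptions for illustration, not the exact code:

```python
import ast
import json

# The system prompt from the workflow described above.
SYSTEM_PROMPT = (
    "Your only role is to confirm this is valid Python. Use the function "
    "available to you to save only valid python to a file with a "
    "meaningful filename."
)

# Hypothetical function schema with the two arguments mentioned.
save_tool = {
    "type": "function",
    "function": {
        "name": "save_python_file",  # hypothetical name
        "description": "Save valid Python source to a file.",
        "parameters": {
            "type": "object",
            "properties": {
                "file_name": {"type": "string"},
                "file_contents": {"type": "string"},
            },
            "required": ["file_name", "file_contents"],
        },
    },
}

def handle_save_call(arguments_json: str) -> str:
    """Handle the model's function call on my end: double-check the code
    actually parses before writing it to disk, then return the path."""
    args = json.loads(arguments_json)
    ast.parse(args["file_contents"])  # raises SyntaxError if not valid Python
    with open(args["file_name"], "w") as f:
        f.write(args["file_contents"])
    return args["file_name"]
```

The `ast.parse` check is a local belt-and-braces step; in practice the model rarely handed back anything that failed it.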
I think part of the reason it worked so well is that the entire conversation was always just a system message, the code to look at, and a single response.