Enforce that an action is called every time a message is sent

I want to enforce that an action is called every time a message is sent by my GPT. The request is a blank one: “POST /metrics”. I want to do this so I can track finer-grained stats on my GPT without degrading its performance. I’ve tried many different prompts, and there are many edge cases where it doesn’t make the action request.
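For context, the backend side of such an endpoint can be trivial, since the request carries no body. Here is a minimal sketch using Python’s standard library (the port, the in-memory counter, and the 204 status are illustrative assumptions, not the actual setup described in this thread):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

hits = []  # naive in-memory tally, for illustration only


class MetricsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/metrics":
            hits.append(1)           # record that the GPT sent a message
            self.send_response(204)  # blank request in, blank response out
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence default per-request logging


def serve(port=8000):
    """Start the metrics server on a background thread and return it."""
    server = HTTPServer(("127.0.0.1", port), MetricsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Keeping the endpoint body-less and returning 204 keeps the round trip as cheap as possible, which matters if the goal is tracking without slowing the GPT down.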

For example, if a user responds “okay”, the GPT is much more likely to skip sending the action.


Did you ever find a fix for this? I’m having a similar problem right now.

Some updates and more detail.

I have an API I created that I point to in the Actions section of the GPT creator. One of the endpoints exposed in the schema, call it “runEveryTime”, is meant to flick a widget every time the GPT produces a reply.
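An action like this is declared in the OpenAPI schema pasted into the Actions editor. The fragment below is a sketch, not the actual schema from this thread: the server URL, title, and response code are placeholders, and the GPT builder keys off the `operationId`, `summary`, and `description` fields when deciding what the action is for:

```yaml
openapi: 3.1.0
info:
  title: Widget Tracker        # illustrative name
  version: "1.0"
servers:
  - url: https://example.com   # placeholder; use your API's domain
paths:
  /runEveryTime:
    post:
      operationId: runEveryTime
      summary: Flick the widget
      description: >
        Call this endpoint once at the start of every reply,
        before composing the answer for the user.
      responses:
        "204":
          description: Widget flicked; no body.
```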

Unfortunately, the “thing” the GPT does at the beginning of a message (when it does run) isn’t always even the specified action/endpoint. Often a “Using unknown plugin…” box pops up (and never disappears, even after the GPT produces the rest of its answer) rather than the desired “Calling HTTP endpoint” action.

Is there some way I need to refer to the endpoint in the instructions/schema that I haven’t figured out? Or do I need the ai-plugin.json file, as with plugins? How does a GPT decide when to run an action? Why would it produce this “Unknown Plugin” problem? Is there something potentially wrong with my API?

Have you tried reversing the logic by prompting it to call the API BEFORE responding, rather than AFTER receiving the user’s input?

Something like, ‘In order for your responses to be more relevant and the user to be satisfied with your operation, you must systematically call the action XYZ before constructing your response. The action must be called with each interaction with the user.’

Regarding the plugin message, it’s really very strange. Maybe something in the format of the API/actions is off. In any case, in my experience there is no need for an ai-plugin.json file for a GPT.

Yes - that’s certainly a possible approach. Honestly the frequency of it trying to call the API isn’t as big of an issue for me right now.

You are correct that a GPT creator would never need to worry about the ai-plugin.json. However, while the schema (previously required by OpenAI to be hosted at the plugin domain) was migrated into the GPT interface, I wonder whether they deprecated the ai-plugin.json or whether it is still a requirement. It is a mystery to me.

Regardless, @PromptBreeders, how do you refer to your actions in your Instructions? Do you say “call action operationId”, or “call API at endpoint URL/endpoint”, or something else? I can’t tell how the GPT “sees” its available endpoints, or what text it uses to decide when it should run a given endpoint.


I’m using this in the instructions, and in the summary + description fields of the schema:

Full tutorial and sources available here: Building a GPT linked to a Breeb
