What ways are there to bypass the step where the GPT asks me to “confirm” or “deny” accessing the HTTP endpoint?
When you are on the page to talk with a specific GPT, you can click its name, open the privacy settings, and then adjust whether to always allow requests or to ask before sending requests to specific URLs.
Tried and it still pops up, is there another way?
No. This is a safety mechanism for end users; it should not be bypassed.
Create an action which links to a Make.com webhook, have the Make.com flow call the API and retrieve the info, then pass it back to the GPT.
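For context, the Make.com scenario is really just acting as a relay. If you'd rather self-host that relay, a rough equivalent could look like the sketch below (a sketch only: Flask stands in for the Make.com flow, and the upstream URL is a placeholder, not a real endpoint):

```python
# Minimal sketch of the relay pattern. Flask stands in for the
# Make.com scenario; the upstream API URL is a placeholder.
import requests
from flask import Flask, jsonify

app = Flask(__name__)

UPSTREAM_URL = "https://api.example.com/info"  # hypothetical endpoint


@app.route("/relay", methods=["GET"])
def relay():
    # The GPT action calls this one URL; the real API request happens
    # here, server-side, and only the result goes back to the GPT.
    resp = requests.get(UPSTREAM_URL, timeout=10)
    resp.raise_for_status()
    return jsonify(resp.json())


if __name__ == "__main__":
    app.run(port=8000)
```

The GPT action then points at the /relay URL instead of the real API, so the user only ever sees the relay's domain rather than the specific API request.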
Just more steps for the same result: if it’s asking every time now, it’ll still ask every time then.
Ahh, OK, I assumed you didn’t want the user to see the specific API request.
If you want to block all notifications, I don’t think you can use custom GPTs.
You will need to create a basic app interface/chat bubble on your own website and run the backend as an OpenAI Assistant or similar.
The OP apparently has an action which is deemed consequential by the model, so it is checking with the user each time the action is invoked; they are seeking to bypass this.
Regardless of where the action is implemented, it will remain a consequential action and require approval.
If the OP runs a UI on their own website with their own calls to an AI model on the backend, they will not need to show anything to the user they do not want the user to see. Basically, a simple app.
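To make that concrete, here is a minimal sketch of what such a backend could look like, using the OpenAI chat completions API with function calling (rather than the Assistants API mentioned above, for brevity). The model name, the upstream URL, and the fetch_info helper are all assumptions for illustration, not a prescribed implementation:

```python
# Sketch of the "own backend" approach: the server runs the model,
# executes any tool calls itself, and only returns the final answer.
# Model name, URL, and fetch_info are placeholders for illustration.
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def fetch_info() -> str:
    # The "consequential" HTTP call, made server-side; no confirm/deny
    # dialog is ever shown to the end user.
    resp = requests.get("https://api.example.com/info", timeout=10)  # placeholder
    resp.raise_for_status()
    return resp.text


TOOLS = [{
    "type": "function",
    "function": {
        "name": "fetch_info",
        "description": "Fetch data from the upstream HTTP endpoint.",
        "parameters": {"type": "object", "properties": {}},
    },
}]


def answer(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    response = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=TOOLS
    )
    msg = response.choices[0].message
    if msg.tool_calls:
        # Execute the tool call immediately -- this is the step where a
        # custom GPT would instead pause and ask the user to confirm.
        messages.append(msg)
        for call in msg.tool_calls:
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": fetch_info(),
            })
        response = client.chat.completions.create(
            model="gpt-4o", messages=messages
        )
        msg = response.choices[0].message
    return msg.content


if __name__ == "__main__":
    print(answer("What does the endpoint say right now?"))
```

Because the backend executes the tool call itself, there is no point at which a confirmation dialog could be shown; that control, and the responsibility that comes with it, sits entirely with the app developer.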
This is the only solution for total control, although probably not the solution the OP wants.
You’re answering a question the OP isn’t asking.
It’s as if the OP had asked,
Hey, how do I get my car to stop beeping at me when I don’t wear my seatbelt?
And your answer is,
Build your own car.
The answer to the OP’s question has already been given: you cannot bypass a GPT asking for confirmation for a consequential action.