GPT Actions: Bearer token authorization

I followed the OpenAPI guidelines on bearer token authorization, or rather on how to define it in the schema, but for some reason the GPT is unable to pass the access token (supplied in an uploaded txt file referenced in the instructions) to requests at runtime, so I always get a 401… Thanks for any response



I’ve been running into this issue as well. Any luck with resolving this on your end?

You need to run it without authentication, through a serverless function.

You are presumably not suggesting including the API key/token in the specification itself, because that would be a huge security issue.
The biggest advantage of the current implementation is that OpenAI stores the API keys encrypted on their side, and they provide audited documentation regarding safety and security.

When you say 'serverless function' in the context of building GPTs via the ChatGPT interface, does your service offer the same level of security?

Regarding the original question:
Often these cases, where bearer auth doesn't work as expected, can be resolved by setting the authentication method to 'Custom' and then defining the matching header name in the OpenAPI specification.
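For illustration, a custom API-key header might be declared like this in the spec. This is only a sketch: the header name `X-Api-Key` and the scheme name `ApiKeyAuth` are example values, and the header name has to match whatever you configure under 'Custom' in the GPT's authentication settings:

```yaml
components:
  securitySchemes:
    ApiKeyAuth:
      type: apiKey       # API-key style auth sent in a request header
      in: header
      name: X-Api-Key    # example; must match the GPT's custom header name
security:
  - ApiKeyAuth: []       # apply the scheme to all operations
```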


You expose a public, key-free endpoint on the serverless function, and that function handles the auth with the endpoint you actually want to reach. You basically add one extra layer. Since I am building private GPTs, I won't be subjected to some spam attack on my public serverless function endpoint that way.
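As a rough sketch of that extra layer, assuming a hypothetical upstream API at `https://api.example.com` and the secret stored in an `UPSTREAM_API_KEY` environment variable, the serverless function could attach the bearer token server-side so the GPT-facing endpoint itself needs no key:

```python
import os
import urllib.request
from typing import Optional

UPSTREAM_BASE = "https://api.example.com"  # hypothetical upstream API


def build_upstream_request(path: str, body: Optional[bytes] = None) -> urllib.request.Request:
    """Build the forwarded request, injecting the secret bearer token
    from the environment so it never appears in the public endpoint."""
    token = os.environ.get("UPSTREAM_API_KEY", "")
    req = urllib.request.Request(UPSTREAM_BASE + path, data=body)
    req.add_header("Authorization", f"Bearer {token}")
    return req
```

The GPT action then calls the function's own public URL, and only the function ever sees the real credential.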


I see where you are coming from, because sometimes it's a real puzzle to call an API via a Custom action.

Which implies that anyone who can access the GPT today, but maybe not tomorrow, is still able to use the functionality that is publicly exposed without authentication.

Would it not make sense to add OAuth to ensure that only users with the appropriate rights can consume the API? At the end of the day, it's a priority to protect one's access to connected services so as not to get banned or incur massive costs.

If it's a personalized GPT for a non-technical user base, they are glad it works and wouldn't even have a clue how to abuse it.

Look at it from this perspective: even non-technical users know how to log in to services and why that's important.

I'd say that, with regard to the OP's question, a possible solution is to implement a second layer where the GPT contacts your own server, which then connects to the API as needed.
As you are aware, this can bring a whole lot of advantages to the table.
Just make sure your server is properly secured.
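A minimal sketch of such a check on that second-layer server, assuming a hypothetical table of per-user keys issued out of band (a stand-in for real OAuth or an identity provider):

```python
import hmac

# Hypothetical per-user keys; in production, prefer OAuth or a real
# identity provider over a static table like this.
USER_KEYS = {"alice": "key-alice", "bob": "key-bob"}


def is_authorized(user: str, presented_key: str) -> bool:
    """Reject unknown users and wrong keys before forwarding anything
    to the connected service."""
    expected = USER_KEYS.get(user)
    if expected is None:
        return False
    # Constant-time comparison to avoid leaking key prefixes via timing.
    return hmac.compare_digest(expected, presented_key)
```

Revoking a user's access is then just a matter of removing their entry, without touching the upstream credential.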