@danielsinewe this means your /token endpoint is returning HTML (or at least OpenAI thinks it is), but it should be returning a JSON payload with access_token, expires_in, etc.
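For reference, a standard OAuth2 token response (per RFC 6749) looks roughly like this; the values below are placeholders:

```json
{
  "access_token": "eyJhbGciOiJSUzI1NiIs...",
  "token_type": "Bearer",
  "expires_in": 3600,
  "refresh_token": "def502009d1c4a7b..."
}
```

It also needs to be served with a `Content-Type: application/json` header; if the endpoint returns an HTML error page, a login redirect, or the wrong content type, you'll likely see exactly this kind of failure.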
I have tried this approach, and my GPT is refusing to send the client_id in the request to my proxy endpoint. If I open the dev tools (F12) and watch it submit the client_id and client_secret to the GPT backend, I can see they're sent in the payload and also returned in the response, which implies they're being saved correctly. But when it then sends the request to my proxy authorize endpoint, it simply doesn't set client_id.
I don’t suppose anyone else has run into this issue?
What do you see from the GPT UI side? Do you see something like 'Using unknown plugin'? I get this almost every time I initiate a chat with the custom GPT. I have to try multiple times for it to connect to the action.
I ended up getting this working by nuking my GPT and recreating it, making sure I set up the OAuth details when I first created it. I haven't tried going back in and editing the GPT definition afterwards to see if that breaks it, but my GPT now uses Google authentication to authenticate the user with my API and then a JWT for ongoing user-based authentication. It's really cool.
Thanks for explaining in detail. I tried a similar approach, connecting to Azure B2C from a .NET API. The authentication flow invokes the authorize endpoint followed by the intermediate endpoint, but it never invokes the token endpoint. From the logs, I can see that the intermediate endpoint redirects to the correct OpenAI callback URL along with the code and state.

In the UI, the error message says "Missing access_token", even though the token endpoint was never called.

Response from the API:

    {
      "success": false,
      "error": "Missing access_token"
    }
I'm encountering the same issue with Auth0. Initially, when I set my server URL to match Auth0's authorize URL, the login worked, but that's of no use: the server URL pointed at Auth0 instead of my API, so I couldn't make any API calls. When I tried using TinyURL instead, I started receiving a 'missing access token' error, so authorization fails. Please help if anyone has a solution to this.
@getinference So what did you ultimately end up supplying as your auth URL, token URL, and scope? You mentioned utilizing a URL shortener, but that isn't apparent in the information you've shared, which suggests you moved on from that approach.
"securitySchemes": {
"BlizzardOAuth2": {
"type": "oauth2",
"flows": {
"clientCredentials": {
"authorizationUrl": "https://oauth.battle.net/authorize",
"tokenUrl": "https://oauth.battle.net/token",
"scopes": {
"wow.profile": "Access to a user's World of Warcraft characters"
}
}
}
}
When I save the OAuth metadata in the CustomGPT "Authorization" component, it indeed throws the error about the root domain needing to match. I feel I'm missing something simple. Does anything come to mind?
(To be clear: I realize that the URL in servers is us.api.blizzard.com and is likely the issue; I'm just not sure how to overcome it given the information in this thread so far.)
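One thing worth checking, separate from the domain mismatch: in OpenAPI 3.0 the clientCredentials flow only takes a tokenUrl, while authorizationUrl is only valid in the implicit and authorizationCode flows, and authorizationCode is the flow that actually sends a user through a browser redirect. A sketch of the same scheme declared that way, keeping the URLs and scope from the snippet above:

```json
"securitySchemes": {
  "BlizzardOAuth2": {
    "type": "oauth2",
    "flows": {
      "authorizationCode": {
        "authorizationUrl": "https://oauth.battle.net/authorize",
        "tokenUrl": "https://oauth.battle.net/token",
        "scopes": {
          "wow.profile": "Access to a user's World of Warcraft characters"
        }
      }
    }
  }
}
```

That alone won't satisfy the root-domain check, but it removes one more thing that can fail once the domains line up.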
Yes, and then the action editor modifies the callback URL without telling you that it did so. Rather annoying. I have not pinpointed exactly when the change occurs; I think if you edit the OpenAPI spec or the OAuth2 config, it regenerates a different redirect URI.
@knaledge I think what he may have meant is to take all the URLs you are using and run them through bit.ly or some similar service to generate https://bit.ly/ equivalents. Then, when you paste the bit.ly URLs into the OpenAPI spec, their domains are all bit.ly but they obviously redirect to the right places. I am surprised this worked for him, but it may be worth a try.
Please don’t use URL shorteners for critical and secure auth information—I doubt most of them will forward on the parameters you need anyway, as people in this thread are discovering. I believe in yesterday’s update OpenAI added support for multiple domains—here’s what the email says but it’s scant on details:
I agree about shorteners; I only considered it a purely experimental workaround.
Nonetheless, you can see the schema I'm using (and its content), and it's a no-go. I still get the "root domain" mismatch error. What do you suspect is necessary?
I'm not sure securitySchemes works right now. Have you tried putting that information inside the 'Authentication' part of the GPT Editor UI instead? (You can find it underneath your schema editor.)
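If you go that route, the values would map roughly as follows. These field labels are from memory of the GPT editor's OAuth dialog, so they may not match exactly, and the client ID and secret are whatever Blizzard issued for your application:

```
Authentication type:    OAuth
Client ID:              <Blizzard client ID>
Client Secret:          <Blizzard client secret>
Authorization URL:      https://oauth.battle.net/authorize
Token URL:              https://oauth.battle.net/token
Scope:                  wow.profile
Token Exchange Method:  Default (POST request)
```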